So what's going to happen in 3 years after these startup bros have left government, none of the frameworks they're using are supported any more, and nobody in the office that they parachuted into is trained to maintain whatever spaghetti they crapped out over three months of all-nighters? There's a reason we don't build critical infrastructure by handing it to some guy whose entire accomplishment is "working at Airbnb for 10 years".
You paint with too broad a brush! Having worked at Airbnb for 10 years would have been fine. The problem is DOGE was staffed by twenty-year-olds. How would that have worked? They started at Airbnb when they were 10?
DCR is cool, but I haven't seen anyone roll it out. I know it has to be enabled per-tenant in Okta and Azure (which nobody does), and I don't think Google Workspace supports it at all yet. It's a shame that OIDC spent so long, and gained so much market share, tied to OAuth client secrets, especially since classic OpenID had no such configuration step.
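For anyone who hasn't run into it, here's roughly what an RFC 7591 Dynamic Client Registration call looks like. The issuer URL below is made up, and the IdP has to actually have DCR enabled and advertise a `registration_endpoint` in its metadata:

```python
# Minimal sketch of RFC 7591 Dynamic Client Registration.
# The issuer URL is hypothetical; a real IdP must have DCR enabled
# (per-tenant in Okta/Azure) and expose a registration_endpoint.
import requests

ISSUER = "https://idp.example.com"  # hypothetical issuer

# Discover the registration endpoint from the OIDC metadata document.
meta = requests.get(f"{ISSUER}/.well-known/openid-configuration").json()
reg_endpoint = meta["registration_endpoint"]

# Register a new client; the IdP mints the client_id (and possibly a secret).
resp = requests.post(reg_endpoint, json={
    "client_name": "Example MCP client",
    "redirect_uris": ["https://client.example.org/callback"],
    "grant_types": ["authorization_code"],
    "token_endpoint_auth_method": "none",  # public client using PKCE
})
resp.raise_for_status()
print(resp.json()["client_id"])
```

The whole point of the per-tenant toggle is that without it, that POST simply gets rejected.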
This is because the MCP folks focus almost entirely on the client developer experience at the expense of implementability.
CIMD is a bit better and will supplant DCR, but overall there's yet to be a good flow here that supports management in enterprise use cases.
- I don't see an idempotency key in the request to authorize a charge; that might be a nice thing to add for people looking to build reliable systems on this (a rough sketch of what that could look like follows this list).
- How long are accessTokens valid? Forever? Do they become invalid if the subject metadata (firstName, lastName, email) changes?
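Here's a rough sketch of how an idempotency key could work on the charge-authorization call; the endpoint, headers, and field names are made up for illustration, not taken from the actual API:

```python
# Hypothetical sketch: retry-safe charge authorization with an idempotency key.
# The URL, headers, and fields are illustrative, not the real API.
import uuid
import requests

def authorize_charge(access_token: str, amount_cents: int) -> dict:
    # One key per logical charge; reusing it on retries lets the server
    # detect duplicates and return the original result instead of charging twice.
    idempotency_key = str(uuid.uuid4())
    resp = requests.post(
        "https://api.example.com/v1/charges/authorize",  # hypothetical endpoint
        headers={
            "Authorization": f"Bearer {access_token}",
            "Idempotency-Key": idempotency_key,
        },
        json={"amount_cents": amount_cents},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```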
This is a super cool idea, but extending net-30 terms to every customer of a B2C product seems pretty iffy: since you're deferring charging until the end of the month, you won't get most of the fraud signals from Stripe until then, and anything popular built on this system seems like it'd be pretty inundated with fraud. I would at least consider doing the charges more frequently (e.g., charge at the end of the month or every $50, whichever comes first) to put a better bound on how long you can go before finding out that someone gave you a stolen card.
We run Stripe Radar and 3-D Secure when a card is added (before first use), which filters out a lot of obvious fraud (and in many regions 3DS shifts chargeback liability from the merchant to the card issuer).
The balances are not settled just at the end of the month. Each customer has a "maximum owed limit", which starts low (currently 10 USD) and grows with successful payments (up to 30 USD currently). The customer is charged as soon as they hit that limit (with some grace to allow for continued use).
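Roughly, the trigger logic looks something like the sketch below. The starting and maximum limits match what's described above, but the growth rule and grace value here are illustrative, not the exact implementation:

```python
# Illustrative sketch of the max-owed-limit policy described above.
# The growth factor and grace amount are assumptions, not the real values.
from dataclasses import dataclass

START_LIMIT = 10.00   # USD, initial per-customer max-owed limit
MAX_LIMIT = 30.00     # USD, cap the limit can grow to
GRACE = 2.00          # USD of headroom while a triggered charge settles (assumed)

@dataclass
class Customer:
    balance_owed: float = 0.0
    max_owed_limit: float = START_LIMIT

def charge_card(customer: Customer, amount: float) -> None:
    """Placeholder for the payment-processor call."""
    ...

def on_usage(customer: Customer, new_charge: float) -> None:
    customer.balance_owed += new_charge
    if customer.balance_owed >= customer.max_owed_limit:
        charge_card(customer, customer.balance_owed)  # settle as soon as the limit is hit

def on_payment_success(customer: Customer, amount: float) -> None:
    customer.balance_owed -= amount
    # Successful payments earn more headroom, up to the cap (growth rule is assumed).
    customer.max_owed_limit = min(MAX_LIMIT, customer.max_owed_limit * 1.5)

def service_allowed(customer: Customer) -> bool:
    # A small grace amount above the limit keeps service running
    # while a triggered charge is still processing or being retried.
    return customer.balance_owed < customer.max_owed_limit + GRACE
```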
Idempotency keys are on the near-term roadmap. Access tokens do not currently expire; however, they can be revoked by the customer at any time.
I really like the growing maximum-owed-limit idea, and it would be really interesting to also let customers set an auto-payment threshold lower than the limit.
The idea being something like "charge at X € owed, but let it go up to N×X € if a payment fails before suspending service", where N scales with something like the number of paid invoices or even total past spend.
The primary objective of the max-owed limit is to cap per-customer risk.
If you are suggesting that the max-owed limit is actually N×X, then that would multiply worst-case exposure by N, which is undesirable.
If you are suggesting that we charge the customer when they owe X while their max-owed limit is N×X, this would be worse for the customer, since they would pay `N × (X × variable_rate_fee + fixed_rate_fee)` instead of `N×X×variable_rate_fee + fixed_rate_fee` in payment processing fees.
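For concreteness, with an assumed Stripe-like fee of 2.9% + $0.30 (not necessarily our actual rate), X = $10 and N = 3: three separate charges cost 3 × ($10 × 0.029 + $0.30) = $1.77 in fees, while one $30 charge costs $30 × 0.029 + $0.30 = $1.17, so splitting the charge costs the customer an extra (N − 1) × $0.30 = $0.60.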
If your payment processor takes a per-transaction fee (not all do) then yes, this is a slightly worse deal for the customer. However, I think this would still be a good choice to give them, even if many would probably not choose it in order to save a few cents.
If I burn 1 € per day and my max owed based on whatever risk assessment is 20 €, I can set my payment threshold to 15 €, meaning if a payment fails, I have 5 days to fix it and settle the debt before you suspend my access. If the trigger amount is the same as the max owed, I have zero time (well, presumably there is already some wiggle room for the time it takes to process the transaction).
I see what you mean — yes, this could be useful to some customers. We already implement a small grace amount above the max-owed limit that allows for continued service; your idea would essentially allow the customer to increase the default grace amount.
A better model I had in mind works like this: customers purchase tokens in any amount they choose. Companies then charge for their services using these tokens through the platform's APIs. At the end of each month, settlements are made based on the total token value. The smallest token unit could be as small as one-millionth of a dollar.
It’s similar to a digital wallet, but with no way to cash out: customers cannot exchange tokens back into money.
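A minimal sketch of the bookkeeping I have in mind, assuming the one-millionth-of-a-dollar unit is stored as integers to avoid floating-point drift (all names here are illustrative):

```python
# Illustrative token ledger using integer micro-dollars (1 token = $0.000001).
# Names and structure are made up to show the bookkeeping, not a real API.
from collections import defaultdict

MICRO = 1_000_000  # tokens per dollar

class TokenLedger:
    def __init__(self):
        self.balances = defaultdict(int)         # customer_id -> tokens held
        self.merchant_totals = defaultdict(int)  # merchant_id -> tokens earned this month

    def purchase(self, customer_id: str, dollars: float) -> None:
        """Customer buys tokens up front (no cash-out path exists)."""
        self.balances[customer_id] += round(dollars * MICRO)

    def charge(self, customer_id: str, merchant_id: str, tokens: int) -> None:
        """A merchant charges the customer a (possibly tiny) number of tokens."""
        if self.balances[customer_id] < tokens:
            raise ValueError("insufficient token balance")
        self.balances[customer_id] -= tokens
        self.merchant_totals[merchant_id] += tokens

    def monthly_settlement(self) -> dict[str, float]:
        """At month end, pay each merchant the dollar value of the tokens they earned."""
        payouts = {m: t / MICRO for m, t in self.merchant_totals.items()}
        self.merchant_totals.clear()
        return payouts
```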
That approach generally doesn't work from a legal perspective: prepaid tokens are often treated as e-money (especially if they're not for the company's own products or services), and in many jurisdictions, holding value for users requires an e-money/money transmitter license.
In the EU, this depends mainly on the question of exchange/interchangeability: if you sell them as vouchers and do not allow redemption or payout back into cash, it's not a problem.
The key legal issue is interchangeability. Single-merchant vouchers are generally acceptable. If a voucher can be used across multiple merchants, it's often treated as e-money in the EU. Not being able to use funds across multiple merchants would significantly reduce the value for customers, as they would no longer be able to share payment processing fees across merchants.
I kind of expected this, though I didn't want it to be this way :( ... it seems governments will go to any extent to prevent the creation of alternative sources of value other than the ones they can fully control... mostly for good, bad at other times...
Surely you want any company that offers a prepaid credit card to be regulated, so that you can be extremely sure they won't just take your money and run.
And what really is the difference between a prepaid credit card and prepaid credits you can use at a large selection of tech companies? (Legally, there is no distinction.)
The problem is the fixed per-transaction part that comes along with the percentage part of the commission that payment processors take.
Spending $100 in one-dollar transactions means I end up paying roughly an extra $30 in fixed fees (at ~$0.30 per transaction), on top of the percentage charges.
A token-based system still takes the percentage part (as expected), but the fixed part is paid just once.
It opens up a per-request charging model across service providers.
This benefits both sides: consumers for obvious reasons, and sellers because customers no longer have to "commit" to a subscription or a large up-front charge for a service they may or may not use or continue.
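Back-of-the-envelope, assuming an illustrative $0.30 + 2.9% card fee (your processor's numbers may differ):

```python
# Compare 100 separate $1 card charges against one $100 token top-up,
# using assumed Stripe-like fees (2.9% + $0.30 per transaction).
fixed_fee, pct = 0.30, 0.029
spend, tx_size = 100.00, 1.00

per_request_cards = (spend / tx_size) * (tx_size * pct + fixed_fee)  # 100 charges of $1
token_topup_once = spend * pct + fixed_fee                           # one charge of $100

print(round(per_request_cards, 2))  # 32.90 in fees (~$30 of it is the fixed part)
print(round(token_topup_once, 2))   # 3.20 in fees
```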
> anything popular that used this system seems like it'd be pretty inundated with fraud
I coined "micropayments means microfraud"; I would expect this to run into something like the AWS mystery-bill problem, but on a tiny scale. If you can charge customers without their confirmation, it's easy to run up bills. And of course the amounts are so tiny you can't afford dispute resolution.
Yes, merchant abuse is a risk. What we do and plan to do:
- Each merchant requires an OAuth grant, and customers can revoke it at any time.
- A customer ledger shows what, when, and how much each merchant charged. This can be shown in the customer's dashboard and monthly statement emails.
- Customers have account-level spending caps to limit exposure. We will add per-merchant caps (a rough sketch of such a check is below this list).
- If patterns look off or we get complaints, we can pause new charges and review.
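For illustration only (not our actual code), a cap check in front of charge authorization could look like this:

```python
# Illustrative pre-authorization check combining account-level and
# per-merchant spending caps; not the actual implementation.
from dataclasses import dataclass, field

@dataclass
class SpendingCaps:
    account_monthly_cap: float = 100.00          # USD per month across all merchants (assumed)
    per_merchant_cap: dict[str, float] = field(default_factory=dict)
    spent_total: float = 0.0
    spent_by_merchant: dict[str, float] = field(default_factory=dict)

    def allow_charge(self, merchant_id: str, amount: float) -> bool:
        # Reject anything that would push past the account-level cap.
        if self.spent_total + amount > self.account_monthly_cap:
            return False
        # Reject anything that would push past that merchant's cap, if one is set.
        merchant_cap = self.per_merchant_cap.get(merchant_id)
        merchant_spent = self.spent_by_merchant.get(merchant_id, 0.0)
        if merchant_cap is not None and merchant_spent + amount > merchant_cap:
            return False
        return True

    def record_charge(self, merchant_id: str, amount: float) -> None:
        self.spent_total += amount
        self.spent_by_merchant[merchant_id] = (
            self.spent_by_merchant.get(merchant_id, 0.0) + amount
        )
```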
I think this is ignoring a lot of prior art. Our deploys at Yelp in roughly 2010 worked this way -- you flagged a branch as ready to land, a system (`pushmaster` aka `pushhamster`) verified that it passed tests and then did an octopus merge of a bunch of branches, verified that the result passed tests, deployed it, and then landed the whole thing to master after it was happy on staging. And this wasn't novel at Yelp; we inherited the practice from PayPal, so my guess is that most companies that care at all about release engineering have been doing it this way for decades, and it was just a big regression when people stopped having professional release management teams and started just cowboy-pushing to `master` / `main` on GitHub sometime in the mid-2010s.
That's super interesting, thanks for sharing the Yelp/PayPal lineage. You're right: there's probably a lot of prior art in internal release engineering systems that never got much written up publicly.
The angle we took in the blog post focused on what was widely documented and accessible to the community (open-source tools like Bors, Homu, Bulldozer, Zuul, etc.), because those left a public footprint that other teams could adopt or build on.
It's a great reminder that many companies were solving the "keep main green" problem in parallel (some with pretty sophisticated tooling), even if it didn't make it into OSS or blog posts at the time.
PayPal got it from eBay, which in the 2000s was rolling out 20M LOC worldwide every week or two on "release trains". There, a small team of kernel engineers rotated doing the merging -- two weeks of ClearCase hell when it was your turn.
And, since eBay wrote their own developer tools, you'd have to deploy different tooling depending on the branch you were on. But because of that custom tooling, if there was a problem in the UI, in debug mode you could select an element in the browser UI and navigate to the Java class, in a particular component and branch, that produced that element.
For most users, that'll just result in them going to Google, searching for the name of your business, and then clicking the first link blindly. At that point you're trusting that there are no malicious actors squatting on your business name as a keyword -- and if you're at all an interesting target, there's definitely malvertising targeting you.
The only real solution is to have domain-bound identities like passkeys.
It's really "cool" when you get vendors like 6sense that combine browser fingerprinting with semi-licit data brokers to do full deanonymization of visitor traffic. Why bother doing marketing when you can just get a report of the name, email address, mailing address, and creditworthiness of every person who's visited your website?
I've seen people argue with a straight face that these tools and their reports don't run afoul of GDPR/CCPA because they don't involve information that a user gave you on purpose, so it's not protected. Ghouls, all of them.
It does seem like there's something wrong with that data; I find it somewhat implausible that the average parent was only caring for their child for 1.7 hours a day in 1985; even if you assume that all of the tweens and teens were free-range and only got an hour or two of parenting a day, little kids have always required nonstop attention to make sure that they're not actively dying.
Although... the infant mortality rate in the US has dropped by more than 50% since 1985, so who knows...
Yeah, I've wondered if there is some sort of change in how people think about and label their activities. Would a 1950s parent even think of themselves as doing a defined activity called "childcare"? Or rather, the children are just around, as the parent is doing things. If I am cooking dinner while a toddler putters around the floor and a baby is in a high-chair eating scraps I give him, am I doing "childcare"? Would a 1950s parent think of that as doing "childcare"?
Toddlers don’t just putter around. They want to be wherever you’re at doing whatever you’re doing and opening all the cabinets and boxes and pulling everything out to look at it. I think people were more apt to put them to work around the house in the past whereas now people infantilize them more. My son doesn’t speak very well as a 19 month old but he understands a lot and pays attention, and right now we’re trying to figure out how to put him to work in the kitchen and around the house so he feels involved and we get what little help he is able to contribute.
I was born in '83 and I'd say this mostly describes my upbringing. We were left to our own devices the vast majority of the time. By the time I hit my teens, most days I'd barely see my parents at all. At some point you've got kids raising other kids as the parents are absent.
So 75% live outside of it. Yeah, I'd say the majority lives this way, and to live otherwise is the exception for the remaining 25%. And even within those top 10, some are more like what I describe. There are definitely parts of those metros where the "mile a minute" travel estimation from uncongested highways applies. Certainly true for Philadelphia outside the ~50 sq miles of the gridded central city. In places like Houston, the average home is only like $250k, pretty much at parity with Midwest prices.
> *Up to 15–50% slower decision-making in offices with high flicker (and high CO₂)
It just throws me right off the argument in an article when the fine print notes that a cited study is confounding the thing the author cares about ("sensitivity to flicker") with a much simpler and better-understood explanation (CO₂ poisoning).
I tried my best, but it lost me once it started citing different sources as providing hard numbers without linking to those sources... and of course they're selling something. Might need a citation on that claim that iPhones don't use PWM.
I had to return an iPhone because the OLEDs they use flicker so badly. The only iPhones that don't flicker are the SEs, because they use old-school LCD screens. But of course they got rid of the SE. So now I'm stuck on this old SE3 until I can find a different phone that doesn't flicker, because as of now ALL iPhones flicker.
The frequency of 239 Hz is relatively low, so sensitive users will likely notice flickering and experience eyestrain at the stated brightness setting and below.
There are reports that some users are still sensitive to PWM at 500 Hz and above, so be aware.
> The frequency of 239 Hz is relatively low, so sensitive users will likely notice flickering and experience eyestrain at the stated brightness setting and below.
Do you have a source for this claim that 239 Hz is low enough to be noticeable by some measurable fraction of people? People report being sensitive to all kinds of things that end up repeatedly failing to reproduce empirically when it's put to the test (e.g. WiFi and MSG), so that there's a PWM sensitivity subreddit is not the evidence that TFA thinks it is.
The source that TFA links to backing up the idea that between 5% and 20% of people are sensitive to PWM flickering is a Wikipedia article which links to a Scientific American article which does not contain the cited numbers, and even if it did the study it discusses was researching the significantly slower 100 Hz flickering of fluorescent bulbs.
They mention this in the Results section:
> For the median viewer, flicker artifacts disappear only over 500 Hz, many times the commonly reported flicker fusion rate.
Yes, that's an interesting source that at least shows that our eyes can perceive things at those high frequencies, but I'm not sold that it generalizes.
The study actually demonstrates that perception of flicker for regular PWM does in fact trail off at about 65 Hz and is only perceptible when they create the high-frequency edge by alternating left/right instead of alternating the whole image at once.
It looks like the situation they're trying to recreate is techniques like frame rate control/temporal dithering [0], and since this article is now 10 years old, it's unclear if the "modern" displays that they're talking about are now obsolete or if they actually did become the displays that we're dealing with today. From what I can find OLED displays do not tend to use temporal dithering and neither do nicer LCDs: it looks like a trick employed by cheap LCDs to avoid cleaner methods of representing color.
It's an interesting study, but I don't think it redeems TFA, which isn't about the risks of temporal dithering but instead claims harms for PWM in the general case, which the study you linked shows is not perceived above 65 Hz without additional display trickery.
What they are trying to do is recreate a situation more similar to the actual light sources in computer screens and TVs (varying flicker rates across different pixels/areas). They are saying that the commonly reported 65 Hz threshold comes from tests on light sources that flicker uniformly, which is not the case for actual screens. It is not about dithering.
Basically, the claim is that when the flicker frequency varies across the image, the frequency required for the flicker to be imperceptible is much higher.
No, they're specifically contrasting two types of displays and identify that the traditional way of measuring flicker effect does work for the traditional displays, regardless of image complexity:
> Traditional TVs show a sequence of images, each of which looks almost like the one just before it and each of these images has a spatial distribution of light intensities that resembles the natural world. The existing measurements of a relatively low critical flicker fusion rate are appropriate for these displays.
> In contrast, modern display designs include a sequence of coded fields which are intended to be perceived as one frame. This coded content is not a sequence of natural images that each appears similar to the preceding frame. The coded content contains unnatural sequences such as an image being followed by its inverse.
What's unclear to me 10 years down the road is if the type of display they're worried about is common now or obsolete. "Modern" in 2015 could be the same as what we have today, or the problems the study identified could have been fixed already by displays that we would call "modern" from our reference frame.
I don't know enough about display tech to comment on that, but they're very clear that if your display is showing frames in sequence without any weird trickery that the research method that gets you a 65 Hz refresh rate is a valid way to test for visible flickering.
EDIT: Here's another quote that makes the contrast that they're setting out even more clear:
> The light output of modern displays may at no point of time actually resemble a natural scene. Instead, the codes rely on the fact that at a high enough frame rate human perception integrates the incoming light, such that an image and its negative in rapid succession are perceived as a grey field. This paper explores these new coded displays, as opposed to the traditional sort which show only a sequence of nearly identical images.
It's possible that this is actually a thing that modern displays have been doing this whole time and I didn't even know it, but it's also possible that this was some combination of cutting-edge tech and cost-saving techniques that you mostly don't need to worry about with a (to us) modern OLED.
That is just the motivation; the experiment itself is much more general and is not tied to any particular display technology:
> The work presented here attempts to clarify “the rate at which human perception cannot distinguish between modulated light and a stable field”
Otherwise, they would have tested the dithering directly on the full image. Here they are testing a simpler model: varying flicker causes higher flicker-free requirements (due to eye movements). This would apply to dithering, but potentially also to other situations.
You can't say "that is just the motivation", because the motivation is what dictated the terms of the experiment. I read the whole study: the contrast between the two types of displays permeates the whole thing.
They repeatedly say that the goal is to measure the effect of flickering in these non-traditional displays and repeatedly say that for displays that do not do the display trickery they're concerned about the traditional measurement methods are sufficient.
You're correct that they do demonstrate that the study shows that the human eye can identify flickering at high framerates under certain conditions, but it also explicitly shows that under normal conditions of one-frame-after-another with blank frames in between for PWM dimming the flickering is unnoticeable after 65 Hz. They go out of their way to prove that before proceeding with the test of the more complicated display which they say was meant to emulate something like a 3D display or similar.
So... yes. Potentially other situations could trigger the same visibility (I'd be very concerned about VR glasses after reading this), but that's a presumption, not something demonstrated by the study. The study as performed explicitly shows that regular PWM is not perceptible as flicker above the traditionally established range of frame rates, and the authors repeatedly say that the traditional measurement methods are entirely "appropriate" for traditional displays that render plain-image frames in sequence.
EDIT: Just to put this quote down again, because it makes the authors' point abundantly clear:
> The light output of modern displays may at no point of time actually resemble a natural scene. Instead, the codes rely on the fact that at a high enough frame rate human perception integrates the incoming light, such that an image and its negative in rapid succession are perceived as a grey field. This paper explores these new coded displays, as opposed to the traditional sort which show only a sequence of nearly identical images.
They explicitly call out that the paper does not apply to traditional displays that show a sequence of nearly identical images.
The paper's motivation is to explore the new coded displays, and they do that by exploring an aspect they care about. That aspect is very specifically well-defined, and if you want to show whether a given display has the same effect or not, then you need to look into that. But at no point does the experiment itself depend on any particular kind of display tech.
I mean, they are not even using a screen during the study, they are using a projector. How are you going to even make the claim that this is display technology specific when it is not using a display?!
Did you actually read the study? I assumed you did and so I read every word so I could engage with you on it, but it's really feeling like you skimmed it looking for it to prove what you thought it would prove. It's not even all that long, and it's worth reading in full to understand what they're saying.
I started to write out another comment but it ended up just being a repeat of what I wrote above. Since we're going in circles I think I'm going to leave it here. Read the study, or at least read the extracts that I put above. They don't really leave room for ambiguity.
Edit: I dropped the points on the details, just to focus on the main point. Rest assured that I read the paper, I was arguing in good faith, and that after a bit more thinking I understand your criticism of my interpretation. I don't think the criticism that the research can't be generalized is warranted, considering the experimental design. But we aren't going to agree on that. The difference in our thinking seems to be the probability of a similar effect showing up in daily life. I know the projector was emulating the coded display, but my point is that it was reasonably easy to do, and the same setup could conceivably show up in different ways. Not to mention that the researchers specifically said all the displays in their office had the effect, so it is common among displays themselves.
I think if we continue talking, we will keep running in circles. So let’s drop the details on research: it is there, we can both read it. Here is what I was trying to convey since the beginning:
- If you think the (original) article is an ad, with writing not up to scientific standards: sure, I am ambivalent about the article itself
- If you think the gist of the article and their recommendation is wrong, I mildly disagree with you
- If you think LED flicker affecting people is in the same ballpark of concern as WiFi or GMOs, I violently disagree with you.
LEDs are new, so research on high-frequency flicker is not especially plentiful, but the few studies that exist generally point to a higher perception threshold than previously thought. As for the health effects, I believe that part is more extrapolation than established research (since those studies can only come after the more basic research on perception). So the final assessment comes down to: how bad was the article in presenting the information the way it did?
> People report being sensitive to all kinds of things that end up repeatedly failing to reproduce empirically when it's put to the test (e.g. WiFi and MSG), so that there's a PWM sensitivity subreddit is not the evidence that TFA thinks it is.
I know you’re looking for large sample size data, but PWM sensitivity absolutely exists, and I wish it didn’t. The way my eyes hurt in less than a minute while looking at an OLED phone (when I can handle an LCD phone for hours just fine) is too “obvious”. This occurs even on screens I didn’t know were OLED till I got a headache, btw.
(I’m also insanely sensitive to noticing flicker and strobe lights - I say a light flickers, everyone disagrees, I pull out the 240fps mode on my phone… and so far I’ve never been proven wrong.)
I used a 240 Hz PWM-dimmed monitor for a couple of years and I adjusted, but when I switched to a flicker-free one, it was very noticeable and bothersome to use the old one. Even though it's not perceptible when looking at a fixed point, when moving one's eyes around the after-images are easy to see. Even 1000 Hz PWM-dimmed LED strips are easy to notice when looking around the room. The light is basically being panned across one's retina like a panoramic camera/oscilloscope, logging its brightness versus time.
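A rough back-of-the-envelope for why eye movement exposes even fast PWM. The ~300°/s figure is just an assumed ballpark for a quick eye movement, and whether the trail is actually perceived also depends on things like saccadic suppression; this only shows the geometry of the after-images:

```python
# Angular spacing of PWM flashes swept across the retina during an eye movement.
# Assumes a ~300 deg/s sweep (fast saccades reach a few hundred deg/s).
for pwm_hz in (240, 1000):
    spacing_deg = 300 / pwm_hz  # degrees of visual angle between successive flashes
    print(f"{pwm_hz} Hz PWM -> one flash every {spacing_deg:.2f} degrees of sweep")
# 240 Hz  -> ~1.25 degrees apart
# 1000 Hz -> ~0.30 degrees apart; both spacings are far coarser than normal
# visual acuity, which is consistent with seeing a dotted trail rather than a smear.
```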
Even if the iPhone were flicker-free, holding the iPhone itself throws all that out the window, with all the addictive colors and notifications and badges.
Running a parser for a network protocol as root seems like a pretty unnecessarily dumb thing to do. I can't really imagine why any part of AirPlay would need to run as root; maybe something to do with DRM? Although the DRM daemon `fairplayd` runs as a limited-privilege user `_fpsd`, so maybe not. So bizarre that Apple makes all these cool systems to sandbox code, and creates dozens of privilege-separated users on macOS, and then runs an HTTP server doing plist parsing as an unsandboxed root process.
Apple has reworked AirPlay so many times at this point that the entire thing is just a massive pile of technical debt piled on another massive pile of technical debt, piled on a bunch of weird hacks to try and keep all the devices built for previous versions afloat.