We also shouldn't call it "vegan leather" when it is in fact just plastic.
Naming departs from technical accuracy when adopted by the masses, as they retrofit their common understanding. Wouldn't be too surprised if "vaccine" ends up covering other strong defense-boosters.
Mycelium is neat, but last time I heard of it the problem was far, far too low manufacturing throughput.
I don't think anyone would even consider marketing that as "vegan leather", as doing so would mean putting you in the same bucket as cheap-as-dirt polyurethane (which is what regular "vegan leather" is), at an astronomically higher price. You'd pick a new term to differentiate.
I found it funny because, in the opposite direction, people accused Tesla of naming “autopilot” misleadingly: it gave them the impression of fully unattended self-driving.
In aviation, autopilot features were until recently (and still for GA pilots) essentially just cruise control: maintain this speed and heading, maintain this climb rate and heading, maintain this bank angle, etc.
Wouldn't be too surprised, either - but I still think there's merit in using words in a more precise manner than the marketing department would like to do.
A good example for the discussion: leather being animal skin which obviously cannot come from a mushroom.
Assuming you were countering my vegan leather claim: products marketed as "vegan leather" are polyurethane or similar, and for marketing reasons you would use a different term to differentiate if you did something fancier. My gut feeling is that a mycelium-based product would be far more expensive than simple polyurethane, and quite an upsell.
Not sure what "baking your own bread" means if you are using wheat grown by someone else, in an oven you didn't build, run on electricity you didn't generate with your own muscle power. You haven't even contributed to the nuclear fusion that created the oxygen for the water molecules you've been using! How dare you, standing on the shoulders of giants!
Is it "building your own oven" if you go to Lowe's, buy an oven, and install it yourself? You've done some work, but you're integrating a pre-built appliance into your kitchen, not building your own oven.
This would mean that more competition would be good for the environment, because it would drive down prices and margins, and thus the incentive to overproduce. But this rule actually decreases competitive pressure and increases margins, because market exit barriers are market entry barriers.
I know that guy from listmonk! I always thought the frontend could use some love and planned to spend a couple of days contributing a few ideas, but I never got around to it. Now I know why things are as they are :)
tl;dr - the author cites a study from early 2025 which measured “experienced open source developers” to be ~20% slower when supported by AI, while they estimated themselves to be ~20% faster.
Note: the study used sonnet-3.5 and sonnet-3.7; there weren’t any agents, deep research or similar tools available. I’d like to see this study done again with:
1. juniors and mid-level engineers
2. opus-4.6 high and codex-5.2 xhigh
3. Tasks that require upfront research
4. Tasks that require stakeholder communication, which can be facilitated by AI
I’d be thrilled if that AI could finally make one of our most annoying stakeholders test the changes they were so eager to fast track, but hey, I might be surprised.
It can facilitate that, certainly. Idk about the background of that stakeholder, but AI can help draft communication with the right tone to show the necessity. It can help write a guide on how to properly test the specific feature. It can write e2e tests that the stakeholder could execute from their environment.
Of course, all of that can be done by humans, too. But this discussion is about average speed of a developer, and there’s a reason many companies employ product owners for the stakeholder communication.
> A study from METR found that when developers used AI tools, they estimated that they were working 20% faster, yet in reality they worked 19% slower. That is nearly a 40% difference between perceived and actual times!
It’s not. It’s either 33% slower than perceived or perception overestimates speed by 50%. I don’t know how to trust the author if stuff like this is wrong.
> I don’t know how to trust the author if stuff like this is wrong.
She's not wrong.
A good way to do this calculation is with the log-ratio, a centered measure of proportional difference. It's symmetric, and widely used in economics and statistics for exactly this reason: ln(1.20 / 0.81) ≈ 0.39, i.e. nearly 40%.
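A minimal sketch of that log-ratio calculation, assuming the study's figures (perceived ~20% faster, measured ~19% slower) expressed as speed multipliers against a no-AI baseline:

```python
import math

# Speed relative to a no-AI baseline (assumed interpretation of the study's figures):
perceived = 1.20  # developers estimated they were 20% faster
actual = 0.81     # they were measured to be 19% slower

# Log-ratio: a symmetric measure of the proportional gap between the two
gap = math.log(perceived / actual)
print(f"{gap:.3f}")  # ≈ 0.393, i.e. nearly a 40% gap
```

Under this measure the article's "nearly 40%" comes out right, which is the point being made here.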
so if the numbers were “99% slower than without AI, but they thought they would be 99% faster”, you’d call that “they were 529% slower”, even though it doesn’t make sense to be more than 100% slower? And you’d not only expect everyone to understand that, but you really think it’s more likely that a random person on the internet used a logarithmic scale than that they just did bad math?
I get caught up personally in this math as well. Is a charitable interpretation of the throwaway line that they were off by that many “percentage points”?
Their math is 120% - 80% = 40%, while the correct math is (80 - 120) / 120 = -33% or (120 - 80) / 80 = +50%.
It’s more obvious if you take more extreme numbers, say: they estimated to take 99% less time with AI, but it took 99% more time - the difference is not 198%, but 19900%. Suddenly you’re off by two orders of magnitude.
Yes, and it was done with people using Cursor at the time, and there were already a few caveats back then about who was actually experienced with the tool, etc.
Still an interesting observation. It was also on brownfield open source projects, which imo explains a bit why people building new stuff have vastly different experiences than this.
No. The point here is that Google is not paid for the ads, so it is not incentivised to make the service more addictive. This seems obvious: it's not the ads we have a problem with per se - it's the distortion of the attention economy they entail.
Clearly, any scheme will not be perfect, but these sorts of objections either seem to misunderstand the core issue, or to be willfully confusing by raising irrelevant details.