If companies are taking raw materials worth more than zero, and turning them into clothing worth less than zero, then I think deterring them from doing that is beneficial to society overall.
If they knew in advance that the clothing wouldn't sell, they would never have made it!
But companies stockpile goods in anticipation of potential demand. For example, they'll "overproduce" winter coats because some winters are colder than average. This sort of anti-overproduction law means that the next time there's an unexpected need -- for example an unusually cold winter -- there will be a shortage because there won't be any warehouses full of "just in case" inventory.
They could, but it’s a tradeoff. Inventory costs money and if you cut production, that means laying off workers and possibly selling productive assets, at which point it becomes more expensive to scale production back up.
Every business decision is a tradeoff. Smart government interventions in the economy add weight to that tradeoff to reflect externalities not otherwise accounted for; this is how cap-and-trade on SO2 emissions works. Hamfisted government interventions set hard and fast rules that ignore tradeoffs and lead to unintended consequences.
I don't think this is accurate. It's more that the textiles are produced in Asia and transported in containers.
Due to the high shipping costs, they err on the side of filling up the containers to cover the fixed cost. After the clothes are sold, there might be enough left over to fill shipping containers for a return trip, but those leftovers will be a mix of different brands and manufacturers.
It would require extraordinary coordination in both the origin and destination countries to return the clothes to the manufacturer, where the leftovers could be added to the next batch being shipped out to a different country.
Do we really need warehouses full of "just in case" inventory? It's not life or death, it's just slightly more profitable for companies to overproduce than it is for them to attempt to meet demand exactly.
Climate change is coming, fast and brutal. I'm okay with these multi-billion-dollar revenue companies making a few points less in profits, if it means slowing climate change by even a fraction of a fraction of a point.
They don't need those profits. But our children need a viable planet.
Companies can't meet demand exactly, no matter what profit margin they take, because it's not possible to predict demand exactly. Biasing towards overproduction is how you minimize the risk of shortages when there's a bit more demand than you expected.
As far as a market-clearing problem goes, we should be forcing them to sell it at lower and lower prices, or even going negative and paying people to take it off their hands.
Supply and demand says that an oversupply makes prices fall, rather than driving artificial scarcity.
Well, it sounds like that's what the EU is going to try. My guess is that the manufacturers are mostly destroying stuff for economically rational reasons, and will respond with production cuts leading to that same artificial scarcity from a consumer perspective.
(Although the original commenter would say, I suspect, that it's perfectly OK if there are minor consumer shortages in luxury goods for the sake of the climate.)
> This sort of anti-overproduction law means that the next time there's an unexpected need -- for example an unusually cold winter -- there will be a shortage because there won't be any warehouses full of "just in case" inventory.
Clothes are something extremely overabundant in the EU. And even if they weren't, the unexpected overdemand would result in just using your old coat another year or buying one you like less. Workers are being unnecessarily exploited and resources are being unnecessarily wasted... so I think nudging companies in the right direction is way overdue. Will it work the way the EU thinks? Probably not. Just like GDPR was well-intended, but the result is a higher entry barrier for new companies and a bunch of annoying popups. But I'd argue that's a result of "not enough" regulation rather than "too much". Companies caught abusing our data should have been outright banned IMHO.
What about cases where two pieces of clothing, when bundled together, have value because they make it more efficient for people to find the right size, but once the right size is found, the other piece becomes waste? A company can't prevent a consumer from ruining the wasted clothes.
But that is how physical stores currently work, where you can try the stuff on before you buy it? If you care about this, you can of course take the top one to try on, like everyone does, and then buy one from lower in the stack. But you wash clothing anyway before actually wearing it, so it doesn't really matter. Honestly, I don't get your point.
This thread is talking about vibe coding, not LLM-assisted human coding.
The defining feature of vibe coding is that the human prompter doesn't know or care what the actual code looks like. They don't even try to understand it.
You might instruct the LLM to add test cases, and even tell it what behavior to test. And it will very likely add something that passes, but you have to take the LLM's word that it properly tests what you want it to.
The issue I have with using LLMs is reviewing the test code. Often the LLM will make a 30 or 40 line change to the application code. I can easily review and comprehend this. Then I have to look at the 400 lines of generated test code. While it may be easy to understand, there's a lot of it. Go through this cycle several times a day and I'm not convinced I'm doing a good review of the test code due to mental fatigue; who knows what I may be missing in the tests six hours into the work day?
> This thread is talking about vibe coding, not LLM-assisted human coding.
I was writing about vibe-coding. It seems these guys are vibe-coding (https://factory.strongdm.ai/) and their LLM coders write the tests.
I've seen this in action, though to dubious results: the coding (sub)agent writes tests, runs them (they fail), writes the implementation, runs tests (repeat this step and last until tests pass), then says it's done. Next, the reviewer agent looks at everything and says "this is bad and stupid and won't work, fix all of these things", and the coding agent tries again with the reviewer's feedback in mind.
Models are getting good enough that this seems to "compound correctness", per the post I linked. It is reasonable to think this is going somewhere. The hard parts seem to be specification and creativity.
That sounds like it's basically impossible to implement your own non-trivial data structures. You can only use the ones that are already in the standard library.
For instance, how would you represent a binary tree? What would the type of a node be? How would I write an "insert node" function, which requires that the newly-created node continues to exist after the function returns?
I'm not necessarily saying that this makes your language bad, but it seems to me that the scope of things that can be implemented is much, much smaller than in C++.
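For concreteness, here's the kind of thing I have in mind, sketched in ordinary C++ (the names are just illustrative): every call to insert() heap-allocates a node that has to keep existing after the function returns.

    #include <memory>

    // A minimal binary search tree of ints. Each node is heap-allocated and
    // owned by its parent, so it keeps existing after insert() returns.
    struct Node {
        int value;
        std::unique_ptr<Node> left;
        std::unique_ptr<Node> right;
        explicit Node(int v) : value(v) {}
    };

    // Insert a value, allocating a new node that outlives this call.
    void insert(std::unique_ptr<Node>& root, int v) {
        if (!root) {
            root = std::make_unique<Node>(v);
        } else if (v < root->value) {
            insert(root->left, v);
        } else {
            insert(root->right, v);
        }
    }

    int main() {
        std::unique_ptr<Node> root;
        for (int v : {5, 2, 8, 1}) insert(root, v);
        // root now owns a four-node tree; every node allocated inside insert() is still alive here.
        return 0;
    }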
Because the color of the sky is determined by a shifting mixture of wavelengths, not a single shifting wavelength.
Basically, the scattering process that "removes" blue from the spectrum also removes green, albeit to a lesser extent. There are some greenish and yellowish wavelengths in the sunset sky, but they're dominated by red, so the overall color appears red or orange.
In order for the sky to look noticeably green, there would have to be something that scattered reds and blues, without significantly absorbing green.
If you try to interpolate between sky-blue and orange using graphics software, the result depends on what "color space" you're using. If your software interpolates based on hue, you might see green (or purple) in the middle. But that's not physically realistic.
A realistic model is to interpolate each wavelength of the continuous spectrum separately. Interpolating in RGB color space is a crude approximation to this. And if you try the experiment, you'll see that the midpoint between sky-blue and orange is a kind of muddy brown, not green.
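As a quick sanity check, here's the naive RGB midpoint with two rough stand-in colors (the hex values are just common approximations of "sky blue" and "orange", not measured sky spectra):

    #include <cstdio>

    int main() {
        // Rough stand-in colors: sky blue (#87CEEB) and orange (#FFA500).
        int skyBlue[3] = {135, 206, 235};
        int orange[3]  = {255, 165,   0};

        // Naive channel-by-channel midpoint in RGB.
        int mid[3];
        for (int i = 0; i < 3; ++i)
            mid[i] = (skyBlue[i] + orange[i]) / 2;

        // Prints roughly (195, 185, 117): a desaturated khaki/brown, not green.
        std::printf("midpoint: (%d, %d, %d)\n", mid[0], mid[1], mid[2]);
        return 0;
    }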
The most interesting part, IMO, is the "SRAM with EEPROM backup" chip. It allows you to persistently save the clock hands' positions every time they're moved, without burning through the limited write endurance of a plain old EEPROM. And it costs less than $1 in single quantities. That's a useful product to know about.
So the way this works seems to be: it's an SRAM and an EEPROM in one little package, along with a controller that talks to each, plus a little capacitor (this clock uses 4.7 µF) placed nearby.
The SRAM part does all of the normal SRAM stuff: It doesn't wear out from reading/writing, and as long as it has power it retains the data it holds.
The EEPROM does all the normal EEPROM stuff: It stores data forever (on the timescale of an individual human, anyway), but has somewhat-limited write cycles.
The controller: When it detects a low voltage, it goes "oh shit!" and immediately dumps the contents of the SRAM into EEPROM. This saves on EEPROM write cycles: If there are no power events, the EEPROM is never written at all.
Meanwhile, the capacitor: It provides the power for the chip to perform this EEPROM write when an "oh shit!" event occurs.
When power comes back, the EEPROM's data is copied back to SRAM.
---
Downsides? This 47L04 only holds 4 kilobits. Upsides? For hobbyist projects and limited production runs, spending $1 to solve a problem is ~nothing. :)
Has anyone found the chip on AliExpress? I only get unrelated listings with that part number, but this is a pretty interesting chip I'd like to get a few of.
An alternative would be a supercapacitor and a voltage divider connected to the ADC pin of the microcontroller. When the 5V rail dies, the supercapacitor can hold 3.3V for a few seconds while you write everything to the EEPROM.
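Roughly like this, as a sketch; the divider ratio, threshold, and the two HAL functions below are hypothetical placeholders for whatever your MCU and EEPROM driver actually provide.

    #include <cstdint>
    #include <cstdio>

    constexpr uint32_t kDividerRatio    = 2;     // assumed 2:1 divider, so 5 V shows up as 2.5 V at the ADC pin
    constexpr uint32_t kRailThresholdMv = 4500;  // treat anything under ~4.5 V on the rail as "power failing"

    // Stub HAL calls for illustration; a real build would use the vendor's ADC and EEPROM drivers.
    uint32_t adc_read_millivolts() { return 2100; }  // pretend the ADC sees 2.1 V, i.e. the rail is sagging
    void eeprom_write(uint16_t addr, const uint8_t* data, uint16_t len) {
        std::printf("saving %u bytes at 0x%04x\n", (unsigned)len, (unsigned)addr);
        (void)data;
    }

    struct ClockState {
        uint16_t hand_position;  // e.g. minutes past 12:00
    } g_state = {437};

    void poll_power_rail() {
        uint32_t rail_mv = adc_read_millivolts() * kDividerRatio;
        if (rail_mv < kRailThresholdMv) {
            // The 5V rail is collapsing; the supercap buys a few seconds at ~3.3V,
            // which is plenty of time for one small EEPROM write.
            eeprom_write(0x0000, reinterpret_cast<const uint8_t*>(&g_state), sizeof(g_state));
        }
    }

    int main() {
        poll_power_rail();  // in real firmware this would run from a timer tick or an ADC watchdog interrupt
        return 0;
    }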
It's as if people have never had shipping itemized before.
The only reason aliexpress shopping is cheap is because the rest of the world foots the bill. Unless somebody has finally removed China's "Developing Country" status that's gotten them essentially free international parcel service for the best part of 100 years.
Yeah OK, but if I only want 5 pieces and I have to choose between $5 or $30, I'm not going to think about the geopolitical situation, I'm just going to get the cheaper one.
I buy small parts with "Choice" shipping on AliExpress sometimes, because it's cheap and [usually] quick and they take care of all of that pesky tariff and customs business in ways that never have an opportunity to surprise me.
For years now, the shipping process has worked like this for me: They gather it up on their end and send the stuff on a cargo plane to a sorting facility at or near JFK airport in New York.
If the order includes things from several different sellers, then at some point they generally get combined into one bag.
From there, they just mail it -- using regular, domestic USPS service. It shows up in my mailbox on my porch in Ohio a few days later.
Although it certainly was a thing I've experienced in the past, at no point does the process I've described exploit the "Developing Country" loophole. They just send things to the other side of the world (at their expense), and then pay the post office the same way as anyone else does to bring it to my door.
EDIT: Oh lord, bad typo in my previous comment- it should have been aliexpress SHIPPING not Shopping.
It's not the same; what you described is Direct Entry (somewhere around page 25, linked below). Apparently the Terminal Dues system has been massively changed in the 5 years since I last looked, but it still appears unfavorable to USPS and US sellers while favoring high-volume foreign shippers.
As for how AliExpress delivers stuff since the tariffs: 1) no-name last mile, or 2) USPS last mile, and USPS the entire way.
I don't know if any of these are associated with "Choice", paid store shipping, and/or free store shipping.
Since I normally buy from AliExpress to avoid the insane 200-800% markups that Amazon/eBay/Walmart/etc. dropshippers demand, the $5-$10 in shipping doesn't factor in.
As a consumer, here's how AliExpress Choice shipping functions for me: Like buying a widget from a shop downtown, the price is the price.
I don't see what anyone will pay (or has paid) for duties or tariffs or fees or delivery, I don't have any idea what the markup is at any level, and I don't know what GAO table they or anyone else used to get it to happen. That's outside of my purview.
With this method: Same as with the shop downtown, I'm not importing anything myself; I don't see any customs forms or declarations at all. AliExpress handles all of that business, not me.
I can peek behind the curtain a bit and see some aspects of how things move from place to place as physical entities using the tracking data that they provide. And that's about it, until it eventually shows up inside of my mailbox -- and then I can have a nice gander at the labels and see that it was sent with USPS domestic postage.
This process doesn't (can't, AFAICT) abuse my nation's postal system, and I like that aspect quite a lot.
The downsides are cost and availability: There may be a dozen or more sellers offering seemingly-identical widgets on AliExpress, but maybe only one or two (if any) that ship that particular widget Choice. Like Prime, it can actually end up costing a bit more than other methods.
But it's fast, still cheap in absolute terms, and there's zero BS on my end so I like those parts, too.
"Hey, someone on the Internet used decent diction! Obviously, this means I must accuse them of being a bot!"
(Hey Dang. Can we get a ban button? There's a few people here that are impossible to conduct rational discourse with. My sanity would improve if they were simply gone from my view.)
Yes! The reflexive “must be LLM generated” is becoming ridiculous. Anything that includes proper punctuation and, god forbid, em dashes which I’ve used all my life must be suspect. The “it’s not x, it’s y” construction predates LLMs. I don’t recall ever sending a text without making sure it contained no errors, and yes, many have included infrequently used vocabulary.
I've been trying to write properly, clearly, and with the most expressive words I can come up with for many decades. I try to punctuate well, and to use functional formatting that I hope helps to effectively convey whatever it is that I'm on about. I try to improve as time goes on.
And I do this because if I'm going to bother with writing something for others to read, then I want my intended meaning to be easily-understood.
But increasingly, the instances where I manage to not screw any of that up too terribly result in a snarky and insulting retort in return.
And that kind of response is just not useful to anyone. I mean: What would people presume to have me do, instead? Become less-literate? Die in a fire? (Worse?)
It’s frustrating to the point that I have considered inserting grammatical errors, but that would go against my principles, which I have attempted to inculcate in my children. Yes, a significant amount of what’s posted is copied and pasted AI slop. But what in the world preceded this? Barely legible slop? I would much rather have someone craft their thoughts, run them through their preferred model, and write something coherent that is not marred by punctuation or basic elementary grammar errors. And you know what, the hell with the AI slop police. Yes, if we choose to use em dashes, we will.
An extra UI element or two should be enough. Maybe with sticky options for collapse-by-default or hide-by-default at the top of each HN comment section.
And the list of usernames can be stored and edited in the purveyor's HN bio (in plain text, like a monster), so that it works automatically across devices.
Upvoted because this stinks to high hell of an LLM response. Half the GPs comments seem to be in a similar vein. It’s such a shame but you can’t fight the trolls so don’t take it to heart.
Not quite - the chip the article refers to is the 47L04 [0], which is "just" NVSRAM built out of a RAM + EEPROM. I do agree on FeRAM being cool, though - I have a few I2C chips en route, and I can't wait to get my hands on them.
You could also consider MRAM, which is available in larger sizes -- up to 4 Mbit on the SPI bus in the MR20H40, and 128 Mbit in the EM128LXQ (but it gets unreasonably expensive when it's that big).
FRAM is extremely neat on paper, combining SRAM-ish speeds with non-volatility, but adoption seems to be low, possibly due to scaling issues. I've had a FRAM-based TI MSP430 in my random parts drawer for about a decade.
In particular, I like that I can get parts large enough to hold a ring buffer of debug output as well, and get crash logs from embedded systems despite the debug UART not being tethered to a dev machine.
Meh. The room-temperature endurance of modern EEPROMs (e.g., ST M95256) is something like 4 million cycles. If you use a simple ring buffer (reset on overflow, otherwise just appending values), you only need to overwrite a cell once every 32k ticks, which gives you a theoretical run time of 250,000 years with every-minute updates or 4,100 years with every-second updates.
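Rough back-of-the-envelope version of that math, assuming the 32 KB M95256 figures above and one byte appended per tick:

    #include <cstdio>

    int main() {
        // Assumed figures: 32 KB of EEPROM cells, ~4 million rated write cycles per cell,
        // and a ring buffer that appends one byte per tick before wrapping around.
        const double cells       = 32.0 * 1024;   // bytes in an M95256
        const double endurance   = 4.0e6;         // rated write cycles per cell
        const double total_ticks = cells * endurance;

        const double minutes_per_year = 60.0 * 24 * 365;
        std::printf("one tick per minute: ~%.0f years\n", total_ticks / minutes_per_year);
        std::printf("one tick per second: ~%.0f years\n", total_ticks / (minutes_per_year * 60));
        return 0;
    }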
The part of the article about the 158,000x slowdown doesn't really make sense to me.
It says that a nested query does a large number of iterations through the SQLite bytecode evaluator. And it claims that each iteration is 4x slower, with an additional 2-3x penalty from "cache pressure". (There seems to be no explanation of where those numbers came from. Given that the blog post is largely AI-generated, I don't know whether I can trust them not to be hallucinated.)
But making each iteration 12x slower should only make the whole program 12x slower, not 158,000x slower.
Such a huge slowdown strongly suggests that CCC's generated code is doing something asymptotically slower than GCC's generated code, which in turn suggests a miscompilation.
I notice that the test script doesn't seem to perform any kind of correctness testing on the compiled code, other than not crashing. I would find this much more interesting if it tried to run SQLite's extensive test suite.
Note that this article's summary has a significant error compared to the original press release[1]. The article says "90% range", whereas the press release says "90% capacity retention".
This is a big difference because there are all kinds of other factors besides energy capacity that can affect the efficiency of the whole system, and therefore affect range.
Most notably, air is about 28% denser at -40°C than at 25°C, so drag is about 28% higher. So you would expect roughly 28% less range at high speeds even if the battery has no capacity loss whatsoever.
As someone else mentioned, climate control also consumes a lot more power when it has to maintain a larger temperature difference between inside and outside.
> Most notably, air is about 28% denser at -40°C than at 25°C, so drag is about 28% higher. So you would expect roughly 28% less range at high speeds even if the battery has no capacity loss whatsoever.
With my gas car, I haven't noticed 30% worse fuel consumption at –30°C compared to +30°C [0]. To be fair, I haven't closely measured the fuel consumption at different temperatures, but I probably would have noticed such a big difference. This is just anecdotal of course, so your values may actually be correct.
[0]: It does occasionally get down to –40°C here, but my car won't usually start then, so I've slightly shifted your temperature range to the values where I've driven most.
It won't be as noticeable on a gas car because it is probably starting out around 30% efficiency (as compared with ~90% for an EV). This is a major advantage of gasoline, in a sense, because it means we have already engineered the package to account for a lot of wasted fuel.
Ah, so then the colder air should only increase fuel consumption by about 30% × 30% = 9%, which does seem to roughly match my experience. Thanks for pointing that out!
Internal combustion engines are actually more efficient in cold weather than hot weather. But the other factors like drag outweigh the increased efficiency of the engine. And since gas engines are so inefficient to begin with you don't notice much of a difference. https://physics.stackexchange.com/questions/270072/heated-an...
Gas cars produce more power at lower temperatures - more oxygen gets into the combustion chamber, and the engine also can run more advanced spark timing without as much worry of detonation. This is why turbochargers have intercoolers.
Note that a 28% increase in drag results in a roughly 22% decrease in range, because 1/1.28 ~= 0.78. Also there are other losses (like rolling friction and constant loads like headlights or cabin heat), so range doesn't scale perfectly with drag. Drag is the main source of loss at highway speed, however.
I drive long distances weekly in my gas car. A full tank in summer (+20C) gives me 520 km, while in winter (-20C here) I get 430-440 km. I noticed it on my current and previous cars. Maybe it's thicker oil and generally worse efficiency in winter? And that's despite a full tank containing more gasoline by mass in winter than in summer, since gasoline is denser in the cold.
It's the majority, but whether it's overwhelming or not appears, surprisingly, to depend on the car model, at least per some calculations someone on reddit ran [1].
I'd add, though, that rolling resistance tends to be higher on average in winter too, since there's often a bit of snow on the roads... less so on high-speed highways, admittedly.
For most cars driving through air, at sea level, on planet Earth, at normal speed, the drag force F is proportional to the square of the speed (v^2).
That's not exponential because the speed (v) is not in the exponent. In fact, it's quadratic.
Corollaries: The power required to push the car at speed v will be proportional to F·v ~ v^3. The gas spent over a time t ~ energy spent ~ power × time ~ v^3 × t. (Over a fixed distance, the time spent driving ~ 1/v, so the energy per unit distance scales as v^2.)
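For concreteness, plugging rough numbers into the standard drag formula F = ½·ρ·Cd·A·v² (the density, drag coefficient, and frontal area below are just plausible guesses for a typical car, not measurements):

    #include <cstdio>

    int main() {
        // Assumed, plausible values for a typical passenger car.
        const double rho = 1.2;   // air density in kg/m^3 at ~20 C, sea level
        const double cd  = 0.30;  // drag coefficient
        const double a   = 2.2;   // frontal area in m^2

        for (double kmh : {50.0, 100.0, 130.0}) {
            double v     = kmh / 3.6;                    // speed in m/s
            double force = 0.5 * rho * cd * a * v * v;   // drag force in N, grows with v^2
            double power = force * v;                    // drag power in W, grows with v^3
            std::printf("%3.0f km/h: drag %4.0f N, power %5.1f kW\n",
                        kmh, force, power / 1000.0);
        }
        return 0;
    }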
Define ‘high speeds’. There’s a reason race cars look like they do, to the point of having serious problems driving at speeds just a bit below highway speed limit.
Well, I think the paper answers that too. These problems are intended as a tool for honest researchers to use for exploring the capabilities of current AI models, in a reasonably fair way. They're specifically not intended as a rigorous benchmark to be treated adversarially.
Of course a math expert could solve the problems themselves and lie by saying that an AI model did it. In the same way, somebody with enough money could secretly film a movie and then claim that it was made by AI. That's outside the scope of what this paper is trying to address.
The point is not to score models based on how many of the problems they can solve. The point is to look at the models' responses and see how good they are at tackling the problem. And that's why the authors say that ideally, people solving these problems with AI would post complete chat transcripts (or the equivalent) so that readers can assess how much of the intellectual contribution actually came from AI.
I don't mean this as a knock on you, but your comment is a bit funny to me because it has very little to do with "modern" databases.
What you're describing would probably have been equally possible with Postgres from 20 years ago, running on an average desktop PC from 20 years ago. (Or maybe even with SQLite from 20 years ago, for that matter.)
Don't get me wrong, Postgres has gotten a lot better since 2006. But most of the improvements have been in terms of more advanced query functionality, or optimizations for those advanced queries, or administration/operational features (e.g. replication, backups, security).
The article actually points out a number of things only added after 2006, such as full-text search, JSONB, etc. Twenty years ago your full-text search option was just LIKE '%keyword%'. And it would be both slower and less effective than real full-text search. It clearly wasn't "sub-100ms queries for virtually anything you want" like the GP said.
And 20 years ago people were making the exact same kinds of comments and everyone had the same reaction: yeah, MySQL has been putting numbers up like that for a decade.
If you read the discussion, they weren't kept because of their encyclopedic value, or because they were "widespread". I'm not sure why the parent commenter said that.
They were kept to preserve a record of their having been uploaded, and to not create a legal risk for third parties who might be relying on the Commons page as their way to provide attribution.
The original proposal was to keep the image pages with the metadata, but delete the image files. That turned out to have some technical hurdles, so instead the images were overwritten with versions containing big ugly attribution messages, to discourage their use.