Hacker News | cheesecakegood's comments

I guess a little bit of background on carpet production is helpful, which most people don't realize. A lot (a LOT) of carpet is some variant of polyester, since it hits a really nice balance of durability, cost, and stain resistance (though, importantly, stain resistance is also one of its big comparative weaknesses). Thus there is significant demand for stain-guard treatments.

Compounding that: although you can dye the polyester before it is extruded, this accounts for a firm minority of the carpet produced. Maybe a quarter or less? Usually you dye it before spinning it into yarn, or before tufting (weaving it into the backing, basically). However, it is often cheaper, especially for solid-color carpet, to essentially dunk the whole finished carpet into a vat of dye, or run it through a glorified conveyor belt to do much the same. I'd hazard a decent guess that the latter two methods are the worst offenders for pollution, for obvious reasons. What's a bit of a pity, then, is that this doesn't seem inevitable to me (although if we stop production entirely, that's a perfect recipe for some other country to undercut us, even if they inherit the long-term dangers too).

With that said, the CRI (Carpet and Rug Institute) is almost the epitome of what you’d imagine an industry group to be. And 3M/DuPont are, well, you know. So I’m sure the article doesn’t really exaggerate there.

Finally I’d say that the upshot for all of us should be that regulation and pressure work. There was a calculation made whether intentionally or not, explicitly or implicitly, that if this came out to be bad then the big chemical companies could be blamed (and could further be reliably counted on to have engaged in outright unethical activity that makes blaming them a somewhat valid strategy too).

Source: sold carpet for a few years and was actually curious about the industry unlike many of my peers


No one dunks finished carpet in dye. The dye house dyes the fiber, pre construction. https://youtu.be/FG2-af0xYOA

Treatments like Stainmaster are/were the problem, not the dye.

Carpet is a wonderful product, even the polys. It's good we're having a conversation about it, but vinyl flooring has far outpaced carpet, and it is full of awful carcinogens: made in China, not disclosed, secret recipes. And it can be unstable and release into your home/office/workplace.


That video is AI-generated

Actual memory, in my opinion. Right now, memory is broadly speaking a system of sticky notes the AI writes itself and checks every time, rather than an integrative system that supports learning and can be triggered more flexibly.
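As a rough illustration of the sticky-note point (everything here is a hypothetical sketch, not any vendor's actual memory system): the "memory" is just text the model wrote earlier, concatenated back into every prompt, with nothing learned or consolidated.

```python
# Hypothetical sketch of "sticky note" memory: notes the model saved are
# plain strings, re-read in full on every single call. Nothing is learned
# into the model itself; there is no selective or flexible recall.

class StickyNoteMemory:
    def __init__(self):
        self.notes = []  # plain strings the model asked to save

    def remember(self, note: str) -> None:
        self.notes.append(note)

    def build_prompt(self, user_message: str) -> str:
        # Every call re-reads every note -- just concatenation.
        header = "\n".join(f"- {n}" for n in self.notes)
        return f"Saved notes:\n{header}\n\nUser: {user_message}"

memory = StickyNoteMemory()
memory.remember("User prefers metric units")
prompt = memory.build_prompt("How tall is Everest?")
```

An integrative system would instead decide when a note is relevant, or fold it into behavior; here it is all blind prepending.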

It's extra interesting because I think the model people should be talking about is actually not DeepSeek V4 Pro, but the Flash version. When accounting for cache hits, the input price (per OpenRouter) is effectively only 6 cents per million tokens (3 vs 14 cents hit/miss), and 28 cents on output. That's really good efficiency, and it's not a sale price like they are doing with V4 Pro, it's the normal price.

It's actually pretty difficult to find a good comparison model, because there isn't one. Again, this is a 14/28 cent in/out model (ignoring cache) that scores just below GPT 5.4 Mini-xhigh (75/450) and Gemini 3 Flash (50/300) in intelligence. It's similar to Gemma 4 31B (13/38) in some metrics, including cost, so it's not completely unheard of, but it's pretty notable that virtually everything else in the same region on most benchmarks costs at least 5 times more (much, much more in very output-heavy contexts).
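For what it's worth, the blended-price arithmetic above can be sketched out; the prices are the OpenRouter figures quoted, while the cache-hit rate is just whatever is implied by the ~6 cent effective figure (my back-of-envelope, not a published number):

```python
# Prices in cents per million input tokens, per the quoted OpenRouter figures.
CACHE_HIT = 3.0    # input price on a cache hit
CACHE_MISS = 14.0  # input price on a cache miss
EFFECTIVE = 6.0    # the quoted effective input price

def blended_input_price(hit_rate: float) -> float:
    """Effective input price given the fraction of tokens served from cache."""
    return hit_rate * CACHE_HIT + (1.0 - hit_rate) * CACHE_MISS

# Cache-hit rate implied by the ~6 cent effective price:
implied = (CACHE_MISS - EFFECTIVE) / (CACHE_MISS - CACHE_HIT)
print(round(implied, 2))  # ~0.73, i.e. roughly three quarters of input tokens cached
```

So the 6-cent figure only holds if most of the input is cache hits, which is plausible for long agentic sessions that resend the same prefix.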


It's well priced but does that have much relevance for "state of the art coding models", specifically?

I wouldn't use Gemini 3 Flash or GPT 5.4 mini for anything except the most trivial work, although both are useful for basic exploratory work.

So I'm using a heavy model for the bulk of the work, and its cost so far outweighs the light model's that the light model's cost is effectively irrelevant.


It's so interesting to see the wild pendulum swings of LLM sentiment here.

If one likes a model then it's capable of one-shotting entire apps.

Otherwise it's "only suitable for the most trivial tasks".

Never in between.


You're confusing "different people with different opinions" with "wild pendulum swings".

Personally my opinion in this regard is highly consistent over time.


I have no trivial tasks.

Just last week, I was trying to map the weird and wonderful column names emitted by a NetScaler’s detailed REST log into OpenTelemetry semantics.

The NetScaler is basically abandoned by its dying vendor. Hence, newer features like sending logs directly to Splunk-compatible receivers are basically undocumented. I'm sure there are like three of us masochists out there stumbling our way through the brambles.

The OpenTelemetry end is a mess of "deprecated" and "beta", mirroring the shifting sands of other cloud-native projects like Kubernetes.

Even with carefully curated Markdown documentation references and sample logs, every modern “frontier” AI makes basic mistakes and hallucinates like crazy no matter how I stuff their context.

This isn’t an Erdős problem! This is just getting logs from point A to point B.
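For concreteness, the shape of the task is roughly the following. The left-hand column names here are made-up stand-ins, not NetScaler's real field names; the right-hand keys follow the OpenTelemetry HTTP semantic conventions as of recent releases, which are themselves still shifting:

```python
# Illustrative only: hypothetical NetScaler-style column names mapped onto
# OpenTelemetry semantic-convention attribute keys. Unknown columns are
# preserved under a vendor prefix rather than dropped.
COLUMN_TO_OTEL = {
    "httpReqMethod": "http.request.method",
    "httpRspStatus": "http.response.status_code",
    "reqUrl": "url.path",
    "clientIp": "client.address",
    "vserverName": "server.address",
}

def to_otel_attributes(row: dict) -> dict:
    """Rename known columns; keep unknown ones under a vendor prefix."""
    out = {}
    for key, value in row.items():
        out[COLUMN_TO_OTEL.get(key, f"netscaler.{key}")] = value
    return out

attrs = to_otel_attributes({"httpReqMethod": "GET", "weirdCol": 7})
```

The hard part isn't the dict, of course; it's discovering what the undocumented columns actually mean and which convention version to target.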


(I would drop this somewhere more relevant but apparently replying to a thing from 17 days ago would be necroing here + PMs are hard or something)

>[1] With what prompt!? I like the terse output! Do share...

Not sure if it's complete or completely right, but the matter came up in a recent session, and when asked what gave, Jim 'n' I came up with some supposedly relevant factors. At least one was news to me: supposedly, steering LLMs with negative instructions isn't counterproductive anymore (not that I'd been resisting the temptation anyway):

https://gemini.google.com/share/7af54a6861d7#:~:text=What%20...

With the caveat (or bonus) that it can go (?:too)? far when told to "be blunt" and not to "pull punches":

https://i.vgy.me/WHRZD7.png (from 2024-09) (in this case, in user-config persistent instructions in Kagi's multi-LLM thing)


It's so true. I bet 80% of the questions normal people ask ChatGPT/Copilot could be answered by an 8B model trained on recent data.

I don't think people realize how small the gap is between free-to-cheap models and frontier models. It's going to be commoditized a lot faster than the marketing will catch up. Once cash gets tight or prices rise, it's more or less done for.

Especially considering some of the small free/cheap models can one-shot code now.


Is there a good cloud provider for these open models that I can easily swap my Claude library in python to?
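(For context on what the swap involves: most hosts of open models, e.g. OpenRouter, Together, or Fireworks, expose an OpenAI-compatible chat endpoint, so moving off the Anthropic client is usually a base-URL change plus reshaping the request. This is a sketch, not an endorsement; the commented-out endpoint and model name below are examples only.)

```python
# One reshaping the Anthropic client hides from you: Anthropic takes the
# system prompt as its own parameter, while the OpenAI-style chat format
# expects it as the first message in the list.

def anthropic_to_openai(system: str, messages: list[dict]) -> list[dict]:
    """Fold an Anthropic-style system parameter into an OpenAI message list."""
    return [{"role": "system", "content": system}] + messages

msgs = anthropic_to_openai(
    "You are terse.",
    [{"role": "user", "content": "Hello"}],
)

# With the official openai client pointed at an OpenAI-compatible host
# (URL and model name are illustrative):
#   client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key=...)
#   client.chat.completions.create(model="deepseek/deepseek-chat", messages=msgs)
```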

No they do not (to be clear, there's no internal state, just the transcript). It's entirely role-play. LLM apologies are meaningless because the models are mostly stateless. Every new response is a "what would a helpful assistant with XYZ prior context say next?"
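A toy illustration of the statelessness point (the "model" here is obviously a stand-in function, not a real LLM): each reply is a pure function of the transcript text, so an apology changes nothing internal, and the same transcript yields the same reply.

```python
# Toy sketch: a stateless "model" whose output depends only on the input
# transcript. Nothing persists between calls except the text itself.

def fake_model(transcript: list[dict]) -> str:
    """Stand-in for an LLM call: a pure function of the transcript."""
    last = transcript[-1]["content"]
    return f"I'm sorry about that. (responding to: {last!r})"

transcript = [{"role": "user", "content": "You got that wrong."}]
reply = fake_model(transcript)

# The "apology" lives only in the transcript we choose to append:
transcript.append({"role": "assistant", "content": reply})
```

Re-running the first turn reproduces the apology verbatim; no remorse was stored anywhere.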


If your "features" page doesn't actually contain even an iota of actual information, but does contain a copyrighted image from the Limitless TV show right next to a buy now button, you're not gonna make it. Sorry.


I mean, the OP was right, this is a totally Elon thing to do.


Except when I recently put XFCE on my old MacBook Air as a trial run, within the first day I found it nearly impossible to do something as simple as adding an application to the taskbar/dock. Something about AppPkg's not showing up by default in the taskbar adder? I finally figured it out, but got no icon, just an invisible square. And guess what? If I decide to update the app, the whole thing breaks again.

I have a degree in a tech-related field. I do things on the command line on purpose every week. It should not be this hard even for me to do something so simple. It is not even remotely ready for regular-joe end users.


Needs qualification. Research shows trial-and-error learning is very durable, but it's not the most time-efficient (in fact it's usually relatively poor on that front). The two concepts are a bit different. Yes, trial and error engages more of the brain and provides a degree of difficulty that can sometimes help make concepts sticky, but well-designed teaching coupled with meaningful, appropriately difficult retrieval practice is better on most axes. When possible, that is; good teaching often needs refinement. And you'd be surprised how many educators know very little about the neuroscience of learning!


> And you’d be surprised how many educators know very little about the neuroscience of learning!

I'm (pleasantly) surprised every time I see evidence of one of them knowing anything about it.


At the university level in the US, few faculty get any kind of training before they are expected to start teaching. And the teaching requirement is more or less “do no harm.” If you’re at a research university, which includes many publicly funded universities, then your career trajectory is based almost exclusively on your research output. I could go on, but it suffices to say that it’s not surprising that the teaching could be better.

That said, most institutions have teacher training resources for faculty. I was fortunate to be able to work intensely with a mentor for a summer, and it improved my teaching dramatically. Still, teaching is hard. Students sometimes know—but often don’t know—what is best for their learning. It’s easy to conflate student satisfaction with teaching effectiveness. The former is definitely an important ingredient, but there’s a lot more to it, and a really effective teacher knows when to employ tools (eg quizzes) that students really do not like.

I am frequently amused by the thought that here we have a bunch of people who have paid tons of money, set aside a significant fraction of their time, and nominally want to learn a subject that they signed up for; and yet, they still won't sit down and actually do the reading unless they are going to be quizzed on it.


> the thought that here we have a bunch of people who have paid tons of money, set aside a significant fraction of their time, and nominally want to learn a subject that they signed up for; and yet, they still won’t sit down and actually do the reading unless they are going to be quizzed on it.

How often have they put down the money, as opposed to their parents?

How often do they actually care about learning the subject, as opposed to being able to credibly represent (e.g. to employers) that they have learned it?

How often is the nominally set-aside time actually an inconvenience? (Generally, they would either be at leisure or at the kind of unskilled work their parents would be disappointed by, right?) My recollection of university is that there was hardly any actual obligation to spend the time on anything specific aside from exams and midterms, as long as you were figuring out some way or other to do well enough on those.


I suppose I should have said "nominally want to learn" etc., but I think you are right: most students simply want the credential. I maintain that this is still a strange attitude, since at some point, some employer is going to ask you to do skilled work in exchange for money. If you can't do the work, you are not worth the money, credentials be damned. On the other hand, I routinely see unqualified people making a hash of things and nobody really seems to care. Maybe the trick is just not to be noticeably bad at your job. Still, this all strikes me as a bad way to live when learning and doing good work is both interesting and enjoyable.

