"But the economy" is an out-of-date framing. The cost of renewables has been plummeting for well over a decade. New renewables are now cheaper than new fossil fuel plants in most of the world, and in many regions they're already competitive with, or cheaper than, simply running existing fossil fuel infrastructure. As modern wars in Ukraine and now Iran are demonstrating, renewables are not only cost-effective but increasingly a matter of energy sovereignty and national security.
That's not to say we won't need treaties and supranational entities for some aspects of decarbonization. Methane emissions outside of agriculture are notably a problem of enforcement.
We're badly in need of a collective update to our priors regarding renewables. In the US, a hostile policy toward renewables isn't just shooting ourselves in the foot environmentally; we are now actively impoverishing ourselves to serve entrenched economic interests across the fossil fuel industry and the cultural inertia they deliberately cultivated.
A new US administration and Congress need to be voted in. There is one party that backs fossil fuel interests and denies anthropogenic climate change, and it's currently in charge. The American public didn't see that as an important enough issue in 2024.
They need a complete reworking of the government. The fact is that bum-fuck states with a handful of citizens can use their senate seats to hold the country hostage. Nothing will ever get better until that is resolved.
Gas turbines can run on a variety of fuels: natural gas, synthetic fuels, or a mixture of both. It's actually one of the reasons a turbine was chosen for the M1 Abrams.
Except for the industries where it does matter. Trivializing the needs of complex, energy-hungry supply chains is bad faith. They are one of the many reasons fossil fuels are so widely used.
It's not really bad faith when we could make enormous progress in an enormous number of industries, and this in no way stops any of that progress in those economies.
It's specifically bad faith to say it as if it does somehow matter in the grand conversation, when the actual fallout is extremely small. Pretty much nobody is saying we must remove 100 PER CENT OF ALL FOSSIL FUEL USAGE EVERYWHERE FOREVER, just that we need to move off it.
If we stop using fossil fuels for the >90% of usage where fossil fuels are easy to replace, it'll make it much easier & cheaper for the <10% where it's difficult.
Maybe this is a naive question, but why wouldn't there be market for this even for frontier models? If Anthropic wanted to burn Opus 4.6 into a chip, wouldn't there theoretically be a price point where this would lower inference costs for them?
Because we don't know if this would scale well to high-quality frontier models. If you need to manufacture dedicated hardware for each new model, that adds a lot of expense and causes a lot of e-waste once the next model releases. In contrast, even this current iteration seems like it would be fantastic for low-grade LLM work.
For example, searching a database of tens of millions of text files. Very little "intelligence" is required, but cost and speed are very important. If you want to know something specific on Wikipedia but don't want to figure out which article to search for, you can just have an LLM read the entire English Wikipedia (7,140,211 articles) and compile a report. Doing that would be prohibitively expensive and glacially slow with standard LLM providers, but Taalas could probably do it in a few minutes or even seconds, and it would probably be pretty cheap.
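The "scan everything" pattern described above can be sketched as a simple map over the corpus. This is a minimal illustration, not Taalas's actual API: `ask` stands in for a hypothetical client call to a fast, cheap inference endpoint, and the empty-string-means-irrelevant convention is an assumption made for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def scan_corpus(articles, ask, question, workers=32):
    """Ask the same question of every article; keep non-empty findings.

    `ask(question, text)` is a hypothetical client for a fast, cheap
    inference endpoint (e.g. a hardware-baked model); by convention
    here it returns "" when an article has nothing relevant.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves input order, so findings line up with articles.
        findings = pool.map(lambda text: ask(question, text), articles)
    return [f for f in findings if f]

# Usage with a stand-in "model" that just does substring matching:
hits = scan_corpus(
    ["Cats purr when content.", "Dogs bark at strangers."],
    ask=lambda q, text: text if "purr" in text else "",
    question="What do cats do?",
)
print(hits)  # ['Cats purr when content.']
```

The point of the sketch: the per-call work is embarrassingly parallel, so the total cost and wall-clock time are dominated by inference price and throughput, which is exactly what hardware-baked models would change.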
The demo was so fast it highlighted a UX component of LLMs I hadn’t considered before: there’s such a thing as too fast, at least in the chatbot context. The demo answered with a page of text so fast I had to scroll up every time to see where it started. It completely broke the illusion of conversation where I can usually interrupt if we’re headed in the wrong direction. At least in some contexts, it may become useful to artificially slow down the delivery of output or somehow tune it to the reader’s speed based on how quickly they reply. TTS probably does this naturally, but for text based interactions, still a thing to think about.
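The artificial-slowdown idea could be as simple as pacing a token stream against a target reading speed. A minimal sketch, assuming a plain token iterator; `chars_per_second` is a hypothetical tuning knob that a real chatbot might adapt to how quickly the user replies:

```python
import time

def paced_stream(tokens, chars_per_second=80.0, sleep=time.sleep):
    """Yield tokens no faster than a target reading speed.

    `sleep` is injectable so the pacing logic is testable without
    actually waiting.
    """
    for token in tokens:
        yield token
        # Pause in proportion to token length so perceived speed is even.
        sleep(len(token) / chars_per_second)

# Usage: record the pauses instead of actually sleeping.
pauses = []
text = "".join(paced_stream(["Hello, ", "world!"], chars_per_second=70.0,
                            sleep=pauses.append))
print(text)    # Hello, world!
print(pauses)  # per-token pauses in seconds
```

Yielding before sleeping means the first token still appears instantly, so the response feels responsive while the body scrolls in at a readable rate.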
I tend to agree; this has been my experience with LLM-powered coding, especially more recently with the advent of new harnesses for context management and planning. I've been building software for over ten years, so I feel comfortable looking under the hood, but lately it's been less of that and more talking with users, trying to understand and effectively shape the experience, which I guess means I'm being pushed toward product work.
I spent some very enjoyable time browsing the courses and tutorials in the Santa Fe Institute's Complexity Explorer![1]
I wish I had encountered complexity science earlier in life. It touches on so many of the questions that have sparked my imagination over the years; I'm so pleased to find such an accessible introduction.
The ending of the article left me feeling the author had an axe to grind. The mostly unspoken ideological background is that classical art is often appropriated by proponents of Western chauvinism to demonstrate their supposed innate cultural superiority. Poorly painted reconstructions undermine that image, but that doesn't mean it was done intentionally. I agree that a more neutral observer would have been interested in learning the thought process of those researchers.
> Poorly painted reconstructions undermine that image, but it does not mean this was done intentionally
If I'm understanding you right, you're suggesting the author thinks researchers are intentionally doing poor reconstructions to undermine public perception of classical art as part of some sort of culture war? I don't see anything in the article to suggest this.
> The enormous public interest generated by garish reconstructions is surely because of and not in spite of their ugliness. It is hard to believe that this is entirely accidental. One possibility is that the reconstructors are engaged in a kind of trolling.
It's towards the end of the article. He doesn't directly mention culture war stuff but he does talk about it being "iconoclastic." I think it's a reasonable interpretation of what he was saying.
I don't think it's reasonable. If there's context I'm missing and this guy has written about culture war stuff before, fair enough, but based on this article alone, I'm not seeing any indication of that.
That phrase suggests more that the author believes this is done for spectacle, knowing that it will attract attention to the researcher far more than a nice-looking painted statue would. Basically he seems to be accusing these researchers of doing flame-bait for clicks, like those kitchen-top meal TikTok videos designed to get engagement by making people angry.
Maybe my brain is oversaturated with culture war nonsense from too much doomscrolling but that’s where my train of thought went too, even if it wasn’t directly implied.
By claiming our ancient predecessors had terrible taste you can make them look like primitive fools, and make our own modernity appear superior in comparison.
When boiled down to culture war brainrot the poor coloring in the reconstructions becomes a woke statement that the brutish patriarchal empires of antiquity have nothing to teach our sophisticated modern selves and that new is good and old is bad. A progressive hit-piece on muh heritage.
Anything you don’t like is a purple haired marxist if you squint hard enough.
Idk why my brain went there. I’m guessing the years of daily exposure to engagement-farming ragebait had something to do with it.
Interesting. Like many people here, I've thought a great deal about what it means for LLMs to be trained on the whole available corpus of written text, but real-world conversation is a kind of dark matter of language as far as LLMs are concerned, isn't it? I imagine there is plenty of transcription in training data, but the total amount of language used in real conversation surely far exceeds any available written output, and it is qualitatively different in character.
This also makes me curious: to what degree does this phenomenon manifest when interacting with LLMs in languages other than English? Which languages have less tendency toward sycophantic confidence? More? Or does it exist at a layer abstracted from any particular language?