Hacker News | hambes's comments

it is difficult for me to comprehend that someone spends all this time thinking through and calculating how to harness as much energy as possible and then wants to use it for large language models instead of something useful, like food production, communication, transport or any other way of satisfying actual human material needs. what weird priorities.

Whether you like it or not, we are burning a lot of electricity on datacenters. That is a fact. And energy consumption is likely going to significantly increase in the near future. If we can reduce that energy usage, that is a good thing and a big improvement.

I do not think I even understand your complaint. Different people can work on different problems. We do not have to pick only one.

> My improvement is more important than yours.

We can just do both.


We don’t do both. We spend trillions on AI.

Reducing consumption is just a case of using A) smaller models and B) not shoving AI into everything, e.g. ads, search results, email summaries

LLMs and other IT applications have the distinct advantage that they require no other raw materials as input, aside from initial setup, extension, and maintenance. Under these conditions the requirements essentially boil down to real estate and high bandwidth internet connections. Also, demand for AI is currently so high that the solution can be scaled up far enough to be viable.

All the other concerns require more subtle approaches because human requirements are much more messy.


demand for AI is not high, which is the current problem of the industry and the reason that AI companies are trying to shoehorn their technology into products everywhere.

these companies and the author of the article are trying to increase capacity for something that barely anyone wants in the software they use, which makes it all the more wasteful.


I agree, the author seems wildly optimistic that all that capacity will indeed be needed in the long run. But I personally hope that it will lead to a breakthrough of solar and battery storage even if demand for AI tanks. If that happens, one could still shunt all that solar energy to other places, either with alternators and overland lines, or by shipping charged grid scale batteries via train.

Tell that to the 1000-watt space heater in the corner that I tasked with upscaling some old home movies! Four GPUs worked very hard all night to get footage of my first dog up to 1080p. My living room is a little warm this morning.

Well, I've never seen anything written by AI evangelists that doesn't sound like it was written in day three of an adderall binge. This essay is no different.

Sometimes (often) solving the problem is the most fun part, regardless of how it’s used.

The scale of AI energy consumption is quite unique from what I've heard, and there's a lot of money flowing in that direction. So that seems to me a decent reason to think about it.

I haven't heard yet that food production is constrained by these kinds of things.

It appears to me that you're just taking a cheap jab at AI.


Exactly this, you need a (big) problem to motivate people to actually take a serious jab at a (big) new idea

I don’t think that’s a great description of what’s going on here. I think there are two things:

1. The actual thing the authors spend a lot of time thinking about seems to be more generally how to make good use of solar power for things that people find valuable – synthetic fuels, desalination, etc. – and the implications of the sun only shining some of the time – maybe you don't want to pay more for more efficient systems, as then you'd want steady power, which is more expensive.

2. I think the blog post is a bit of a response to lots of public discussion about AI data centres. IMO it seems better to see what someone who thinks a lot about energy has to say than, e.g., a government suggestion that you delete old pictures to reduce water consumption.


I share the reaction, but I'm also aware how easy it is to incentivize (aka subsidize) ineffective old processes in the name of "productive" priorities. The problem is not LLMs/DCs; the problem is that food production, transport and communications are not sexy in "post-scarcity" (entitled/distracted) societies. People take too many things for granted.

the saying goes something like: the brightest minds in the world are getting together to figure out how to deliver more ads

>instead of something useful, like food production, communication, transport or any other way of satisfying actual human material needs. what weird priorities.

You realize that even pre-AI, this complaint would still hold for most of tech? Adtech, enterprise SaaS, and B2C apps are hardly "actual human material needs". Even excluding tech, the next lucrative sector would be banking, and the same complaint would be applicable. In other words, this is a decades (centuries?) old complaint, repackaged for the current thing.


yes, i do realize that. thank you for expanding on my point.

if anything we are producing too much food

and what communications do you find lacking?


Food distribution is still a problem in vast part of the world.

Handling food waste is another issue.

Climate-related shortages are coming soon for us (at the moment they only manifest as isolated price hikes - mustard a few years ago, coffee and chocolate more recently, etc.).

https://www.euronews.com/green/2025/02/13/goodbye-gouda-and-...

https://www.fao.org/newsroom/detail/adverse-climatic-conditi...

https://www.forbes.com/sites/noelfletcher/2024/11/03/how-cli...

I don't know if the electricity going into compute centers could be put to better use, to help alleviate climate change impacts, or to create more resilient and distributed supply chains, etc...

But I would not say that this is "not a problem", or that it's completely obvious that allocating those resources instead to improving chatbots is smart.

I understand why we allocate resources to improving chatbots - first world consumers are using them, and the stock markets assume this usage is soon going to be monetized. So it's not that different from "using electricity to build radios / movie theaters / TVs / 3D gaming cards, etc... instead of desalinating water / pulling CO2 out of the air / transporting beans, etc..."

But at least Nvidia did not have the gall to claim that using electricity to play Quake in higher res would solve world hunger, as some people claim:

https://www.forbes.com/sites/johnwerner/2024/05/03/sam-altma...


It feels like you didn’t read your own link as he somewhat addressed your concern directly. The idea is simply that AI investment is an “up front cost” to future improvements. To debate against it you would have to provably explain why you think AI will not advance other technologies whatsoever.

I usually don't try to prove things won't happen. I leave the burden of proof to the salesmen. In this case, they have extraordinary claims, so as the saying goes, I wait for extraordinary proof.

So far they have failed to convince me.


the main bottleneck for the civilization in communications currently is the sparsity of cynical, negative HN comments

nerds' favorite pastime is to go "um, actually"

I've been doing a similar thing using GhostSCAD[1], which is a relatively thin wrapper around OpenSCAD in Go. Not as typesafe, but my language of choice.

[1]: https://github.com/ljanyst/ghostscad


Note that a while back Python support was added to a soft-fork of OpenSCAD:

https://pythonscad.org/


is that first sentence entirely broken or am i having a temporary lapse in cognition?


I went down a short bunny trail trying to figure it out.

https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F

> Nagel asserts that "an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism."


Nah, it says imagine that there is nothing that exists that is similar to being chatgpt.

That is, that chatgpt cannot _be_ because if it could there would in fact _be_ something that is like _being_ chatgpt.

Imagine we could prove that there is nothing it is like to be ChatGPT

You could rephrase it as "Imagine that we could prove that there is no existence equal to the existence of chatgpt"


> The analogical form of the English expression "what it is like" is misleading. It does not mean "what (in our experience) it resembles," but rather "how it is for the subject himself."

Nagel "What is it like to be a bat?"


I think it is not broken, it’s just worded in a way that feels broken. But it sure does look weird.


To add to the very short "validating the result" section, let me recommend `git range-diff`.

Range-diff takes two commit ranges and compares their commits pairwise, which is perfect for rebases, since after the rebase all commits still exist and should be mostly identical, just at some other place in the history.

Use it like `git range-diff main..origin/mybranch main..mybranch` to compare the local, rebased branch with the upstream branch.

This lets you easily verify that either nothing changed or that any conflicts were resolved well.
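
For illustration, the output lines up old and new commits side by side with a status column (the hashes and subjects below are made up): `=` means the commit is unchanged, `!` means its diff changed, and `<`/`>` mark commits that only exist in one of the two ranges.

    1:  4de3f22 =  1:  9a1b2c3 Fix typo in parser docs
    2:  89ab012 !  2:  7c6d5e4 Refactor tokenizer (conflict resolved here)
    3:  aa11bb2 <  -:  ------- WIP debug commit (dropped during the rebase)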


sure, let's build more energy sources with finite fuel supply and negative environmental impact while there are better options available <.<


Why would I need claude code for remote programming, if I could just use ssh and tmux?
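
Something like the following (host name is just a placeholder) already gives a persistent remote session that survives disconnects and reattaches on the next login:

    ssh mydevbox -t tmux new -A -s main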


As someone who is not deep into linux desktop history: Can you please elaborate on the missing accessibility features in wayland or direct me to resources on that?

I've been using wayland for a while now and am very happy with it, but my accessibility needs are pretty basic.



thank you!


Maybe, but that is a different issue.

The use of generative AI for art is being rightfully criticised because it steals from artists. Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.

The quality suffers in both cases and I would personally criticise generative AI in source code as well, but the ethical argument is only against profiting from artists' work without their consent.


> rightfully criticised because it steals from artists. Generative AI for source code learns from developers

The double standard here is too much. Notice how one is stealing while the other is learning from? How are diffusion models not "learning from all the previous art"? It's literally the same concept. The art generated is not a 1-1 copy in any way.


IMO, this is key to the issue, learning != stealing. I think it should be acceptable for AI to learn and produce, but not to learn and copy. If end assets infringe on copyright, that should be dealt with the same whether human- or AI-produced. The quality of the results is another issue.


> I think it should be acceptable for AI to learn and produce, but not to learn and copy.

Ok but that's just a training issue then. Have model A be trained on human input. Have model A generate synthetic training data for model B. Ensure the prompts used to train B are not part of A's training data. Voila, model B has learned to produce rather than copy.

Many state of the art LLMs are trained in such a two-step way since they are very sensitive to low-quality training data.


> The art generated is not a 1-1 copy in any way.

Yeah right. AI art models can be and have been used to basically copy any artist's style, in ways that make the original artist's hard work and effort in honing their craft irrelevant.

Who profits? Some tech company.

Who loses? The artists who now have to compete with an impossibly cheap copy of their own work.

This is theft at a massive scale. We are forcing countless artists whose work was stolen from them to compete with a model trained on their art without their consent and are paying them NOTHING for it. Just because it is impressive doesn’t make it ok.

Shame on any tech person who is okay with this.


Copying a style isn’t theft, full stop. You can’t copyright style. As an individual, you wouldn’t be liable for producing a work of art that is similar in style to someone else’s, and there is an enormous number of artists today whose livelihood would be in jeopardy if that was the case.

Concerns about the livelihood of artists or the accumulation of wealth by large tech megacorporations are valid but aren’t rooted in AI. They are rooted in capitalism. Fighting against AI as a technology is foolish. It won’t work, and even if you had a magic wand to make it disappear, the underlying problem remains.


It's almost like some of these people have never seen artists work before. Taping up photos and cutouts of things that inspire them before starting on a project. This is especially true of concept artists who are trying to do unique things while sticking to a particular theme. It's like going to Etsy for ideas for projects you want to work on. It's not cheating. It's inspiration.


It's a double standard because it's apples and oranges.

Code is an abstract way of soldering cables in the correct way so the machine does a thing.

Art eludes definition while asking questions about what it means to be human.


I love that in these discussions every piece of art is always high art and some comment on the human condition, never just grunt-work filler, or some crappy display ad.

Code can be artisanal and beautiful, or it can be plumbing. The same is true for art assets.


Exactly! Europa Universalis is a work of art, and I couldn't care less if the horse that you can get as one of your rulers is AI-generated or not. The art is in the fact that you can get a horse as your ruler.


In this case it's this amazing texture of newspapers on a pole: https://rl.bloat.cat/preview/pre/bn8bzvzd80ye1.jpeg?width=16... Definitely some high art there.


I agree, computer graphics and art were sloppified, copied and corporatized way before AI, so pulling a Casablanca ("I'm shocked, shocked to find that AI is going on in here!") is just hypocritical and quite annoying.


Yeah this was probably for like a stone texture or something. It "eludes definition while asking questions about what it means to be human".


That's a fun framing. Let me try using it to define art.

Art is an abstract way of manipulating aesthetics so that the person feels or thinks a thing.

Doesn't sound very elusive nor wrong to me, while remaining remarkably similar to your coding definition.

> while asking questions about what it means to be human

I'd argue that's more Philosophy's territory. Art only really goes there to the extent coding does with creativity, which is to say

> the machine does a thing

to the extent a programmer has to first invent this thing. It's a bit like saying my body is a machine that exists to consume water and expel piss. It's not wrong, just you know, proportions and timing.

This isn't to say I classify coding and art as the same thing either. I think one can even say that it is because art speaks to the person while code speaks to the machine, that people are so much more uppity about it. Doesn't really hit the same as the way you framed this though, does it?


Are you telling me that, for example, rock texture used in a wall is "asking questions about what it means to be human"?

If some creator with intentionality uses an AI generated rock texture in a scene where dialogue, events, characters and angles interact to tell a story, the work does not ask questions about what it means to be human anymore because the rock texture was not made by him?

And in the same vein, all code is soldering cables so the machine does a thing? Intentionality of game mechanics represented in code, the technical bits to adhere to or work around technical constraints, none of it matters?

Your argument was so bad that it made me reflexively defend Gen AI, a technology that for multiple reasons I think is extremely damaging. Bad rationale is still bad rationale though.


The images Clair Obscur generated hardly "elude definition while asking questions about what it means to be human."

The game is art according to that definition while the individual assets in it are not.


> Art eludes definition while asking questions about what it means to be human.

All art? Those CDs full of clip art from the 90's? The stock assets in Unity? The icons on your computer screen? The designs on your wrapping paper? Some art surely does "[elude] definition while asking questions about what it means to be human", and some is the same uninspired filler that humans have been producing ever since the first teenagers realized they could draw penis graffiti. And everything else is somewhere in between.


You're just someone who can't see the beauty of an elegant algorithm.


Speak for yourself.

I consider some code I write art.


The obfuscated C competition is definitely art


I really don't agree with this argument because copying and learning are so distinct. If I write in a famous author's style and try to pass my work off as theirs, everyone agrees that's unethical. But if I just read a lot of their work and get a sense of what works and doesn't in fiction, then use that learning to write fiction in the same genre, everyone agrees that my learning from a better author is fair game. Pretty sure that's the case even if my work cuts into their sales despite being inferior.

The argument seems to be that it's different when the learner is a machine rather than a human, and I can sort of see the 'if everyone did it' argument for making that distinction. But even if we take for granted that a human should be allowed to learn from prior art and a machine shouldn't, this just guarantees an arms race for machines better impersonating humans, and that also ends in a terrible place if everyone does it.

If there's an aspect I haven't considered here I'd certainly welcome some food for thought. I am getting seriously exasperated at the ratio of pathos to logos and ethos on this subject and would really welcome seeing some appeals to logic or ethics, even if they disagree with my position.


> Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.

I always believed GPL allowed LLM training, but only if the counterparty fulfills its conditions: attribution (even if not for every output, at least as part of the training set) and virality (the resulting weights and inference/training code should be released freely under GPL, or maybe even the outputs). I have not seen any AI company take any steps to fulfill these conditions to legally use my work.

The profiteering alone would be a sufficient harm, but it's the replacement rhetoric that adds insult to injury.


This cuts to the bone of it tbqh. One large wing of the upset over gen AI is the _unconsenting, unlicensed, uncredited, and uncompensated_ use of assets to make "you can't steal a style" a newly false statement.

There are artists who would (and have) happily consented, licensed, and been compensated and credited for training. If that's what LLM trainers had led with when they went commercial, if anything a sector of the creative industry would've at least considered it. But companies led with mass training for profit without giving back until they were caught being sloppy (in the previous usage of "slop").


No, the only difference is that image generators are a much fuller replacement for "artists" than for programmers currently. The use of quotation marks was not meant to be derogatory, I'm sure many of them are good artists, but what they were mostly commissioned for was not art - it was backgrounds for websites, headers for TOS updates, illustrations for ads... There was a lot more money in this type of work, the same way as there is a lot more money in writing react sites, or scripts to integrate active directory logins into some ancient inventory management system, than in developing new elegant algorithms.

But code is complicated, and hallucinations lead to bugs and security vulnerabilities so it's prudent to have programmers check it before submitting to production. An image is an image. It may not be as nice as a human drawn one, but for most cases it doesn't matter anyway.

The AI "stole" or "learned" in both cases. It's just that one side is feeling a lot more financial hardship as the result.


Finally a good point in this thread.

There is a problem with negative incentives, I think. The more generative AI is used and relied upon to create images (to limit the argument to image generation), the less incentive there is for humans to put in the effort to learn how to create images themselves.

But generative AI is a dead end. It can only generate things based on what already exists, remixing its training data. It cannot come up with anything truly new.

I think this may be the only piece of technology humans created that halts human progress instead of being something that facilitates further progress. A dead end.


I feel like these exact same arguments were made with regard to tools like Photoshop and Dreamweaver. It turns out we can still build websites and artists can still do artist things. Lowering the bar for entry allows a TON of people to participate in things that they couldn't before, but I haven't seen that it kills curiosity in the folks who are naturally curious about things. Those folks will still be around taking things apart to see how they work.


> Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.

As far as I'm concerned, not at all. FOSS code that I have written is not intended to enrich LLM companies and make developers of closed source competition more effective. The legal situation is not clear yet.


To me, if the AI is trained on GPLv3/AGPL code, any code it generates should be GPLv3/AGPL too; the licence seems clear imho.


FOSS code is the backbone of many closed source for-profit companies. The license allows you to use FOSS tools and Linux, for instance, to build fully proprietary software.


Well, if it's GPL you are supposed to provide the source code to any binaries you ship. So if you fed GPL code into your model, the output of it should also be considered GPL licensed, with all implications.


Sure, that usage is allowed by the license. The license does not allow copying the code (edit: into your closed-source product). LLMs are somewhere in between.


"Mostly" is doing some heavy lifting there. Even if you don't see a problem with reams of copyleft code being ingested, you're not seeing the connection? Trusting the companies that happily pirated as many books as they could pull from Anna's Archive and as much art as they could slurp from DeviantArt, pixiv, and imageboards? The GP had the insight that this doesn't get called out when it's hidden, but that's the whole point. Laundering of other people's work at such a scale that it feels inevitable or impossible to stop is the tacit goal of the AI industry. We don't need to trip over ourselves glorifying the 'business model' of rampant illegality in the name of monopoly before regulations can catch up.


I'm not sure how valid it is to view artwork differently than source code for this purpose.

1. There is tons of public domain or similarly licensed artwork to learn from, so there's no reason a generative AI for art needs to have been trained on disallowed content any more than a code generating one.

2. I have no doubt that there exist both source code AIs that have been trained on code that had licenses disallowing such use and art AIs that have been trained only on art that allows such use. So, it feels flawed to just assume that AI code generation is in the clear and AI art is in the wrong.


> The use of generative AI for art is being rightfully criticised because it steals from artists. Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.

This reasoning is invalid. If AI is doing nothing but simply "learning from" like a human, then there is no "stealing from artists" either. A person is allowed to learn from copyright content and create works that draw from that learning. So if the AI is also just learning from things, then it is not stealing from artists.

On the other hand if you claim that it is not just learning but creating derivative works based on the art (thereby "stealing" from them), then you can't say that it is not creating derivative works of the code it ingests either. And many open source licenses do not allow distribution of derivative works without condition.


Everyone in this thread keeps treating human learning and art the same as clearly automated statistical processes with massive tech backing.

Analogy: the common area had grass for grazing which local animals could freely use. Therefore, it's no problem that megacorp has come along and created a massive machine which cuts down all the trees and grass which they then sell to local farmers. After all, those resources were free, the end product is the same, and their machine is "grazing" just like the animals. Clearly animals graze, and their new "gazelle 3000" should have the same rights to the common grazing area -- regardless of what happens to the other animals.


I'm not sure why you are replying to me. I made no such treatment of them.

The analogy isn't really helpful either. It's trivially obvious that they are different things without the analogy, and the details of how they are different are far too complex for it to help with.


Isn't this expected of late stage capitalism?


Isn't what to be expected? And define late stage capitalism.


Most OSS licenses require attribution, so AI for code generation violates licenses the same way AI for image generation does. If one is illegal or unethical, then the other would be too.


I've always thought it was weird how artists are somehow separate and special in the creation process. Sometimes to the point of getting royalties per copy sold which is basically unheard of for your meager code monkey.


Is there an OSS licence that excludes LLMs?


I'm not sure about licenses that explicitly forbid LLM use -- although you could always modify a license to require this! -- but GPL licensed projects require that you also make the software you create open source.

I'm not sure that LLMs respect that restriction (since they generally don't attribute their code).

I'm not even really sure if that clause would apply to LLM generated code, though I'd imagine that it should.


Very likely no license can restrict it, since learning is not covered under copyright. Even if you could restrict it, you couldn't add a "no LLMs" clause without violating the free software principles or the OSI definition, since you cannot discriminate in your license.


"Learning" is what humans can do. LLMs can't do that.


“Learning” as a concept is too ill defined to use as a distinction. What is learning? How is what a human does different from what an LLM does?

In the end it doesn’t matter. Here “learning” means observing an existing work and using it to produce something that is not a copy.


They don't require it if you don't include OSS artifacts/code in your shipped product. You can use gcc to build closed source software.


> You can use gcc to build closed source software

Note that this tends to require specific license exemptions. In particular, GCC links various pieces of functionality into your program that would normally trigger the GPL to apply to the whole program, and for this reason, those components had to be placed under the "GCC Runtime Library Exception"[1]

[1]: https://www.gnu.org/licenses/gcc-exception-3.1.html


those that require attribution

so... all of them


> The quality suffers in both cases

According to your omnivision?


That's awesome. I like the explicit nature of go and usually the verbosity is worth the benefits. But finding ways to improve upon it without losing the explicitness is great.


Because when Ferret7446 says "neutral", they mean "anything that doesn't harm them". Centrism is a lie.

