For my money, while surely it must have been jarring, that experience would seem to say that on-device LLMs are more important programming tools than package repositories.
As another commenter said, the affordability of LLM subscriptions (or, as others are predicting, the lack thereof) is the primary concern, not the technology itself stealing away your skills.
I am far from the definitive voice in the does-AI-use-corrupt-your-thinking conversation, and I don't want to be. Like the next person, I don't want LLMs to replace my thinking, but I also don't want to shun anything useful that can be gained from these tools.
All that said, I do feel that perhaps "dumber" LLMs that work on-device first will let us get further, and will prove to be better, more reliable tools overall.
Man, I completely agree with your thinking here. I've been trying to be more active in online communities, to try to discuss this exact idea.
LLM code can be leveraged, but pretending that tokens are just going to turn into money printers at some point is not productive. The primary source of software's value to an end user is the thought that was placed into it. Where does that go for the AI-natives? As you say, they are seemingly brute forcing software engineering, at least so far.
One thing I have been considering is how LLMs primarily change the "build vs buy" calculus for a fair number of software niches, particularly things like developer tooling and small libraries and packages. Partially due to a projected increase in supply chain attacks, and partially due to the changing standards of engineers. There's no longer anything stopping someone from working with an ugly or clunky syntax, presuming it's a well documented standard. So many "developer experience" tools are going to hit this - Tailwind primarily comes to mind.
It's a sort of "erosion" of niches in the current landscape - although to me this does not really work out for the worse in the long term, since again, the thinking in the process will need to just go somewhere else.
I feel the future includes the sentiments you describe. It was a little before my time professionally, but I grew up reading that kind of thinking.
I do think that the open web stuff, decentralized, or at least more decentralized than currently, is the path forward. I've been reading about the AT protocol and it recently becoming an official working group with the IETF.
I feel a second-order effect of making decentralized social networking easier is that individuals become more empowered to separate from what they don't believe in. The third-order effect is then building separate infrastructure entirely.
As sad as that can be - in my personal opinion it runs the risk of ending the "world wide" part of the web - it appears to be the only way society can avoid enriching the few beyond reason.
Well, redistributing their money is (in some cases disingenuously) exactly how they are able to pitch investors. "Sure, value my company at $10B and my shares make me $2B, but we're alllllll gonna make money when we hit AGI!!!" That kind of thing.
Sure, I understand why the people around them who benefit from it also want to do that.
My point is that it all only benefits a few people. Those people used to call themselves "kings", appointed by god. Now they are tech oligarchs. If the people realised that it was bad to have kings, maybe eventually they will realise that it is bad to have oligarchs?
ChatGPT 3.5 came out coming on 4 years ago now. I don't think a human generation (~20-30 years) needs to be the benchmark here; juniors who have been in the industry for a handful of years can be said to be a whole "generation". That's how I was reading OP.
Pretty sure the idea predates that lecture, it appears in Charles Stross' novel Accelerando from 2005 (which is based on short stories that were published years earlier).
That video is describing the generic concept of building a Dyson sphere from Mercury, but it lacks a proper account of energy requirements and waste heat removal. It also lacks a specific timeline.
No, turning it around doesn't work because it's cause and effect.
"Present the facts and all the people will cheer and support stop using coal" -> correct
"If not it means stopping using coal is a bad idea?" -> incorrect, because people are against switching to renewables because of lies, not because of facts
> incorrect, because people are against switching to renewables because of lies, not because of facts
Do you really think that if you presented the truth to a MAGA follower that believes climate change is a communist scam that they will just see reason and support your position just because you presented them with the facts?
If, despite my best attempts to communicate facts to them in our common language, they still cannot tell basic truth or fact apart from their subjective truth? I don't really see that as a problem for the speaker, but a problem for the listener.
There's no way the opposite expectation can scale as a society. Otherwise we would all need to constantly validate everyone else's words against primary source documents. There must be some level of trust in order to communicate efficiently, surely.