That sounds right, but it can be spectacularly wrong, because it presupposes that you can debug what the AI gets very confidently wrong.
There are three legs to the stool: specification, implementation, and verification. Implementation and verification both take low-level knowledge and a sophisticated understanding of how things break.
Indeed, even if it were possible, most of the time, to create any program just by directing a team of AI agents, when something does not work one needs the ability to zoom in through the abstraction levels and understand exactly what program is being executed. Knowing only how to write prompts becomes insufficient.
This is the same with compilers. Most of the time a programmer needs to know only the high-level language the program is written in. Nevertheless, when there is a subtle bug, or the desired performance cannot be reached, a programmer who also understands the processor's machine language has a great advantage: they can solve the bug or the performance problem, which without such knowledge would take far longer to solve, or would never be solved.
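One concrete flavor of this (my example, not the parent's): a bug that is invisible at the source level but obvious once you read the generated machine code, because the compiler legally exploits undefined behavior:

    // Signed overflow is undefined behavior in C++, so at -O2 a compiler
    // may compile this whole function to "return true", and the intended
    // overflow guard silently vanishes. Only looking at the emitted
    // assembly makes the "bug" comprehensible.
    bool check_no_overflow(int x) {
        return x + 1 > x;   // meant as an overflow check; optimized away
    }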
I don't think compilers are a good example. The economics of software development won out a long time ago. For example, in gamedev, with its well-known soft real-time requirements, people (mostly) stopped doing that machine-code dance many hardware generations ago. Same as what happened with memory optimizations: people measure memory in GB now, not in KB =)
I am sure programmers cherish every case where they get to do a micro-optimization, but in retrospect it is the high-level cuts that made the system fit the perf or memory budget.
Gamedev is actually a good example. True, handwritten assembly has gone out of style. But knowing how caches work, and how to lay out data to improve performance, is still important. And stuff like vector intrinsics also gets used.
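A minimal sketch of the data-layout point, in C++ (the names are illustrative, not from any particular engine): the struct-of-arrays version streams through contiguous memory, so a position-only pass touches far fewer cache lines and auto-vectorizes easily.

    #include <cstddef>
    #include <vector>

    // Array-of-structs: fields for one particle are interleaved, so a loop
    // that only updates positions still drags velocities and mass through
    // the cache with every particle it touches.
    struct ParticleAoS { float x, y, z, vx, vy, vz, mass; };

    // Struct-of-arrays: each field is contiguous, so a position pass reads
    // whole cache lines of useful data and vectorizes well.
    struct ParticlesSoA {
        std::vector<float> x, y, z, vx, vy, vz, mass;
    };

    void update_positions(ParticlesSoA& p, float dt) {
        for (std::size_t i = 0; i < p.x.size(); ++i) {
            p.x[i] += p.vx[i] * dt;
            p.y[i] += p.vy[i] * dt;
            p.z[i] += p.vz[i] * dt;
        }
    }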
1) luckily, compiler bugs surface very rarely nowadays, as the average programmer does not have the capability to solve such issues
2) unfortunately, LLMs, by their very nature (not having a model of what they do), are prone to introducing subtle bugs, i.e. it is like programming in a high-level language whose compiler likes to wing it
There is also a fair bit of demographics at play. Many of the people writing these little applications grew up and imprinted before open source was much of a thing.
By definition, a document that is written is historic, not prehistoric.
Prehistoric information could be preserved by an oral tradition, until it is recorded in some documents (like the Oral Histories at the Computer History Museum site).
Actually, this AI compute is not very useful for physics, protein folding, or many other high-performance computing workloads.
The problem is that the connectivity required for much of AI is very different from that required for classic HPC (more emphasis on bandwidth, less on super-low-latency, small-payload remote memory operations), and the numeric emphasis is very different too (lots of mixed precision, including ridiculously small formats like fp8, versus almost all fp64 with some fp32).
The result is that essentially no AI computers reach the high end of the TOP500.
The converse is also true: classic frontier-scale supercomputers don't make the most cost-effective AI training platforms, because they spend a lot of their budget on making HPC programs fast.
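To make the precision gap concrete, here is a rough C++ sketch (my own illustration, not anyone's production code) of rounding a float to an fp8-like E4M3 value, which keeps only 3 explicit mantissa bits; the format's exponent range and saturation are ignored, so this shows precision only:

    #include <cmath>
    #include <cstdio>

    // Round v to 4 significant binary digits (3 mantissa bits plus the
    // implicit leading bit), roughly what E4M3 keeps.
    float quantize_e4m3(float v) {
        if (v == 0.0f) return 0.0f;
        int e;
        float m = std::frexp(v, &e);   // v = m * 2^e, with m in [0.5, 1)
        return std::ldexp(std::nearbyint(std::ldexp(m, 4)), e - 4);
    }

    int main() {
        float x = 3.14159f;
        std::printf("fp32: %.5f  fp8-ish: %.5f\n", x, quantize_e4m3(x));
        // Prints "fp32: 3.14159  fp8-ish: 3.25000": roughly two significant
        // decimal digits survive. Fine for many NN weights, hopeless for
        // fp64 HPC kernels.
    }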