> I suspect we'll see more 'open spec' software, with actual source generated on-demand (or near to it) by models. Then all the security and governance will happen at the model layer.
So each time you roll the dice you gamble on getting a fresh set of 0-days? I don't get why anyone would want this.
You already do this with human-authored code, just slowly.
Project model capabilities out a few years. Even if you assume only linear improvement, at some point the risk-adjusted outcome lines cross and this becomes the preferred way of authoring code - code nobody but you ever sees.
Most enterprises already HATE adopting open source. They only do it because the economic benefit of free reuse has traditionally outweighed the risks.
If you need a parallel: we already do this today for JIT compilers. Everything is just getting pushed down a layer.
Next, you double click the Excel icon on the desktop, and instead of having Excel installed or a spec of Excel, you have a cloud service with thirty years of Usenet, Quora, StackOverflow, Reddit, PHPBB comments and blog tutorials about how people use Excel, and you wait a few moments while approximately-Excel is rederived from these experiences.
You’ll accept the delay because by then it happens faster than Microsoft can make a splashscreen and window open from a local nvme drive. And because you can customise Excel’s feature set by simply posting a Reddit comment where you hallucinate using a feature that Excel doesn’t have and waiting a couple of days.
[although it can be difficult to find the real Reddit to post on as your web browser will tend to synthesise the experience of visiting any website using a cloud AI model of every website without connecting to the real one at all. This was widely loved as a security measure and since most websites are AI written content on AI written codebases, makes less difference than you’d first think]
It's a mistake to confuse what you're seeing out of today's models with what you'll see out of future ones. We're barely out of the gate on this stuff. We'll borrow what works, and use it to bootstrap something better.
> You already do this with human-authored code, just slowly.
No, I don't. I build predictable, deterministic pipelines. If I rebuild from a specific git sha, I expect the same output. If I get something different, I need to find and fix whatever's causing that.
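A minimal sketch of the kind of check implied here: build the same pinned input twice and require byte-identical output. The "build" step is a stand-in (`gzip -n`, which strips the embedded name and timestamp so its output is reproducible); a real pipeline would substitute its actual build command after checking out the pinned sha.

```shell
set -e

# Stand-in for source checked out at a pinned git sha
printf 'source at a pinned sha\n' > src.txt

# Placeholder build step; -n omits the original name/timestamp,
# which is what makes the output byte-stable across runs
build() { gzip -nc src.txt > "$1"; }

build out1.gz
build out2.gz

h1=$(sha256sum out1.gz | cut -d' ' -f1)
h2=$(sha256sum out2.gz | cut -d' ' -f1)

if [ "$h1" = "$h2" ]; then
    echo "reproducible"
else
    echo "non-deterministic build" >&2
    exit 1
fi
```

The same gate works regardless of who (or what) authored the code being built: if two builds from the same sha ever diverge, the pipeline fails loudly instead of silently shipping a fresh roll of the dice.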
Nothing precludes you from doing that with AI-gen code vs human-gen code. What you just described is downstream.
If you have a human authoring code, you re-roll every time they release a new version. AI just releases versions faster, and in response to different, faster-moving inputs.