Nobody in their right mind builds a pipeline where security relies on a custom container runtime catching things after the fact. Security starts in CI, at the image build stage. If your flow actually lets a vulnerable Next.js build slip all the way through to deployment in Containarium, your integration process is fundamentally broken, not your runtime environment.
I agree CI should catch as much as possible — image scanning and dependency checks at build time are table stakes.
But in practice, CI is only a point-in-time guarantee. A build can pass all checks and still become vulnerable later as new CVEs are disclosed.
So the goal isn’t to rely on runtime to “catch mistakes”, but to add a second layer of defense — continuous monitoring and probing for already-deployed services.
If anything, this incident showed us that CI alone isn’t sufficient once systems are long-lived.
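That second layer can be sketched in a few lines. Everything below is a hypothetical stand-in: the advisory table plays the role of a real feed (e.g. the OSV database, polled on a schedule), and the inventory stands in for whatever list of deployed services the platform keeps. The point is only that the check runs against what is *already deployed*, not against what CI saw at build time.

```python
# Sketch of a periodic re-check of deployed services against a CVE feed.
# ADVISORIES is a hypothetical stand-in for a real advisory source;
# DEPLOYED stands in for the platform's service inventory.

# package -> list of (set of vulnerable versions, advisory id)
ADVISORIES = {
    "next": [({"13.4.0", "13.4.1"}, "CVE-XXXX-YYYY")],  # placeholder ID
}

DEPLOYED = [
    {"service": "storefront", "package": "next", "version": "13.4.1"},
    {"service": "admin", "package": "next", "version": "13.5.6"},
]

def newly_vulnerable(deployed, advisories):
    """Return deployments whose pinned version now matches a known advisory.

    This is exactly the check CI cannot make: the advisory may have been
    published long after the image passed its build-time scan.
    """
    findings = []
    for d in deployed:
        for versions, advisory_id in advisories.get(d["package"], []):
            if d["version"] in versions:
                findings.append((d["service"], d["package"], advisory_id))
    return findings
```

Run on a schedule, `newly_vulnerable(DEPLOYED, ADVISORIES)` flags `storefront` even though its image passed every check at build time, because the advisory landed afterwards.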
Reviewing generated code actually takes a higher skill level than writing it. A junior who prompted this Next.js app into existence is physically incapable of auditing the security of those imports. And for a senior it's often cheaper to just write it from scratch than to sit there and audit abstract spaghetti generated by Claude.
That's very true, but there's room for some nuance. Because of author inexperience, I wouldn't expect audits and reviews to be comprehensive, but I would expect the questioning to take place. In the case of imports, it doesn't take years of experience to verify that the versions added are the latest stable releases and to generally check out the release notes / issues. It's far from enough, but it's something.
Also agreed on the cheaper-to-write bit. Trying to redeem piles of slop into something workable is a fool's errand.
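Some of that basic questioning can even be mechanized. A minimal sketch, assuming a snapshot of latest stable versions is available (the `LATEST_STABLE` map below is hypothetical; a real check would ask the npm registry, e.g. via `npm view <pkg> version`): flag any dependency in a `package.json` that is unpinned or behind the latest stable release — roughly the level of check a junior reviewer can reasonably be asked to run.

```python
import json

# Hypothetical snapshot of latest stable versions; in practice this
# would come from the npm registry rather than a hard-coded map.
LATEST_STABLE = {"next": "14.2.3", "react": "18.3.1"}

def flag_dependencies(package_json_text, latest=LATEST_STABLE):
    """Flag deps that use a range specifier or lag the latest stable version."""
    deps = json.loads(package_json_text).get("dependencies", {})
    flags = []
    for name, spec in deps.items():
        if spec[:1] in ("^", "~", ">", "<", "*"):
            flags.append((name, spec, "unpinned range"))
        elif name in latest and spec != latest[name]:
            flags.append((name, spec, f"behind latest stable {latest[name]}"))
    return flags

manifest = '{"dependencies": {"next": "13.4.1", "react": "^18.2.0"}}'
# flags "next" as behind latest stable and "react" as an unpinned range
```

It proves nothing about the code itself, but it forces exactly the "is this version current, and why not?" question the comment above is asking for.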
I think both points are true in practice. Reviewing AI-generated code can require more experience than generating it, but at the same time some basic checks (dependency versions, release notes, etc.) are still worth doing.
One thing this incident reminded us of is that review is only a snapshot in time. Even if everything looks fine when a PR is merged, new CVEs can appear later and suddenly make previously safe dependencies vulnerable.
That’s why we started treating monitoring and vulnerability checks as part of the platform itself, not just the review process.
Whether the quality of the code is the responsibility of the submitter or not is kind of irrelevant though, because the cost of verifying that quality still falls on the maintainer. If every submitter could be trusted to do their due diligence then this cost would be less, but unfortunately they can't; it's human nature to take every possible shortcut.
The real invariant is responsibility: if you submit a patch, you own it. You should understand it, be able to defend the design choices, and maintain it if needed.
Ownership and responsibility are useless when a YouTuber tells their million followers that GitHub contributions are valued by companies and shows how to create a pull request with AI in three minutes, and you get a hundred low-value noise PRs opened by university students from the other side of the globe. It’s Hacktoberfest on steroids.
"You committed it, you own it" can't even be enforced effectively at large companies, given employee turnover and changes in team priorities and recorgs. It's hard to see how this could be done effectively in open source projects. Once the code is in there, end users will rely on it. Other code will rely on it. If the original author goes radio silent it still can't be ripped out.