Since security exploits can now be found by spending tokens, open source is MORE valuable because open source libraries can share that auditing budget while closed source software has to find all the exploits themselves in private.
> If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
An app like Cal.com can be vibe coded in a few evenings with a Chrome MCP server pointed at their website to figure out all the nooks and crannies. The moat of Cal.com is not the code, it's the users who don't want to migrate.
The real answer is that they are likely having a hard time converting people to paid plans.
Exactly, that's why most SaaS companies are in a very tough position.
You have to bring value that goes beyond the source code and hosting, otherwise your clients are going to vibe code a custom solution instead of paying you.
> otherwise your clients are going to vibe code a custom solution instead of paying you.
How many things do you want to be responsible for? How many vibe coded projects do you want to maintain?
I think this line of reasoning is overblown. Just because you can doesn't mean a significant number of people will. I think the 3D printer comparison is apt.
Same story as always: writing the code is the easy part. Requirements gathering, analysis, consensus, direction, those are all the hard parts. Enterprises have a business to run and don’t want to run a software shop on top of everything else.
The story is usually that businesses don't want to commit to indefinitely expending their limited efforts maintaining software which isn't part of the company's core competencies. Most of the cost and effort of software happens after the first release is delivered.
> Enterprises have a business to run and don’t want to run a software shop on top of everything else.
It sounds like you mostly understand here. The biggest part of "running a software shop" they want to avoid is responsibility for support, bugs, fires, ongoing maintenance, and legal issues of post-release software.
Dave's Pizza around the corner doesn't make a social media app, not because Dave can't figure it out, not because he can't vibe code one, not because he can't contract someone to do it, but because running a social media site isn't a core competency of Dave's Pizza. Instead, Dave uses existing social media sites, and focuses his efforts and passions on making pizza.
So I work in enterprise tech consulting; my current project is with a large, global chemicals company (it wouldn't be right to call out my client by name). This client is extremely competent, from their multiple enterprise architects down to their analysts; they're a pleasure to work with. One of the business requirements could be met by a very simple in-house developed and hosted API, and it's a perfect use case for GenAI-assisted coding too. There's no magic; it's a problem solved over and over already. However, they don't want to touch in-house dev with a ten-foot pole for the reasons we're both talking about. They don't want to support it, extend it, back it up, monitor it, and all the other things that have to happen after the code is done. They're perfectly happy to buy licenses from a SaaS so if anything goes wrong they can tell the CTO "it's not me, it's them". And when the CTO says "why doesn't it do this too!?!" they can say "I'll call our rep and ask".
SaaS value to an enterprise is more than just the functionality provided, and I think that is lost on a lot of the heads-down software devs here.
They are, and always have. Looking over "software engineer" roles in my local area, I see folks at companies in a variety of industries: finance, logistics, health care, and the local power utility, all well outside the software industry.
Most enterprise companies don't develop everything in house, but usually do have a varied mix of in-house infrastructure, IaaS and PaaS solutions, and SaaS products. Large organizations across varied industries often have multiple internal dev teams, and the availability of increasingly sophisticated AI tools is going to enable the same teams to be effective at more, and more complex, projects. AI will definitely start shifting make-or-buy decisions, especially for mature, commodity use cases, to 'make'.
I don't think it's much cheaper. Writing some code to do some CRUD has always been easy. Getting to a proof of concept is definitely quicker. But creating something that can be relied upon in production? That's as difficult and time consuming as it has ever been.
Yup. I've explained it as: okay, some software is free as in beer and other software is free as in speech. DIY software is free as in yacht.
It sounds nice, but now you have something that takes an enormous amount of time and effort to use and maintain, plus you need to have someone with the skills to run it.
They won’t, because specialization is a key aspect of capitalism.
This is why companies outsource anything. Google, Inc. is big enough to own farms and ranches to grow the food eaten in its cafeterias. They could make trucks to transport that food. They could operate factories to make cutlery, etc. Why do they instead choose to pay layers of margins to layers of middlemen?
Absurd example? How about Apple? They outsource production of their chips, instead of capturing the margin they are currently gifting to their partners. Why?
Delta Airlines doesn’t operate oil fields or even refineries even though a major cost of their operations is jet fuel. Why?
Once you can reason through these very simple examples, you will understand why enterprises are unlikely to walk away from SaaS.
s/Delta/United/ or s/Delta/Southwest/ or s/Delta/Lufthansa/. Or if you prefer, s/refinery/oilfield/, or s/refinery/pipeline/. Or even s/refinery/farm/ because Delta also buys food in vast quantities (I would not be surprised to find they have interests in ag producers that offset a small % of their food purchases, which does not diminish the argument).
Delta also does not make airplanes, jet engines, seats, radios, GPS, glass, or even wires. They don't distill the spirits they serve on their flights. They don't own and operate a satellite Internet capability. They don't even make movies for in-flight entertainment.
The point is that Delta, like most successful firms, outsources key aspects of core service delivery.
The second article you linked says plainly that the refinery is an offset/hedge. QED Delta still outsources the vast majority of its fuel costs. (They could, for example, own large swathes of the Permian and do E&P as well. They choose to leave that to others.)
Vertical integration has been a common practice in industry for 150 years.
Yes, very few firms fully control their upstream supply chains, but very few conversely produce nothing but their core market offering in-house. Most companies are somewhere in between, doing some things in-house, and obtaining other things from vendors.
Most large firms have in-house software dev teams responsible for at least some portion of their development work. I know software engineers locally working, variously, at banks, pet supply distributors, power companies, soft drink bottlers, and many other non-tech industries. And AI can and will extend these teams' capacity to internally manage larger segments of their companies' tech stacks.
> How many vibe coded projects do you want to maintain?
Here comes the next SaaS idea: vibe coded services as a service. You tell it what service you want, maybe point out a couple of examples, and you get that service vibe coded and hosted for you for a small monthly fee!
I agree with the other poster that mentioned this is likely a publicity stunt, but all it's really showing is that VC is still incredibly stupid with their money. All the more reason to seize it from them and then properly fund useful software, not subsidize vanity projects for Stanford grads.
About the friction, not the capabilities...I haven't switched off my biz calendar/appointment provider I'm paying for even though I've kinda outgrown it.
Email is actually an excellent example of something with network dependence. Changing email providers requires that you change your email address too (unless you own and use your own domain). An address change causes friction from having to update the network of contacts and services which used your old email address.
For many use cases, maintenance doesn't matter. At this point, using LLMs to one-shot a tool/service for a single use or time-limited use case is becoming more appealing than signing up with some vendor, even for free.
Maybe try creating one and see how much effort and time is required to clone such functionality to a proper working state! Something for personal use can be created in about 5-10 days, but even then the skill required, plus the amount of tokens to burn, hosting, security, etc., will easily kill it. This is exactly the thought process of many, but it will surely kill many open source contributors. I've stopped committing anything to any open source repos as a personal choice. I do not want to train an LLM which will eventually create more slop and headaches, since for me time is the only factor which holds the maximum value! Nothing else!
At risk of self promotion, I think more people should adopt something like the Ship of Theseus license (https://github.com/tilework-tech/nori-skillsets/pull/465/cha...). It's not obvious if this will patch the clean room hole in licensing, but I'd rather see it play out in court than assume opensource is just fully dead
I am incredibly skeptical that license is legally meaningful. (but obligatory IANAL.)
Generally speaking it is very, very difficult to have a license redefine legal terms. Either this Theseus copy is legally a derivative work or it isn't, and the text of a license is going to do at most very little to change that.
> It's not obvious if this will patch the clean room hole in licensing, but I'd rather see it play out in court than assume opensource is just fully dead
IANAL, but I don't think there is any "clean room hole in licensing": licensing is downstream of copyright law, and clean-room reverse engineering, if done properly, results in products that do not infringe the copyright of the originating work to begin with, so the license therefore never applies to them.
The "Ship of Theseus" license you've linked to attempts to define for itself what constitutes a derivative work, but what is and is not a derivative work is determined by copyright law itself, and there's no concept of imposing licensing conditions on works that your copyright never extended to in the first place.
Simply put, if something isn't infringing your copyright under the criteria established by the law, then your permission was never needed to do it in the first place, so the conditions under which you would or would not be willing to offer that permission are irrelevant.
If someone spends years using your software and they have learned a mental model of how your software works, they can build an exact replica and there is nothing you can do about that since there is no copy you can sue over. Said user is also allowed to use AI tools to aid in the process.
What you want is an EULA, which is a contract users explicitly have to agree with. A license file only grants access or the right to copy, it doesn't affect usage of your software.
"AI slop is rapidly destroying the WWW; most of the content is becoming lower and lower quality, and it's difficult to tell if it's true or hallucinated. Pre-AI web content is now more like the gold standard in terms of correctness; browsing the Internet Archive is much better. This will only cause content to go behind paywalls. A lot of open source projects will go closed source, not only because of the increased work maintainers have to do to review and also audit patches for potential AI hallucinations, but also because their work is being used to train LLMs and re-licensed as proprietary."
Replace AI with "open source and Linux", and "open source" with "Windows" in the statements. That's what Microsoft's PR team would have said about open source and Linux about 20 years back in the 2000s.
After the unsuccessful FUD era, Microsoft is now embracing Linux, running it alongside Windows via WSL to combat macOS's Unix-like popularity, and because of Linux and open source dominance in the cloud OS demographic.
Even worse, in that Microsoft's FUD was mostly right. The joke about Open Source being communism played out straight - FOSS pretty much destroyed the ability to make money on software products, accelerating transition to SaaS models where you can carefully seek rent from the shelter of your secure company servers (later, cloud), and that is in large part responsible for modern surveillance economy - as it turns out, some SaaS segments decayed to "free with ads", where - much like with OSS and locally-run software - you cannot compete on price with free.
This conclusion makes more sense to me, but maybe I'm too naive.
The media momentum of this threat really came with Mythos, which was like 2 or 3 weeks ago? That seems like a fairly short time to pivot your core principles like that. It sounds to me like they wanted to do this for other business related reasons, but now found an excuse they can sell to the public.
It also means that you need to extract enough value to cover the cost of said tokens, or reduce the economic benefit of finding exploits.
Reducing economic benefit largely comes down to reducing distribution (breadth) and reducing system privilege (depth).
One way to reduce distribution is to raise the price.
Another is to make a worse product.
Naturally, less valuable software is not a desirable outcome. So either you reduce the cost of keeping open (by making closed), or increase the price to cover the cost of keeping open (which, again, also decreases distribution).
The economics of software are going to massively reconfigure in the coming years, open source most of all.
I suspect we'll see more 'open spec' software, with actual source generated on-demand (or near to it) by models. Then all the security and governance will happen at the model layer.
> I suspect we'll see more 'open spec' software, with actual source generated on-demand (or near to it) by models. Then all the security and governance will happen at the model layer.
So each time you roll the dice you gamble on getting a fresh set of 0-days? I don't get why anyone would want this.
You already do this with human-authored code, just slowly.
Project model capabilities out a few years. Even if you only assume linear improvement at some point your risk-adjusted outcome lines cross each other and this becomes the preferred way of authoring code - code nobody but you ever sees.
Most enterprises already HATE adopting open source. They only do it because the economic benefit of free reuse has traditionally outweighed the risks.
If you need a parallel: we already do this today for JIT compilers. Everything is just getting pushed down a layer.
Next, you double click the Excel icon on the desktop, and instead of having Excel installed or a spec of Excel, you have a cloud service with thirty years of Usenet, Quora, StackOverflow, Reddit, PHPBB comments and blog tutorials about how people use Excel, and you wait a few moments while approximately-Excel is rederived from these experiences.
You’ll accept the delay because by then it happens faster than Microsoft can make a splashscreen and window open from a local nvme drive. And because you can customise Excel’s feature set by simply posting a Reddit comment where you hallucinate using a feature that Excel doesn’t have and waiting a couple of days.
[although it can be difficult to find the real Reddit to post on as your web browser will tend to synthesise the experience of visiting any website using a cloud AI model of every website without connecting to the real one at all. This was widely loved as a security measure and since most websites are AI written content on AI written codebases, makes less difference than you’d first think]
It's a mistake to confuse what you're seeing out of today's models with what you'll see out of future ones. We're barely out of the gate on this stuff. We'll borrow what works, and use it to bootstrap something better.
> You already do this with human-authored code, just slowly.
No I don't. I build predictable and deterministic pipelines. If I rebuild from a specific git sha, I expect the same output. If I get something different, I need to fix what's causing that.
Nothing precludes you from doing that with AI-gen code vs human-gen code. What you just described is downstream.
If you have a human authoring code, you re-roll every time they release a new version. AI just releases versions faster, and in response to different, faster-moving inputs.
> to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
That can't be right, can it? Given stable software, the relative attack surface keeps shrinking. Mythos does not produce exploits indefinitely. Should be the defender's advantage, token-wise, no?
One slight factor differentiating security systems works to the defenders' advantage here: attackers have to find a whole exploit chain, while defenders only need to fix one link of it.
The point is that, as the defender, you only have to find each hole once, while the attacker can spend an infinite amount of tokens trying to find more holes, that are increasingly harder to find and might, eventually, not exist at all. The defender can do that too, of course, but being in the defense, there is value in not being able to uncover new holes (your system keeps working, ostensibly) while as the attacker that's simply how you fail.
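To make that asymmetry concrete, here's a toy model (entirely my own assumptions, not anything from Mythos or anyone upthread): each bug has some chance of being discovered per token spent, with exponentially diminishing returns, and the attacker only profits from bugs the defender hasn't already found and fixed.

```python
import math

def discovery_prob(tokens, rate=0.01):
    # Chance that a given bug has been found after `tokens` of search,
    # assuming exponentially diminishing returns on spend.
    return 1 - math.exp(-rate * tokens)

TOTAL_BUGS = 100        # hypothetical latent bugs in the system
ATTACKER_TOKENS = 200   # attacker's fixed search budget

for defender_tokens in (0, 100, 300, 600):
    p_defender = discovery_prob(defender_tokens)
    p_attacker = discovery_prob(ATTACKER_TOKENS)
    # A bug is exploitable only if the attacker finds it AND the
    # defender missed it (bugs the defender finds are assumed fixed).
    exploitable = TOTAL_BUGS * p_attacker * (1 - p_defender)
    print(f"defender spends {defender_tokens:4d} tokens -> "
          f"~{exploitable:.1f} exploitable bugs remain")
```

Under these made-up numbers the defender doesn't need an infinite budget, just enough spend to push the attacker's expected yield toward zero, which is the point above about the attacker eventually finding nothing new.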
So long as that OSS keeps accumulating features, there isn't quite the equilibrium you're imagining. If you can pin to a stable version, which continues to be audited, you're fine. But if the rest of the world moves on to newer versions of the software, you'll have to as well, unless you want to own the burden of hardening older versions.
It does take a lot of work not to break things. That's why "move fast and break things" are traditionally closely coupled: it's hard to avoid breaking things without slowing down.
But why would responsible AI users -- actual engineers using it to accelerate grunt work, not vibe coders -- not use the AI tooling to increase their capacity to do all of the work it takes to avoid breaking things while still moving fast, relatively speaking?
Testing a new incremental feature against the entire extant codebase, not just the bits of it that they had the bandwidth to tackle within the deadline, seems like exactly the sort of thing well-disciplined engineering teams would use AI to do.
For one, you architect your codebase into separate layers and logical chunks that are self-contained and can be reasoned about independently. That's not always possible, but you draw as many firm boundaries as you can. You don't ever want to be in the position where you have to test an entire codebase against your new change. That's a horrible nightmare scenario.
So you don't "test as much of the codebase as you have time for", you write tests for your code and the interface between it and other systems. Maybe integration or FE tests depending on what you have.
So testing against a whole codebase is rarely the problem, and if it is, you have bigger issues.
Also, LLMs don't make mistakes like humans do. They fuck up in weird unpredictable ways that mean you kinda have to treat them like a hostile adversary trying to sneak in subtle backdoors. It slows things down.
Also, actually writing code is usually the fast and easy part. It's all the other bits -- getting the requirements, building mockups, planning, review, standing up new infra etc etc etc. LLMs can't help with most of that.
> For one, you architect your codebase into separate layers and logical chunks that are self-contained and can be reasoned about independently. That's not always possible, but you draw as many firm boundaries as you can. You don't ever want to be in the position where you have to test an entire codebase against your new change. That's a horrible nightmare scenario.
Right, all of this goes without saying.
> So testing against a whole codebase is rarely the problem, and if it is, you have bigger issues.
Read "whole codebase" as a qualitative descriptor, not a quantitative one, where the codebase as deployed is the reference point for the overall business logic flow, including where different processes interact with or block each other.
The point is to use the AI tools to ensure that new features and functionality are implemented in a way that is consistent with the technical and business constraints that emerge from the entire tech stack, precisely so that adding something new in context A doesn't break functionality in context B.
> Also, actually writing code is usually the fast and easy part. It's all the other bits -- getting the requirements, building mockups, planning, review, standing up new infra etc etc etc. LLMs can't help with most of that.
Yes, that's true, but those are the perennial challenges in doing solution design before you even get to implementation. The "break things" problem happens when solutions for context-bound problems are being implemented in ways that lead out of their context -- narrowly-focused goals are often pursued in ways that create externalities elsewhere in the organization precisely because of the limited focus available to the people working on solving for those goals.
The point here is that there's now the possibility for the AI tools to make much broader contextual awareness available as part of the solution design and implementation phase of every narrowly-targeted goal. If the AI has a reasonably accurate model of how systems and solutions affect each other in the broader organization, it can offer predictions of how any proposed new feature might impact all of the other business functions, and do so in near real time, and ultimately head off 90% of "break things" impacts that you'd otherwise need multiple series of meetings, testing sessions, buy-ins and sign-offs to avoid otherwise. That's what would get you moving fast with much less collateral damage.
This seems similar to the lesson learned for cryptographic libraries where open source libraries vetted by experts become the most trusted.
Your average open source library isn’t going to get that scrutiny, though. It seems like it will result in consolidation around a few popular libraries in each category?
An important difference between SaaS offerings and open source libraries is that the latter have no liability. They can much more easily afford to exhibit vulnerabilities until those are fixed.
> > If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
But this has always been the reality of security: it's always been fundamentally an economic question about which party has stronger incentives and greater resources than the other. The increasing sophistication of AI is available to both parties equally, so I don't see how AI in itself fundamentally changes the equation.
I like that LLMs have basically switched to the weapons business model. Buy our LLM so that the bad guy we'll sell our LLM to doesn't destroy your code. As a bonus, we'll give you a little head start. And if you're a small company that can't afford our LLM, too bad.
I wonder if we could find a way to donate unused tokens or even local compute resources to open-source projects we support. Especially for security auditing where it could probably be somewhat more asynchronous and disconnected than the open-source developers' personal tool choices.
"unused tokens" are the force driving token cost down. If everyone used all of the tokens they thought they were paying for, prices would explode. People with subscriptions that don't get out everything they can are subsidizing the system.
There are ways to use LLM service providers that leave no tokens unused, by just billing per token. Unsurprisingly, this quickly becomes much more expensive than subscriptions.
With current GPU prices, I find it difficult to find hardware to run competent models. gemma4's 26B MoE model seems to offer the best performance per megabyte of RAM, but it's not good enough to use the way one would use cloud models.
The big, impressive models all scale well for multi-customer setups because of the efficiency batching provides, but the base cost to run models like that as even a small business is incredibly high. If you can't saturate your LLM hardware almost 24/7, the time to earn back your investment is high unless you choose inferior models that are worse at their job.
I think about this sometimes: does it really make sense, financially I mean? This is just my impression, and I'm glad to be corrected if someone has hard numbers and some experience going this route:
At the moment LLM vendors are in market-grab mode and take a loss on big subscription users. They are starting to try to move to profit, but they must move carefully so as not to let a competitor steal their users, so we will still have "cheap" tokens for a while.
Even if prices go up by a bit, they have the scale in their favor to optimize costs.
If commercial model providers go into "not competitive" territory with their prices compared to open models, wouldn't it always be cheaper to use an open models inference provider? They can take advantage of scale as well, and with no model moat, competition should keep prices honest.
And as a last resort, renting GPU time in the cloud seems like a safer bet than buying a GPU to me?
“Unused tokens” are a weird, fragile concept that I wouldn’t want to build upon. You can just donate money, you know. That’s what money’s for - it’s the universal exchange thingy.
> to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them
This is true only up to a certain point, unless the requirement/contract itself has a loophole which the attacker can exploit without limit. But I don't think this is the case.
Let's say someone found a loophole in sort() which can cause a denial of service. The cause would be the implementation itself, not the contract of sorting. People + AI will figure it out and fix it eventually.
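To illustrate the contract-vs-implementation split (my own sketch, not from the thread): the contract of sorting is just "ordered output, same elements". A checker for that contract deliberately says nothing about resource use, which is exactly where a DoS bug would hide.

```python
import random

def satisfies_sort_contract(sort_fn, trials=200, seed=42):
    """Check only the *contract* of sorting: the output is ordered and
    is a permutation of the input. This deliberately says nothing about
    running time or memory -- an implementation could pass this check
    and still be vulnerable to denial of service on adversarial input."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 30))]
        out = sort_fn(list(xs))
        if any(a > b for a, b in zip(out, out[1:])):
            return False  # output not ordered
        if sorted(xs) != sorted(out):
            return False  # output not the same multiset of elements
    return True
```

`satisfies_sort_contract(sorted)` passes, but so would a quicksort with a worst-case-quadratic pivot choice; fixing that is a change to the implementation, not to the contract.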
This may be true long term but not short term. It also assumes that white hats will be as motivated as black hats – not true.
For projects with NO WARRANTY, the risk is minimal, so yes there are upsides.
For a commercial project like cal.com, where a breach means massive liability, they don’t have the resources to risk breaches in the short term for potentially better software in the long term.
> It's been a common wisdom now for decades that open source is more secure.
This is not true.
The problem is rather that the managers of many companies don't allow their programmers to apply their knowledge about security; the programmers should rather churn out new features.
I expect we're about to find that it's a lot easier to convince a company to spend money running an AI security scan of their dependencies and sharing the results with the maintainers than it is to have them give those maintainers money directly.
(I just hope they can learn to verify the exploits are valid before sharing them!)
> SAN FRANCISCO – March 17, 2026 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced $12.5 million in total grants from Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI to strengthen the security of the open source software ecosystem.
It’s almost cute how insignificantly small that amount is considering the companies named. Great for The Linux Foundation of course, but it still feels like they are being cheap as heck.