See, this is just a weak argument. What I wrote was thousands of words of hypotheses backed up with data and citations. Dismissing it as "blathering" and then demanding I make a bet on your terms isn't an argument, nor is it a particularly compelling one: you haven't engaged with my work, my arguments, or my actual ideas.
You are, on some level, suggesting that money is a more compelling argument than an actual argument, because that is your only response.
Exactly my point. I took great pains to not say "OpenAI will 100% die without fail," because doing so would be declarative in a way that would wall off my argument, no matter how well I researched and presented it.
Instead, I wanted to show people the terms under which OpenAI survives, and how onerous said terms were. It's deeply concerning - and I do not think that's too bold a thing to say! - how much money they may be burning, and how much money they will take to survive.
Hi! I'm afraid you've made an assumption that isn't true.
LLMs do not "bring the cost of writing software close to $0" on a number of levels.
1. The code is not 100% reliable, meaning it requires human oversight, and human beings cost money.
2. LLMs themselves are not cheap, nor profitable. I am comfortable humoring the idea that someone could run their own models - something which is beginning to happen - to write code. I think that's really cool, but I am also not sure how good said code will be or how practical doing so will be.
Right now, Microsoft is effectively subsidizing the cost of Github Copilot, though they appear to have produced quite a lot of revenue from it.
However, it seems that Github was not profitable before (https://news.ycombinator.com/item?id=17224136) and I would argue isn't profitable now. It's hard to tell, because Microsoft blends their costs into other business lines.
> Hi! I'm afraid you've made an assumption that isn't true.
Citation?
Your concerns are certainly valid, but the LLMs are getting smaller, faster, and cheaper to run every day. Now, I also agree that you still need someone "programming" -- in the sense that they're telling a computer what to do, but they no longer need to "code" in the traditional sense (curly braces and semicolons).
We're actively seeing non-engineers build useful software for themselves, just with a $20/month subscription to ChatGPT/Claude.
Times are changing; you no longer need a six-figure engineer to build your one-off tool.
We’re saying the same thing, but from different angles.
LLMs won’t build Google from scratch; we’ll still need human software engineers.
LLMs will enable an entirely new class of “programmer” to exist. Writing software still requires a technical mindset, but you no longer need a 4-year degree (or bootcamp) to build and deploy a personal web app.
The cost of writing software is approaching $0, as you will no longer need to hire a developer for your bespoke requirement — how valuable that software is depends entirely on the end user.
Programmers in the 70s thought C was silly; how could you trust an abstraction over actual assembly instructions?
We’re seeing the same shift here, but accelerated to a much larger group of potential “programmer” recruits.
Again, we’re seeing this pop up almost every day across Reddit[0], Discord, and various forums.
Non-developers are already writing software entirely with ChatGPT.
I realize that these are commonly-held tropes, but where is the actual article that says this? There's the famous "the internet isn't a big deal" piece in Newsweek (and funnily enough that piece is extremely prescient in many other ways!), but I don't know that I've seen that kind of hype-busting from back then.
But also...the reason they might have is that email kind of sucked back then. Of course you wouldn't see the promise in something that was clunky and slow and nobody used.
This isn't comparable to LLMs, though, because even someone who found email clunky could see why you'd want to send an email versus sending a letter.
I think you are misunderstanding my point, and perhaps I should've worded it more precisely:
"Mass-market utility" here refers to the ability to sell at the scale needed to justify the costs. As it stands, LLMs do not have mass-market utility at the scale that they need to justify their costs. It is really that simple. If they did, these companies would be profitable, and they would be having a meaningful effect on productivity, which they are not.
Thanks for reading!
I downplay it because I fundamentally disagree on the pace of innovation and the exponential increase in capabilities per dollar happening over time. I do not see the rapid acceleration - or at least, they are yet to substantively and publicly show it.
I also think it's a leap of logic to suggest that the former CTO of Stripe joining is somehow the fix they need, or proof they're going to accelerate.
Also, I fundamentally disagree - Stripe was an obvious business. Explaining what Stripe did wasn't difficult. The established ways of taking money were extremely clunky - perhaps there was RELUCTANCE to change, which is a totally fair thing to bring up, but that doesn't mean it wasn't obvious if you thought about it. What's so obvious about GPT? What's the magic trick here?
Anyway, again, thanks for reading, I know you don't necessarily agree, but you've given me a fair read.
Claiming Stripe was obvious is ahistorical, unless you believe tens of thousands or even millions of entrepreneurs passed up $100B.
You have a small point that anyone who used authorize.net or similar wanted it to be better, and that much was obvious, but there are nearly infinite things people want to be better. I'd like breakfast, my commute, my car, my doctor, my vet, etc. to be better. That you could actually make a better thing was incredibly non-obvious, and that's why no one did.
1. I do not know of any article that said that Google was "DOA" or "done" as a result of the choice of a search engine as a business model. In fact, search engines were an already-established industry at the time. If I'm wrong, I'd love to read it, as I imagine it's a fascinating historical document - even if it was horribly wrong!
2. OpenAI's business model and Google Search's business models are totally different. Apples and oranges. The way that OpenAI monetizes, the technology it uses to both deliver a service AND monetize it, the technology stack, the scaling, even the tech they acquire to build it, just totally different.
Again, if you can find an article that had someone in the 90s or 2000s saying "Google is DOA! Search is stupid!" then I'd really really love to read it, genuinely.