I'm kind of sceptical about the altruistic motives here. Giving this to open source maintainers also solves the problem of identifying high-quality feedback/rewards for their RLVR models. With everybody using Claude Code, it might be difficult for them to find a robust way to tell good reward signal apart from mediocre or below-average feedback.
Lex as in Lex Fridman? I'm baffled that anyone would say that Lex Fridman is a better interviewer than Dwarkesh. Fridman is the one who continuously rambles some incoherent nonsense and completely lacks the intelligence and knowledge to ask reasonable questions.
> How do you see the solo bootstrapping landscape going forward? In what ways do you see agentic coding changing things?
It's hard to say. I think it could go either way.
My optimistic take is that it has an effect similar to cloud computing's on solo bootstrapping. If you tried to start a SaaS company in 2005, you'd have a hard time because, in addition to knowing software, you'd also have to know how to provision servers in a data center, so you didn't see many one-person software success stories from that era. But cloud computing radically lowered the barrier to entry, and lots of one-person SaaS businesses ended up making $1M+/yr.
So, the best case for me is that AI increases the power of solo bootstrappers even more: if you're great at software but terrible at website design or running ads, you no longer have to hire people to help you, and you can achieve more by yourself.
The pessimistic outcome is that it becomes less profitable to be a bootstrapped founder, because the reduced barrier to entry means you're competing with 10-100x as many people, and companies become more comfortable building in-house tools with AI rather than purchasing B2B SaaS products.[0]
I actually have a hard time imagining B2B SaaS dying because AI makes it easy to roll your own tools. I feel like even if you reduced dev cost to nearly zero, there's still the headache of maintaining an app. For example, for my last business, we were paying $200/mo for HelpScout to manage support emails. If one of the devs said they'd spun up a reimplementation of HelpScout over the weekend that we could run for $2/mo, I'd still say no, because the cost of managing it internally is at least $200/mo of people's focus.
> I actually have a hard time imagining B2B SaaS dying because AI makes it easy to roll your own tools
Agreed. I think software engineers are misunderstanding why the SaaSpocalypse is happening (or already has).
It feels like software engineers think the SaaSpocalypse is due to technical commoditization: "Oh no! Claude can bang out a fully functioning Slack/Monday.com over a weekend! There goes Slack/Monday.com."
The selloff is actually being driven by the "seat replacement" fear: SaaS charges "per seat", i.e. per human user, but if an agent can do the work of X (>1) humans, then the number of seats sold shrinks, reducing the revenue, profitability, and therefore the valuation of SaaS companies that are priced on seat-driven multiples.
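The seat-replacement arithmetic above can be sketched in a few lines. All numbers here are hypothetical and purely illustrative (customer count, seat counts, per-seat price, and the revenue multiple are my assumptions, not figures from the thread):

```python
# Hypothetical numbers to illustrate the per-seat revenue argument:
# if one agent replaces the work of several human users, billable
# seats shrink even while the customer's output stays constant.

def annual_revenue(customers: int, seats_per_customer: int, price_per_seat: float) -> float:
    """ARR for a per-seat SaaS: customers * seats * annual price per seat."""
    return customers * seats_per_customer * price_per_seat

# Before: 1,000 customers, 50 seats each, $600/seat/year -> $30M ARR.
before = annual_revenue(1_000, 50, 600.0)

# After: suppose an agent does the work of 5 humans, so each customer
# keeps only 10 seats (an assumed ratio, purely for illustration).
after = annual_revenue(1_000, 10, 600.0)

print(f"ARR before: ${before:,.0f}")   # $30,000,000
print(f"ARR after:  ${after:,.0f}")    # $6,000,000
print(f"At an assumed 10x revenue multiple, valuation falls by ${10 * (before - after):,.0f}")
```

Note that the customer's bill shrinks 5x even though the software delivers the same (or more) value, which is exactly why per-seat vendors are experimenting with usage- or outcome-based pricing instead.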
This has no impact on the valuation or stress of bootstrapped businesses like yours, where you don't have to answer to either VCs or shareholders. It's more likely that bootstrapped businesses will get a revival as people seek to work more closely with founders who are focused on building sustainable, long-term value rather than an unsustainable blitzscaling play.
In fact, if I'm not mistaken, it will reduce the edge VC-funded companies have over bootstrapped businesses like yours (e.g., CAC is set on a more level playing field when blitzscaling funds shrink or disappear).
The primary value accrues to the AI labs. You pay hundreds or thousands of dollars a month to help train their models. While you probably do increase your productivity by saving the time spent typing out code, the feedback you give the agent after it produces mediocre or poor code is extremely valuable to these companies, because they use it to train their reinforcement learning models. Right now you're happy to have such a great "assistant" that types out code for you, but at some point you'll realize that your architectural/design skills really weren't all that special in the first place. All the models lacked to be good at that was sufficient data containing the correct rewards. Thankfully, software engineers are some of the most naive people in the world, and they handed over that data while actually paying for it.
It just boggles my mind that anyone would use something like this. Why would one send their data to some unknown company that internally likely just delegates the work to one of the big AI labs?
[1] https://pine.town/