subhobroto's comments

> I actually have a hard time imagining B2B SaaS dying because AI makes it easy to roll your own tools

Agreed. I think software engineers are misunderstanding why the SaaSpocalypse is happening (or has already happened).

It feels like software engineers think the SaaSpocalypse is due to technical commoditization: "Oh no! Claude can bang out a fully functioning Slack/Monday.com over a weekend! There goes Slack/Monday.com"

The selloff is being driven by the "Seat Replacement" fear: SaaS charges "per seat" or "per human user" - but if an agent can do the work of X (>1) humans, then the number of seats sold shrinks, reducing the valuation and profitability of SaaS companies priced on seat multiples.

This has no impact on the valuation or stress of bootstrapped businesses like yours, where you don't have to answer to either VCs or shareholders. It's more likely bootstrapped businesses will see a revival as people seek to work more closely with founders focused on building sustainable, long-term value rather than an unsustainable blitzscaling play.

In fact, if I am not mistaken, it will reduce the edge VC-funded companies have over bootstrapped businesses like yours (e.g., CAC is set on a more level playing field when blitzscaling funds shrink or disappear).


I honestly think (you and) I must be missing something!

What you say is obvious to me and provably correct. I just can't argue with your statement no matter what and have been making comments in alignment with yours, just with way, way more text.

We are both being downvoted - I can't even see your post (it's so grayed out), and I've started getting the "You're posting too fast. Please slow down. Thanks." message myself, which indicates the post throttling that kicks in on heavy downvotes. It took me 5 retries, waiting ~40 minutes between attempts, to finally get this posted, but I waited because this comment was important for me to put out there. I'm not going to stick around to keep waiting, retrying and posting again - I have other things to do in my life - so I will just abandon commenting altogether.

"AI" is changing society fundamentally forever and education needs to change fundamentally with it. I am personally betting that humans in the future, outside extreme niches, are generalists and are augmented by specialist agents.

I have a hypothesis that the future software engineer will be a human generalist (the person) augmented by a large, diverse group of specialist agents (the tools). The human generalist will keep their specialist agents fine-tuned, trained and up to date so that they generate implementations precisely as the human generalist specifies them.

This won't be limited to software engineering but looking at how well this thought process is polling here on HN, I'll pause :).

Are you and I missing something big or is the bigger crowd in denial?


> This is a bit off topic, but why are used books on abebooks, thriftbooks, amazon so expensive compared to booksales, etc?

A mix of enforcement, laws and good old market capture.

20 years ago I could bring a suitcase full of brand new books - fiction or not - from India, no questions asked. These are functionally the same as those that cost 50x more in the U.S.; they are just black and white, paperback, and printed on cheaper recycled paper that will yellow in 20 years and become brittle in 30. I cannot bring them with me anymore. The books clearly say they are not for export, especially to the U.S.; the print uses some kind of ink that shows up clearly on X-rays, and TSA enforces it.

Similarly, used books in the U.S. are repurchased by bookstores to be sold at a profit.

From what I see students doing, they sell or exchange books over FB Marketplace, school lists, or in person.


Yeah, I used a lot of low-price editions in college that I would order from eBay etc. and have shipped from India. Wow, I didn't know about the ink X-ray stuff, that's pretty interesting.

What I was referring to, though, was generic American bestsellers, nothing black/gray market. Used books feel very expensive to buy online. My guess is market capture, though.


Do you mean that if I enter the US with an Indian version of a textbook that it will be confiscated by customs?

That's outrageous.


Unsure if it's for every book. I was bringing in Donald Knuth's The Art of Computer Programming Boxed Set. The books clearly say, across their front covers, that they are not for export, especially to the U.S. and E.U.; the print uses some kind of ink that shows up clearly on X-rays, and TSA enforces it.

> Nothing strange nor new: the average teacher is reactionary even at top universities, generally incapable of evolving

It feels to me like teaching has always been bandwidth-constrained, and providing 1:1 feedback to students has always been a bottleneck. I believe that AI agents are the true gateway to fixing that limitation, and education should be embracing AI agents to increase the bandwidth of 1:1 teacher-student interaction.

I worry that every time I talk to a teacher about how they're adapting to AI, it's almost as if they are trying to figure out how they can continue to use the same teaching techniques they saw their own teachers practice decades ago.

Printed books are expensive, and they should be. We already have paper equivalents that allow highlighting, rewriting, annotating and sharing notes - recyclable materials, superior to paper in every way, that can be reused by multiple students. These are the things we should be embracing instead of going back to single-use printed materials that are heavy to carry around, take up space in a room and will need to be disposed of soon.

If current technology is creating an issue for teachers, teachers need to pivot, not block the technology.

Society typically cares about work getting done and not much about how it got done. For some reason, teachers are so deep in the weeds of the "how" that they seem to forget: if the way to mend roads since 1926 has been to learn how to measure out, mix and lay asphalt patches by hand, then in 2026, when there are robots that do that perfectly every time, they should be teaching humans to complement those robots or do something else entirely.

> We continue to teach children (at least in the EU) to write by hand, to do calculations manually throughout their entire schooling, when in real life, aside from the occasional scrap note, all writing is done on computers and calculations are done by machine as well. And, of course, no one teaches these latter skills

Is your intuition that the EU will continue down its path of technical irrelevance? If so, what are the top 5 reasons this is happening?


1:1 teaching is the old tutor method, still the most efficient way to transfer knowledge today, and also the least scalable. LLMs, on the other hand, are a kind of implementation of the Library of Babel or Conrad Gessner's Bibliotheca Universalis (~1545): a way to "tear books apart" (here, more generally, any text written by humans) in order to extract only the shred of information we are looking for each time. The idea of seeing them as auto-teachers is therefore very interesting!

I imagine more of a school (not just university) where the frontal lecture has disappeared, replaced by lessons built like FLOSS movie projects: the teacher writes the plot, there are narrators, visual content as needed, updated and refined over time. This implies:

- Learners go at their own pace; the brightest will finish sooner and use the time to learn other things, instead of chafing at the bit during class; the less brilliant but still capable ones can succeed with more time, instead of wasting time following lectures they don't understand because they lack prerequisites they can't acquire in time, lesson by lesson.

- Unfortunately, also less plurality of information, but this is compensated by the fact that the lesson is not by the assigned teacher but by third parties (a separate FLOSS project, indeed), so it is the individual teacher, available 1:1 / 1:few in the time freed from frontal lectures, who provides plurality.

- Sociality among students remains, in a different form: one studies for oneself and tests what has been learned among peers and the teachers themselves, in lessons that are "presentations" by individual learners to a "class" of learners and teachers; interaction in this setting reveals gaps and consolidates, shares and inspires knowledge - the only resource that grows with use and is lost otherwise.

This obviously implies substantial digitalization that brings efficiency, documentary culture, the learning organization, and a measurement of learning/results far superior to the measurement of in-person conformity that we know well even at work, where the manager wants conformity, not talent, and praises conformity, not substantial innovation, creating many imitators (cf. https://fs.blog/experts-vs-imitators/).

Of course, this is a school where the teacher does research and substantial work instead of repeating the same old stuff every year, and many don't like that. After all, most people don't like to innovate. Improving what exists, yes, but venturing into unknown lands is something most oppose: the common people fear change, and the ruling class fears losing its acquired status.

In the past, ruling classes sent their many children to explore, and if it went wrong, there were others. Today there is practically no more substantial innovation. People want to deny it, but it has always happened in history: the more it is denied, the more it happens through the interested hands of a few against the interest of the many, and the result is a changing of the guard among those in charge, and consequent wars to create a new common people. We should have understood and solved this long ago, but it seems not...


> My take: learning to use AI is not hard. They can do that on their own. Learning programming is hard, and relying on AI will only make it harder

Depends on what your definition of "hard" is - I routinely come across engineers who are frustrated that "AI" hallucinates. Humans can detect hallucinations, and I have specific processes to detect and address them. I wouldn't call those processes easy - I would say they're as hard as learning how to do integration by summing.

> but you can't use AI in coding effectively if you don't know how to code

Depends on the LLM. I have a fine-tuned version of Qwen3-Coder where, if you ask it to show you how to compare two strings in C/C++, it will, but it will also suggest you look at a version that takes Unicode into account.

I have stumbled across very few software engineers who even know what Unicode codepoints are and why legacy ASCII string comparison fails.
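
To make that concrete, here is a minimal sketch of the failure mode (my illustration, not the model's output): two valid UTF-8 spellings of the same text that a byte-wise, ASCII-era comparison treats as different.

    #include <cstring>
    #include <iostream>

    int main() {
        // Two visually identical strings: "café" with é as one precomposed
        // codepoint (U+00E9) vs. "e" followed by a combining acute accent
        // (U+0301). Both are valid UTF-8 encodings of the same text.
        const char* precomposed = "caf\xC3\xA9";
        const char* combining   = "cafe\xCC\x81";

        // A byte-wise, ASCII-era comparison reports them as different.
        std::cout << std::strcmp(precomposed, combining) << '\n';  // non-zero

        // A correct comparison would normalize both sides first (e.g. to
        // NFC, with a Unicode library such as ICU) and then compare.
        return 0;
    }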

> but won't know how to write for loop on their own. Which means they'll be helpless to interpret AI's output or to jump in when the AI produces suboptimal results

That's a very large logical jump. If we went back 20 years, you might come across professors and practising engineers who were losing sleep because languages like C/C++ were abstracting the hardware so much that you could just write for loops and be helpless to understand how those loops were causing needless CPU wait cycles by thrashing the cache.
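
For what it's worth, the worry wasn't baseless - loop order alone can change cache behavior dramatically. A minimal sketch of the classic example (illustrative, not a benchmark):

    #include <iostream>

    const int N = 2048;
    static float a[N][N];  // zero-initialized, ~16 MB

    // C/C++ arrays are row-major, so this walks memory sequentially:
    // each cache line loaded (typically 64 bytes) serves many iterations.
    float sum_row_major() {
        float s = 0.0f;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    // Swapping the loops strides N * sizeof(float) bytes between accesses,
    // so nearly every access can miss the cache: same result, same
    // "for loops", very different CPU wait cycles.
    float sum_col_major() {
        float s = 0.0f;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main() {
        std::cout << sum_row_major() << ' ' << sum_col_major() << '\n';
    }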


> Depends on what your definition of "hard" is - I routinely come across engineers who are frustrated that "AI" hallucinates. Humans can detect hallucinations, and I have specific processes to detect and address them. I wouldn't call those processes easy - I would say they're as hard as learning how to do integration by summing.

My students don't seem to have a problem using AI: it's quite adequate to the task of completing their homework for them. I therefore don't feel a need to complete my buzzword bingo by promoting an "AI-first classroom." The concern is what they'll do when they find problems more challenging than their homework.

> I have stumbled across very few software engineers who even know what Unicode codepoints are and why legacy ASCII string comparison fails.

You are proving my point. If the programmer doesn't know what Unicode is, then the AI's helpful suggestion is likely to be ignored. You need to know enough to be able to make sense of the AI's output beyond a superficial level.

> That's a very large logical jump. If we went back 20 years, you might come across professors and practising engineers who were losing sleep because languages like C/C++ were abstracting the hardware so much that you could just write for loops and be helpless to understand how those loops were causing needless CPU wait cycles by thrashing the cache.

We still teach that stuff. Being an engineer requires understanding the whole machine. I'm not talking about mid-level marketroids who are excited that Claude can turn their Excel sheets into PowerPoints. I'm talking about actual engineers who take responsibility for their code. For every helpful suggestion that AI makes, it botches something else. When the AI gives up, where do you turn?


> AI is extremely dangerous for students and needs to be used intentionally

Can you expound on both points in more detail please, ideally with some examples?


If a student uses AI to simply code-gen without understanding the code (e.g. in my compilers class if they just generate the recursive-descent parser w/Claude, fixing all the tests) then they are robbing themselves of the opportunity to learn how to code.

In OP I showed an AGENTS.md file I give my students. I think this is using AI in a manner productive for intellectual development.


I asked my students in a take-home lab to write tests for a function that computes the Collatz sequence. Half of the class returned AI-generated tests that tested the algorithm with floating-point and negative numbers (for "correct" results, not for input validation). I am not doing anything take-home anymore.
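
For contrast, here is a hypothetical sketch of what sensible tests would look like (the function and names are mine, not the actual lab): the Collatz step is defined on positive integers only, so tests should pin down known values and reject out-of-domain input rather than assert "correct" results for negatives or floats.

    #include <cassert>
    #include <stdexcept>

    // One Collatz step, defined on positive integers only:
    // n -> n/2 if n is even, 3n+1 if n is odd.
    unsigned long long collatz_step(unsigned long long n) {
        if (n == 0) throw std::invalid_argument("n must be positive");
        return n % 2 == 0 ? n / 2 : 3 * n + 1;
    }

    int main() {
        // Meaningful tests: known values inside the domain...
        assert(collatz_step(6) == 3);
        assert(collatz_step(7) == 22);
        assert(collatz_step(1) == 4);

        // ...and input validation at the boundary, instead of asserting
        // "correct" results for -5 or 2.5 - values the function isn't
        // even defined on (the unsigned parameter already rules them out).
        bool rejected = false;
        try { collatz_step(0); } catch (const std::invalid_argument&) { rejected = true; }
        assert(rejected);
        return 0;
    }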

> then ran interviews with the students about their project work, asking them to explain how it works etc

Was there something fundamentally different between those who used "AI" a "lot" and those who didn't?

Did they mention the issue of hallucination and how they addressed it?


> At some level, this is a problem of unmotivated students and college mostly being just for signaling as opposed to real education.

I think this is mostly accurate. Schools have been able to say, "We will test your memory on 3 specific Shakespeares, samples from Houghton Mifflin Harcourt, etc." - the students who were able to perform on these, with some creative dance, violin, piano or cello thrown in, had very good chances at a scholarship from an elite college.

This has been working extremely well, except that now you have AI agents that can do the same at a fraction of the cost.

There will be a lot of arguments, handwringing and excuse-making as students go through the flywheel already in motion with the current approach.

However, my bet is that it's going to become apparent that this approach no longer works for a large population. It never really did, but there were inefficiencies in the market that kept this game going for a while. For one, college has become extremely expensive. Second, globalization has made it pretty hard for someone paying tuition in the U.S. to compete against someone getting a similar education in Asia for the same salary. Big companies have been able to enjoy this arbitrage for a long time.

> Maybe this institution is outdated. Surely there is a cheaper and more time-efficient way of ranking students for companies

Now that everyone has access to labor cheaper than the cheapest English-speaking country in the world, humanity will be forced to adapt and rethink what has seemed to work in the past.


> but if you don't put the work to understand things you'll always be behind people that know at least with a bird eye view what's happening.

Depends. You might go quite far without ever opening the hood of a car, even when you drive it every day and depend on it for your livelihood.

If you're the kind that likes to argue for a good laugh, you might say, "Well, I don't need to know how my car works as long as the engineer who designed it does or the mechanic who fixes it does" - and this is accurate, but it's also true that not everyone ended up being either the engineer or the mechanic. And it's not true that, if it turned out to be extremely valuable for you to actually learn how the car worked, you wouldn't put in the effort to do so and be very successful at it.

All this talk about "you should learn something deeply so you can bank on it when you need it" seems to be a bit of a hoarding disorder.

Given the right materials, support and direction, most smart and motivated people can learn how to get competent at something that they had no clue about in the past.

When it comes to smart and motivated people, the best drop out of education because they find it unproductive and pedantic.


Yes, you can. I know just enough about cars not to be scammed, but not how the whole engine works. I also don't think you should learn everything you can learn - there's no time for that; that's why I made the bird's-eye-view comment.

My argument is that when you have at least a basic knowledge of how things work (be it as a musician, a mechanical engineer or a scientist) you are in a much better place to know what you want/need.

That said, smart and motivated people thrive if they are given the conditions to thrive. And I believe that physical interfaces have way less friction than digital interfaces: turning a knob is way less work than clicking through a bunch of menus to set up a slider.

If I were to summarize what I think about AI it would be something like "Let it help you. Do not let it think for you"

My issue is not with people using AI as a tool, but with people delegating to AI anything that would demand any kind of effort.


> I find, as a parent, when I talk about it at the high school level I get very negative reactions from other parents. Specifically I want high schoolers to be skilled in the use of AI, and particular critical thinking skills around the tools, while simultaneously having skills assuming no AI. I don’t want the school to be blindly “anti AI” as I’m aware it will be a part of the economy our kids are brought into.

This is my exact experience as well and I find it frustrating.

If current technology is creating an issue for teachers, it's the teachers who need to pivot, not block the technology so they can continue with what they are comfortable with.

Society typically cares about work getting done and not much about how it got done. For some reason, teachers are so deep in the weeds of the "how" that they seem to forget: if the way to mend roads since 1926 has been to learn how to measure out, mix and lay asphalt patches by hand, then in 2026, when there are robots that do that perfectly every time, they should be teaching humans to complement those robots or do something else entirely.

It's possible that, in the past, learning how to use an abacus was a critical lesson, but once calculators were invented, do we continue with two semesters of abacus? Do we allow calculators into the abacus course? Should the abacus course be scrapped? Would it be a net positive for society to replace the abacus course with something else?

"AI" is changing society fundamentally forever and education needs to change fundamentally with it. I am personally betting that humans in the future, outside extreme niches, are generalists and are augmented by specialist agents.


I'm also for education for AI awareness. A big part of teaching kids about AI should be how unreliable it can be.

I had a discussion with a recruiter on Friday, and I said I guess the issue with AI vs. humans is this: if you give tasks to a human developer who is new to your company, the first few times you'll check their work carefully to make sure the quality is good. After a while you can trust they'll do a good job and be more relaxed. With AI, you can never be sure at any time. Of course a human can also misunderstand the task and hallucinate, but perhaps discussing the issue and the fix before they start coding can alleviate that. You can discuss with an AI as much as you want, but to me, not checking the output would be an insane move...

To return to the point: yeah, people will use AI anyway, so why not teach them about the risks. Also, LLMs feel like Concorde: they'll get you where you want to go very quickly, but at tremendous environmental cost (they're also very costly to the wallet, although the companies are now partially subsidizing your use in the hopes of getting you addicted)...


Only if you naively throw AI carelessly at it. It sounds like you haven't mastered the basics like fine-tuning, semantic vector routing, agentic skills/tooling generation… dozens of other solutions that robustly solve for your claim.

Gosh, I really should attend LinkedIn University of Buzzwords...

Yes, just buzzwords, totally no backing behind any of this. Your original comment makes so much more sense now.

1. Everything you learn about math is completely obsoleted by AI five years from now.

2. Everything you learn about working with chatbots is completely obsoleted by AI five years from now.

Both are possible, but 2 is pretty much guaranteed if we get 1, so learning to chat with Opus is pretty much always less useful than learning derivatives by hand, unless you're starting job applications in less than a few months.

