AI agent technology likely isn’t ready for the kind of high-stakes autonomous business work Microsoft is promising.
It's unbelievable to me that tech leaders lack the insight to recognize this.
So how to explain the current AI mania being widely promoted?
I think the best-fit explanation is simple con artistry. They know the product is fundamentally flawed and won't perform as promised. But the money to be made selling the fantasy is simply too good to ignore.
In other words --- pure greed. Over the longer term, this is a weakness, not a strength.
It's part of a larger economic con centered on the financial industry and the financialization of American industry. If you want this stuff to stop, you have to be hoping for (or even working toward) a correction that wipes out the incumbents who absolutely are working to maintain the masquerade.
It will hurt, and they'll scare us with the idea that it will hurt, but the secret is that we get to choose where it hurts - the same as how they've gotten to choose the winners and losers for the past two decades.
The author argues that this con has been caused by three relatively simple levers: Low dividend yields, legalization of stock buybacks, and executive compensation packages that generate lots of wealth under short pump-and-dump timelines.
If those are the causes, then simple regulatory changes to make stock buybacks illegal again, limit the kinds of executive compensation contracts that are valid, and incentivize higher dividend yields/penalize sales yields should return the market to the previous long-term-optimized behavior.
I doubt that you could convince the politicians and financiers who are currently pulling value out of a fragile and inefficient economy under the current system to make those changes, and if the changes were made I doubt they could last or be enforced given the massive incentives to revert to our broken system. I think you're right that it will take a huge disaster that the wealthy and powerful are unable to dodge and unable to blame on anything but their own actions, I just don't know what that event might look like.
Genuine question: I don't understand the economics of the stock market, and as such I participate very little (probably to my detriment). I sort of figure the original theory went like this.
"We have an idea to run a for profit endeavor but do not have money to set it up. If you buy from us a portion of our future profit we will have the immediate funds to set up the business and you will get a payout for the indefinite future."
And the stock market is for third party buying and selling of these "shares of profit"
Under these conditions, aren't all stocks a sort of millstone of perpetual debt for the company, and wouldn't it behoove them to remove that debt, that is, buy back the stock? Naively I assume this is a good thing.
If you don't understand a concept that's part of the stock market, reading the Investopedia article will go a long way. It's a nice site for basic overviews. https://www.investopedia.com/terms/b/buyback.asp
The short answer is that the trend of frequent stock buybacks as discussed here is not being used to "eliminate debt" (restore private ownership), it's being used to puff up the stock price as a non-taxable alternative to dividend payouts (simply increasing the stock price by reducing supply does not realize any gains, while paying stockholders "interest" directly is subject to income tax). This games the metric of "stock price", which is used as a proxy for all sorts of things including executive performance and compensation.
My view is that you don't want more layers. Chasing ever-increasing share prices favors shareholders (a limited number of generally rich people) over customers (likely to be average people). The incentives get out of whack.
I disagree. Those place the problem at the corporate level, when it's clearly extended through to being a monetary issue. The first thing I would like to see is the various Fed and banking liquidity and credit facilities go away. They don't facilitate stability, but a fiscal shell game that has allowed numerous zombie companies to live far past their solvency. This in turn encourages widespread fiscal recklessness.
We're headed for a crunch anyway. My observation is that a controlled demolition has been attempted several times over the past few years, but in every instance, someone has stepped up to cry about the disaster that would occur if incumbents weren't shored up. Of course, that just makes the next occurrence all the more dire.
Stupidity, greed, and straight-up evil intentions do a bunch of the work, but ultimately short-term thinking wins because it's an attractor state. The influence of the wealthy/powerful is always outsized, but attractors and common-knowledge also create a natural conspiracy that doesn't exactly have a center.
So with AI, the way the natural conspiracy works out is like this. Leaders at the top might suspect it's bullshit, but don't care; they always fail upwards anyway. Middle management at non-tech companies suspect their jobs are in trouble on some timeline, so they want to "lead a modernization drive" to bring AI to places they know don't need it, even if it's a doomed effort that basically defrauds the company owners. Junior engineers see a tough job market and want to devalue experience to compete... so they decide that only AI matters, and everything that came before is the old way. Owners and investors hate expensive senior engineers who don't have to bow and scrape, think they have too much power, and would love to put them in their place. Senior engineers who are employed, and maybe the most clear-eyed about the actual capabilities of the technology, see the writing on the wall: you have to make this work even if it's handed to you in a broken state, because literally everyone is gunning for you. Those who are unemployed are looking around like, well, this is apparently the game one must play. Investors will invest in any horrible doomed thing regardless of what it is, because they all think they are smarter than other investors and will get out just in time. Owners are typically too disconnected from whatever they own; they just want to exit/retire, and are already mostly in the position of listening to lieutenants.
At every level, for every stakeholder, once things have momentum they don't need to be a healthy/earnest/noble/rational endeavor, any more than the advertising or attention economy needed to be before it. Regardless of the ethics there, or the current/future state of any specific tech, it's a huge problem when being locally rational pulls us into a state that's globally irrational.
Yes, that "attractor state" you describe is what I meant by "if the changes were made I doubt they could last or be enforced given the massive incentives to revert to our broken system". The older I get and the more I learn, the less I'm willing to ascribe faults in our society to individual evils or believe in the existence of intentionally concealed conspiracies rather than just seeing systemic flaws and natural conspiracies.
There was a long-standing illusion that people care about long-term thinking. But given the opportunity, people seem to take the short-term road with high risks instead of chasing a long-term gain, as they themselves might not experience the gain.
The timeframe of expectations has just shifted, as everyone wants to experience everything. Just knowing the possibility of things that can happen already affects our desires. And since everyone has a limited time in life, we try to maximize our opportunities to experience as many things as possible.
It’s interesting to talk about this with the older generation (like my parents in their 70s), because there wasn’t such a rush back then. I took my mom out to some cities around the world, and she mentioned how she never even dreamed of the possibility of being in such places. On the other hand, when you grow up in a world of technically unlimited possibilities, you have more dreams.
Sorry for rambling, but in my opinion this somewhat affects the economics of the new generation as well. Who cares about long-term gains if there’s a chance nobody experiences the gain? Might as well risk it on the short-term one for a possibility of some reward.
> correction that wipes out the incumbents who absolutely are working to maintain the masquerade
You need to also have a robust alternative that grows quickly in the cleared space. In 2008 we got a correction that cleared the incumbents, but the ensuing decade of policy choices basically just allowed the thing to re-grow in a new form.
I thought we pretty explicitly bailed out most of the incumbents. A few were allowed to be sacrificed, but most of the risk wasn't realized, and instead rolled into new positions that diffused it across the economy. 2008's "correction" should have seen the end of most of our investment banks and auto manufacturers. Say what you want about them (and I have no particular love for either), but Tesla and Bitcoin are ghosts of the timeline where those two sectors had to rebuild themselves from scratch. There should have been more, and Goldman Sachs and GM et al. should not currently exist.
> A few were allowed to be sacrificed, but most of the risk wasn't realized, and instead rolled into new positions that diffused it across the economy.
Yeah that's a more accurate framing, basically just saying that in '08 we put out the fire and rehabbed the old growth rather than seeding the fresh ground.
> Tesla and Bitcoin are ghosts of the timeline where those two sectors had to rebuild themselves from scratch
I disagree, I think they're artifacts of the rehab environment (the ZIRP policy sphere). I think in a world where we fully ate the loss of '08 and started in a new direction you might get Tesla, but definitely not TSLA, and the version we got is really (Tesla+TSLA) IMO. Bitcoin to me is even less of a break with the pre-08 world; blockchain is cool tech but Bitcoin looks very much "Financial Derivatives, Online". I think an honest correction to '08 would have been far more of a focus on "hard tech and value finance", rather than inventing new financial instruments even further distanced from the value-generation chain.
> Goldman Sachs and GM et al. should not currently exist.
I would say yes and no on Tesla. Entities that survived because of the rehab environment actually expected it to fail, and shorted it heavily. TSLA as it currently exists is a result of the short squeeze on the stock that ensued when it became clear that the company was likely to become profitable. Its current, ridiculous valuation isn't a product of its projected earnings, but recoil from those large shorts blowing up.
In our hypothetical alternate timeline, I imagine that there would have still been capital eager to fill the hole left by GM, and possibly Ford. Perhaps Tesla would have thrived in that vacuum, alongside the likes of Fisker, Mullen, and others, who instead faced incumbent headwinds that sunk their ventures.
Bitcoin, likewise, was warped by the survival of incumbents. IIUC, those interests influenced governance in the early 2010s, resulting in a fork of the project's original intent from a transactional medium that would scale as its use grew, to a store of value, as controlled by them as traditional currencies. In our hypothetical, traditional banks collapsed, and even survivors lost all trust. The trustless nature of Bitcoin, or some other cryptocurrency, might have allowed it to supersede them. Deprived of both retail and institutional deposits, they simply would not have had the capital to warp the crypto space as they did in the actual 2010s.
I call them "ghosts" because, yes, whatever they might have been, they're clearly now just further extensions of that pre-2008 world, enabled by our post-2008 environment (including ZIRP).
"In 2008 we got a correction that cleared the incumbents,"
I thought in 2008 we told the incumbents "you are the most important component of our economy. We will allow everybody to go down the drain but you. That's because you caused the problem, so you are the only ones to guide us out of it"
Looking forward to the OpenAI (and Anthropic) IPOs. It’s funny to me that this info is being “leaked” - they are sussing out the demand. If they wait too long, they won’t be able to pull off the caper (at these valuations). And we will get to see who has staying power.
It’s obvious to me that all of OpenAI’s announcements about partnerships and spending are gearing up for this. But I do wonder how Altman retains the momentum through to next year. What’s the next big thing? A rocket company?
I tend to agree, but there's something to be said for a retribution focus taking time and energy away from problem-solving. When market turmoil hits, stand up facilities to guarantee food and healthcare access, institute a nationwide eviction moratorium, and then let what remains of the free market play out. Maybe we pursue justice by actually prosecuting corporate malfeasance this time. The opposite of 2008.
Problem with "it will hurt" is that it will actually hurt the middle class by completely wiping it out, and maybe slightly inconvenience the rich. More like annoy the rich, really.
I have thought about dropping all the big tech vendors: using LLMs only by running them locally or via Hugging Face, using a small third-party email provider, using just open source, and making Mastodon my only social media.
What would be the effect? Ironically, more productive?
I am pissed at Microsoft now because my family plan for Office365 is set to renew and they are tagging on a surcharge of $30 for AI services I don’t want. What assholes: that should be a voluntary add on.
EDIT: I tried to cancel my Office365 plan, and they let me switch to a non-AI plan for the old price. I don’t hate them anymore.
Yeah, it started with Wall Street itself, with all the depression and wars that it brought, and it hasn't stopped. At each cycle the curve has to go up, with exponential expectations of growth, until it explodes, taking the world economy down with it.
How do you guarantee your accelerationism produces the right results after the collapse? If the same systems of regulation and power are still in place then it would produce the same result afterwards
Don’t attribute to malice that which can equally be attributed to incompetence.
I think you’re over-estimating the capabilities of these tech leaders, especially when the whole industry is repeating the same thing. At that point, it takes a lot of guts to say “No, we’re not going to buy into the hype, we’re going to wait and see” because it’s simply a matter of corporate politics: if AI fails to deliver, it fails to deliver for everyone and the people that bought into the hype can blame the consultants / whatever.
If, however, AI ended up delivering and they missed the boat, they’re going to be held accountable.
It’s much less risky to just follow industry trends. It takes a lot of technical knowledge, guts, and confidence in your own judgement to push back against an industry-wide trend at that level.
I suspect that AI is in an "uncanny valley" where it is definitely good enough for some demos, but will fail pretty badly when deployed.
If it works 99% of the time, then a demo of 10 runs is 90% likely to succeed. Even if it fails, as long as it's not spectacular, you can just say "yeah, but it's getting better every day!", and "you'll still have the best 10% of your human workers in the loop".
When you go to deploy it, 99% is just not good enough. The actual users will be much more noisy than the demo executives and internal testers.
When you have a call center with 100 people taking 100 calls per day, replacing those 10,000 calls with 99% accurate AI means you have to clean up after 100 bad calls per day. Some percentage of those are going to be really terrible, like the AI did reputational damage or made expensive legally binding promises. Humans will make mistakes, but they aren't going to give away the farm or say that InsuranceCo believes it's cheaper if you die. And your 99% accurate-in-a-lab AI isn't 99% accurate in the field with someone with a heavy accent on a bad connection.
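The arithmetic above can be sketched quickly (the 99% per-call accuracy and the 100-agents-at-100-calls figures are the thread's hypotheticals, not measured values):

```python
# Demo vs. deployment math for a system assumed to succeed 99% of the time.
per_call_success = 0.99

# A clean 10-run demo requires all 10 independent runs to succeed.
demo_success = per_call_success ** 10
print(f"P(clean 10-run demo) = {demo_success:.3f}")  # ~0.904, the ~90% above

# Deployment: 100 agents x 100 calls/day = 10,000 calls/day.
calls_per_day = 100 * 100
bad_calls_per_day = calls_per_day * (1 - per_call_success)
print(f"Bad calls to clean up per day: {bad_calls_per_day:.0f}")  # 100
```

The same 1% failure rate that is nearly invisible in a short demo becomes a triple-digit daily cleanup bill at production volume, which is the whole point of the comment above.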
So I think that the parties all "want to believe", and to an untrained eye, AI seems "good enough" or especially "good enough for the first tier".
A big task my team did had measured accuracy in the mid 80% FWIW.
I think the line of thought in this thread is broadly correct. The most value I’ve seen in AI is problems where the cost of being wrong is low and it’s easy to verify the output.
I wonder if anyone is taking good measurements on how frequently an LLM is able to do things like route calls in a call center. My personal experience is not good and I would be surprised if they had 90% accuracy.
>I suspect that AI is in an "uncanny valley" where it is definitely good enough for some demos
Sort of a repost on my part, but the LLMs are all really good at marketing and other similar things that fool CEOs and executives. So they think it must be great at everything.
> if AI fails to deliver, it fails to deliver for everyone and the people that bought into the hype can blame the consultants / whatever.
Understatement of the year. At this point, if AI fails to deliver, the US economy is going to crash. That would not be the case if executives hadn't bought in so hard earlier on.
Yep, either way things are going to suck for ordinary people.
My country has had bad economy and high unemployment for years, even though rest of the world is doing mostly OK. I'm scared to think what will happen once AI bubble either bursts or eats most white collar jobs left here.
> Don’t attribute to malice that which can equally be attributed to incompetence.
This discourse needs to die. Incompetence + lack of empathy is malice. Even competence in the scenario they want to create is malice. It's time to stop sugar-coating it.
I keep fighting this stupid platitude [0]. By that logic, I fail to find anything malicious. Everything could be explained by incompetence, stupidity, etc.
> At that point, it takes a lot of guts to say “No, we’re not going to buy into the hype, we’re going to wait and see” because it’s simply a matter of corporate politics
Isn't that the whole mythos of these corporate leaders, though? That they are the ones with the vision and guts to go against the grain and stand out from the crowd?
I mean it's obviously bullshit, but you would think at least a couple of them actually would do something to distinguish themselves. They all want to be Steve Jobs but none of them have the guts to even try to be visionary. It is honestly pathetic
What you have is a lot of middle managers imposing change with random fresh ideas. The ones that succeed rise up the ranks. The ones that failed are forgotten, leading to survivorship bias.
Ultimately it's a distinction without a difference. Maliciously stupid or stupidly malicious invariably leads to the same place.
The discussion we should be having is how we can come together to remove people from power and minimize the influence they have on society.
We don't have the carbon budget to let billionaires who conspire from island fortresses in Hawaii do this kind of reckless stuff.
It's so dismaying to see these industries muster the capital and political resources to make these kinds of infrastructure projects a reality when they've done nothing comparable with respect to climate change.
It tells me that the issue around the climate has always been a lack of will, not ability.
Pure greed would have a strong incentive to understand what the market is actually demanding in order to maximize profits.
These attempts to try to steer demand despite clear indicators that it doesn't want to go in that direction aren't just driven by greed, they're driven by abject incompetence.
Also, if the current level of AI investment and valuations aren't justified by market demand (which I believe is the case), many of these people/companies are getting more money than they would without the unreasonable hype.
> Pure greed would have a strong incentive to understand what the market is actually demanding in order to maximize profits.
Not necessarily. Just look at this clip [1] from Margin Call, an excellent movie about the GFC. As Jeremy Irons says in that clip, the market (as usually understood in classical economics, with producers making things for clients/customers to purchase) is of no importance to today's market economy. At the hundreds-of-billions to multi-trillion-dollar level, almost all that matters is for your company "to play the music" as well as the other (necessarily very big) market participants, "nothing more, nothing less" (again, to quote Irons in that movie).
There's nothing in it about "making what people/customers want" and all that, which is regarded as accessory, if it is taken into consideration at all. As another poster mentions in this thread, this is all the direct result of the financialization of much of the Western economy; this is how things work at this level, given these (financialized) inputs.
You seem to be committing the error of believing that the problem here is just that they’re not selling what people want to buy, instead of identifying the clear intention to _create_ the market.
No, it's greed right now. They are fundamentally incapable of considering consequences beyond the immediate term.
If the kind of foresight and consideration you suggest were possible, companies wouldn't be on this self-cannibalizing path of exploiting customers right now for every red cent they can squeeze out of them. Long-term thinking would very clearly tell you that abusing your customers and burning all the goodwill the company built over a hundred years is idiotic beyond comparison. If you think about anything at all other than tomorrow's bottom line, you'd realize that the single best way to build a stable long-term business is to treat your customers with respect and build trust and loyalty.
But this behavior is completely absent in today's economy. Past and future don't matter. Getting more money right now is the only thing they're capable of seeing.
Given that they aren’t meeting their sales targets at all, I guess that’s a little bit encouraging about the discernment of their customers. I’m not sure how Microsoft has managed to escape market discipline for so long.
> I’m not sure how Microsoft has managed to escape market discipline for so long.
How would they? They are a monopoly, and partake in aggressive product bundling and price manipulation tactics. They juice their user numbers by enabling things in enterprise tenants by default.
If a product of theirs doesn't sell, they bundle it for "free" in the next tier up of license to drive adoption and upgrades. Case in point: the Intune Suite (which includes Entra ID P2, remote assistance, and endpoint privilege management) will now be included in E5, and the price of E5 is going up (by $10/user/month, less than the now-bundled features cost when bought separately). People didn't buy it otherwise, so now there's an incentive to move customers off E3 and onto E5.
Now their customers are in a place where Microsoft can check boxes, even if the products aren't good, so there's little incentive to switch.
Try to price out Google Workspace (plus an Office license anyway, because someone will still need Excel), identity, EDR, MDM for Windows, Mac, and mobile, Slack, VoIP, DLP, etc. You won't come close to Microsoft's bundled pricing by piecing together the whole M365 stack yourself.
So yeah, they escape market discipline because they are the only choice. Their customers are fully captive.
Their customers largely aren't their users. Their customers are the purchasing departments at Dell, Lenovo, and other OEMs. Their customers are the purchasing departments at large enterprises who want to buy Excel. Their customers are the advertisers. The products where the customers and the users are the same people (Excel, MS flight simulator, etc.) tend to be pretty nice. The products where the customers aren't the users inevitably turn to shit.
Not really. It's just that the point you have to push people to get them to start pushing back on something tends to be quite high. And it's very different for different people on different topics.
In the past this wasn't such a big deal because businesses weren't so large or so frequently run by myopic sociopaths. Ebenezer Scrooge was running some small local business, not a globe spanning empire entangling itself with government and then imposing itself on everybody and everything.
Scrooge is a fictional person, and Microsoft has been getting away with it for as long as I’ve been alive, with people hating it probably just as long.
So I think GP definitely has a point.
Are you a fan of reading? Good character fiction is based on reality as understood at the time, and it's a great way to get insights into how and what people think, particularly since it's precisely those believable portrayals that tend to 'stick' with society. For example, even most of George R. R. Martin's tales are directly inspired by real things, very much living up to the notion that reality is much stranger than fiction! Or similarly, read something like Dune and the 60s leak into it hard.
In modern times the tale of Scrooge probably wouldn't really resonate, nor 'stick', because we transitioned to a culture of worshiping wealth, consumerism, and materialism. See (relevant to this topic) how many people defend unethical actions by claiming that fiduciary duty precludes any value beyond greed. In the time of Scrooge this was not the case, and so it was a more viable cautionary tale that strongly resonated.
I think we would agree on a lot of things over a beer or beverage of your choice.
I also think that we as a (globalised?) culture have decided that money trumps everything.
But I don’t think that it’s the “fault” of single sociopaths or big companies; it’s some inherent flaw in human intelligence - we’re just not equipped to make smart long-term decisions or deal with a vast alien intelligence such as “the market”.
Scrooge's tale just resonates strongly - why else would it still be so popular that basically everyone knows it - but we just aren’t able to stop this machine, and it will grind on until our species is wiped from the planet.
Not that it matters too much for you and me - but a thousand years more of this? I can’t imagine what that would look like.
Edit: you ever read “The Jungle” by Upton Sinclair? I don’t think being greedy above all morals is a new thing, it’s always been there. We just “scaled up”
The thing is, the shift happened relatively recently. This [1] is an extremely interesting little report from UCLA where they poll the incoming class on a wide array of things. And there have been some massive shifts as recently as the 60s.
In 1967, 86% of students felt it was "essential" or "very important" to develop a meaningful philosophy of life, while only 42% felt the same about "being very well off financially." By 2015 those values had essentially flipped, with only 47% viewing a life philosophy as very important, and 82% viewing being financially well off as very important. It's rather unfortunate the survey only began in 1967, because I think we would see an even more extreme flip if we were able to go back just a decade or two more.
So we went from a society where the most important value for people was developing a life philosophy to one where the most important value is becoming wealthy. It's fairly easy to see how that leads directly to the situation we see nowadays. In a society where wealth is seen as literally the most important aspect of life, what else can we expect other than endless greed?
---
In the time of The Jungle, that book resulted in dramatic change that eventually led to the creation of the FDA. In modern times you have that same FDA approving everything from pink slime [2] to Alzheimer's drugs that they know don't work. [3] I think the fact that society has changed is an inescapable conclusion.
Do they think or do they know? I thought that Microsoft was over after the complete failure of Windows phone and Windows 8 and Office ribbon. And that was 20 years ago...
Subpar companies selling subpar products can be massively successful, because they know their customers.
People think that because AI cannot replace a senior dev, it's a worthless con.
Meanwhile, pretty much every single person in my life is using LLMs almost daily.
Guys, these things are not going away, and people will pay more money to use them in future.
Even my mom asks ChatGPT to make a baking applet from a picture she uploads of the recipe, which creates a simple checklist for adding ingredients (she forgets ingredients pretty often). She loves it.
This is where LLMs shine for regular people. She doesn't need it to create a 500k LOC turn-key baking tracking SaaS AWS back-end 5 million recipes on tap kitchen assistant app.
Yeah, she is, because when reality sets in, these models will probably have monthly cellphone/internet level costs. And training is the main money sink, whereas inference is cheap.
500,000,000 people paying $80/mo is roughly a 5-yr ROI on a $2T investment.
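For what it's worth, the arithmetic behind that claim (all figures are the hypotheticals stated above, not reported numbers):

```python
# Back-of-the-envelope: can $80/mo from 500M subscribers repay $2T?
subscribers = 500_000_000
price_per_month = 80          # USD, hypothetical
investment = 2e12             # $2T, hypothetical

annual_revenue = subscribers * price_per_month * 12  # $480B/yr
years_to_recoup = investment / annual_revenue
print(f"Annual revenue: ${annual_revenue / 1e9:.0f}B")
print(f"Years to recoup (revenue only, ignoring costs): {years_to_recoup:.1f}")
```

That works out to roughly 4.2 years on gross revenue alone; once serving costs are subtracted, "roughly 5 years" is the optimistic framing.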
I cannot believe on a tech forum I need to explain the "Get them hooked on the product, then jack up the price" business model that probably 40% of people here are kept employed with.
Right now they are (very successfully) getting everyone dependent on LLMs. They will pull the rug, and people will pay to get it back. And none of the labs care if 2% of people use local/Chinese models.
> And training is the main money sink, whereas inference is cheap.
False. Training happens once for a time period, but inference happens again and again every time users use the product. Inference is the main money sink.
"according to a report from Google, inference now accounts for nearly 60% of total energy use in their AI workloads. Meta revealed something even more striking: within their AI infrastructure, power is distributed in a 10:20:70 ratio among experimentation, training, and inference respectively, with inference taking the lion’s share."
Companies currently are being sold that they can replace employees with little agents that cost $20 to $200 a month.
But then they realize that the $200 lasts for about 3.5 hours on day 1 of the month, and the rest will be charged by the token. Which will then cost as much as or more than the employee did, but with a nice guaranteed quota of non-determinism and failure rate included.
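To illustrate how fast a flat quota can evaporate, here's a sketch with made-up numbers (the per-token rate and the hourly token volume are illustrative assumptions, not any vendor's published pricing):

```python
# Hypothetical: how long a $200/mo quota survives agent-scale usage.
monthly_quota_usd = 200
cost_per_million_tokens = 10       # assumed blended input/output rate, USD
tokens_per_agent_hour = 5_000_000  # assumed heavy agentic workload

burn_per_hour = tokens_per_agent_hour / 1e6 * cost_per_million_tokens  # $50/hr
hours_until_quota_gone = monthly_quota_usd / burn_per_hour
print(f"Quota exhausted after {hours_until_quota_gone:.1f} hours")  # 4.0
```

Under these assumptions the quota is gone in about 4 hours, in the same ballpark as the 3.5-hour figure above; everything after that is metered.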
I personally don't know a single person who would pay $80 for some LLM.
Most people I know pay nothing, or got a 1-year sub with a phone purchase or similar.
Also, everyone here conveniently forgets the huge upfront hardware and datacenter investment that MS has already made. That cost alone will never be recouped at current prices.
If you can't even run the thing close to profitable, then how will you ever actually profit?
But don't worry guys, your robotaxi will recoup your tesla purchase within a year while you sleep.
The problem is when $20 or $40 a month is expected to cover inference that costs $50 or $80 a month to provide. Electricity is not going to get cheaper.
Or better yet, you just need 100 people paying 400 million each to get the same amount!
> "Get them hooked on the product, then jack up the price"
That only works if the product is actually good. The average person isn't going to pay EIGHTY dollars a month to generate recipes or whatever; that's just delusional.
I think there are 2 things at play here. LLMs are, without a doubt, absolutely useful/helpful, but they have shortcomings and limitations (often worth the cost of using them). That said, businesses trying to add "AI" into their products have a much lower success rate than using LLMs directly.
I dislike almost every AI feature in software I use but love using LLMs.
This false dichotomy is still frustratingly all over the place. LLMs are useful for a variety of benign everyday use cases; that doesn't mean they can replace a human for anything. And if those benign use cases are all they're good at, then the entire AI space right now is maybe worth $2B/year, tops. Which is still a good amount of money! Except that's roughly the amount of money OpenAI spends every minute, and it's definitely not "the next invention of fire" like Sam Altman says.
Even these everyday use-cases are infinitely varied and can displace entire industries. E.g. ChatGPT helped me get $500 in airline delay compensation after multiple companies like AirHelp blew me off: https://news.ycombinator.com/item?id=45749803
This single niche industry as a whole is probably worth billions alone.
Now multiply that by the number of niches that exist in this world.
Then consider the entire universe of formal knowledge work, where large studies (from self-reported national surveys to empirical randomized controlled trials on real-world tasks) have already shown significant productivity boosts, in the range of 30%. Now consider those workers' salaries, and how much companies would be willing to pay to make their employees more productive.
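That willingness-to-pay argument is simple arithmetic. Everything below is hypothetical: the salary, the assumption that the ~30% boost cited above fully materializes, and the fraction of the gain an employer would hand to a tool vendor:

```python
# Rough ceiling on what an employer might pay for an LLM tool,
# assuming the ~30% productivity boost cited above actually holds.
# All inputs are made-up illustrative values.
annual_salary = 120_000    # fully loaded knowledge-worker cost, $
productivity_boost = 0.30  # the figure from the studies cited above
capture_fraction = 0.10    # vendor captures only a slice of the gain

value_created = annual_salary * productivity_boost       # $/year
willing_to_pay = value_created * capture_fraction / 12   # $/month
print(f"plausible price ceiling: ${willing_to_pay:.0f}/month")
```

Even capturing a tenth of the gain lands well above today's $20 subscriptions, which is the optimistic case; the pessimistic case is that the boost doesn't survive contact with real workflows.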
Sure, as a search engine replacement it's totally fine and works reasonably well, but that's also because Google as a search engine has regressed dramatically, aggressively pushing products at the top of the search results instead of answering questions.
But "a slightly better search engine" sounds much less interesting to investors than "will completely transform human civilization" ;)
That was the internet; search engines were just the logical consequence, making the information stored on the internet more accessible. And today's AI is also more or less 'just' a lossy compression of the information accumulated on the internet over the last 50 years. Without the internet, no AI.
It's exactly the same situation as Tesla "self driving". It's sold and marketed in no uncertain terms, VERY EXPLICITLY, as AI that will replace senior devs.
As you admit, it can't do that. And everyone involved knows it.
Are your mother's cooking recipes gonna cover the billions and even trillions being spent here? I somehow doubt that, and it's funny to me that the killer use case the hypesters trot out is stupid inane shit like this (no offense to your mom, but a recipe generator isn't something we should be speedrunning global economic collapse for)
> So how to explain the current AI mania being widely promoted?
Probably individual actors have different motivations, but let's spitball for a second:
- LLMs are genuinely a revolution in natural language processing. We can do things now in that space that were unthinkable single-digit years ago. This opens new opportunity spaces to colonize, and some might turn out quite profitable. Ergo, land rush.
- Even if the new spaces are not that much of a value leap intrinsically, some may still end up obsoleting earlier-generation products pretty much overnight, and no one wants to be the next Nokia. Ergo, defensive land rush.
- There's a non-zero chance that someone somewhere will actually manage to build the tech up into something close enough to AGI to serve, which in essence means deprecating the labor class. The benefits (to that specific someone, anyway...) would be staggering enough to make that a goal worth pursuing even if the odds of reaching it are unclear and arguably quite low.
- The increasingly leveraged debt that's funding the land rush's capex needs to be paid off somehow, and I'll venture everyone knows that the winners will possibly be able to, but not everyone will be a winner. In that scenario, you really don't want to be a non-winner. It's kind of like that joke where you don't need to outrun the lions, you only need to outrun the other runners, except in this case, the harder everyone runs, the bigger the lions become. (Which is a funny thought now, sure, but the feasting, when it comes, will be a bloodbath.)
- A few, I'll daresay, have perhaps been huffing each other's farts too deep and too long and genuinely believe the words of ebullient enthusiasm coming out of their own mouths. That, and/or they think everyone's job except theirs is simple actually, and therefore just this close to being replaceable (which is a distinct flavor of fart, although coming from largely the same sources).
So basically the mania is for the most part a natural consequence of what's going on in the overlap of the tech itself and the incentive structure within which it exists, although this might be a good point to remember that cancer and earthquakes too are natural. Either way, take care of yourselves and each other, y'all, because the ride is only going to get bouncier for a while.
>So how to explain the current AI mania being widely promoted?
CEOs have been sold on the ludicrous idea that "AI" will replace 60-80% of their total employee headcount over the next 2-3 years. This is also priced into current equity valuations.
I think on some level it is being done on the premise that further advancement requires an enormous capital investment and if they can find a way to fund that with today’s sales it will give the opportunity for the tech to get there (quite a gamble).
It's not just AI mania, it's been this way for over a decade.
When I first started consulting, organizations were afraid enough of lack of ROI in tech implementations that projects needed an economic justification in order to be approved.
Starting with cloud, leadership seemed to become rare, and everything was "us too!".
After cloud it was data/data visualization, then it was over-hiring during Covid, then it was RTO, and now it's AI.
I wonder if we will ever return to rationalization? The bellwether might be Tesla stock price (at a rational valuation).
If rationalization comes back, everyone will talk like in Michael Moore’s documentary about GM and Detroit. A manager’s salary after half a career will be around $120k, like in an average bank, and that would be succeeding. I don’t think we even imagine how much of a tsunami we’ve been surfing since 2000.
The cost of the boat sinking is also very high, and that's looking like the more likely scenario. Watching your competitors sink huge amounts of capital into a probably sinking boat is a valid strategy. The growth path they were already on was fine, no?
_Number would not go up sufficiently steeply_, would be the major concern, not collapse. Microsoft might end up valued as (whisper it) a normal mature stable company. That would be something like a quarter to a half of what it's currently valued. For someone paid mostly in options, this is clearly a problem (and people at the top in these companies mostly _are_ compensated with options, not RSUs; if the stock price halves, they get _nothing_).
It's not "pure greed." It's keeping up with the Joneses. It's fear.
There are three types of humans: mimics, amplifiers, originators. ~99% of the population are basic mimics, and they're always terrified - to one degree or another - of being out of step with the herd. The hyper-mimicry behavior can be seen everywhere and at all times, from classrooms to TikTok & Reddit to shopping behaviors. Most corporate leadership are highly effective mimics; very few are originators. They desperately herd-follow ('nobody ever got fired for buying IBM').
This is the dotcom equivalent of every business needing to be e- and @-ified (the advertising at the time was aggressively targeted at exactly that). 1998-2000: you must be e-ready. Your hotdog stand must have its own web site.
At this point, the people in charge have signed off on so much AI spending that they need it to succeed, otherwise they are the ones responsible for massive losses.
I have a feeling that Microsoft is setting themselves up for a serious antitrust lawsuit if they do what they are intending. They should really be careful about introducing products into the OS that take business away from all the other AI shops. I fear this would also cripple innovation if allowed, since Microsoft has drastically fatter wallets than most of its competition.
Corruption is indeed going strong in the current corporate-controlled US group of lame actors posing as government. At the least, Trump is now regularly falling asleep - that's the best evidence that you can use any surrogate puppet and the underlying policies will still continue.
If I mention a president who was more of a general secretary of the party, taking notes of decisions taken for him by lobbies from the largest corporations, falling asleep and having incoherent speech to the point that he seems to be way past the point of stroke, I don’t think anyone will guess Trump.
Trump has ushered in a truly lawless phase of American politics. I mean, it was kind of bad before, but at least there was a pretense of rule of law. A trillion dollar company can easily just buy its way out of any enforcement of such antitrust action.
I was just in a thread yesterday with someone who genuinely believed that we're only seeing the beginnings of what the current breed of AI will get us, and that it's going to be as transformative as the introduction of the internet was.
Everything about the conversation felt like talking to a true believer, and there's plenty out there.
It's the hopes and dreams of the Next Big Thing after blockchain and web3 fell apart and everyone is desperate to jump on the bandwagon because ZIRP is gone and everyone who is risk averse will only bet on what everyone else is betting on.
Thus, the cycle feeds itself until the bubble pops.
1) We have barely scratched the surface of what is possible to do with existing AI technology.
2) Almost all of the money we are spending on AI now is ineffectual and wasted.
---
If you go back to the late 1990s, that is the state that most companies were at with _computers_. Huge, wasteful projects that didn't improve productivity at all. It took 10 years of false starts sometimes to really get traction.
It's interesting to think Microsoft was around back then too; it took approximately 14 years to regain the approximately 58% of its valuation that it lost.
I don't see how people don't see it. LLMs are a revolutionary technology and, for the first time since the iPhone, are changing how we interact with computers. This isn't blockchains. This is something we're going to use until something better replaces it.
I agree to some extent, but we’re also in a bubble. It seems completely obvious that huge revenue numbers aren’t around the corner, not enough to justify the spend.
> "someone who genuinely believed that we're only seeing the beginnings of what the current breed of AI will get us, and that it's going to be as transformative as the introduction of the internet was."
I think that. It's new technology and it always takes some years before all the implications and applications of new technology are fully worked out. I also think that we're in a bubble that will hose a lot of people when it pops.
AI research has always been a series of occasional great leaps between slogs of iterative improvements, from Turing and Rosenblatt to AlexNet and GPT-3. The LLM era will result in a few things becoming invisible architecture* we stop appreciating and then the next big leap starts the hype cycle anew.
*Think toll booths (“exact change only!”) replaced by automated license plate readers in just the span of a decade. Hardly noticeable now.
I mean, see Windows Vista. It was eventually patched up to the point where it was semi-usable (and then quietly killed off), but on introduction it was a complete mess. But... something had to be shipped, and this was something, so it was shipped.
(Vista wasn't the only one; Windows ME never even made it to semi-usable, and no-one even remembers that Windows 8 _existed_.)
Microsoft has _never_, as far as I know, been a company to be particularly concerned about product quality. The copilot stuff may be unusually bad, but it's not that aberrant for MS.
US technocapitalism is built on the premise of technological innovation driving exponential growth. This is why they are fixated on whatever provides an outlook for that. The risk that it might not work out is downplayed, because (a) they don’t want to hazard not being at the forefront in the event that it does work out, and (b) if it doesn’t work out, nobody will really hold them accountable for it, not the least because everybody does it.
With the mobile and cloud revolutions having run out of steam, AI is what promises the most growth by far, even if it is a dubious promise.
It’s a gamble, a bet on “the next big thing”. Because they would never be satisfied with there not being another “big thing”, or not being prominently part of it.
It's not "fundamentally flawed". It is brilliant at what it does. What is flawed is how people are applying it to solve specific problems. It isn't a "do anything" button that you can just push. Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
I thought this for a while, but I've also been thinking about all the stupid, false stuff that actual humans believe. I'm not sure AI won't get to a point where even if it's not perfect it's no worse than people are about selectively observing policies, having wrong beliefs about things, or just making something up when they don't know.
> Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
Ok, but that isn't useful to me. If I have to hold the bot's hand to get stuff done, I'll just do it myself, which will be both faster and higher quality.
That’s not my experience at all, I’m getting it done much faster and the quality is on par. It’s hard to measure, but as a small business owner it’s clear to me that I now require fewer new developers.
You’re correct, you need to learn how to use it. But for some reason HN has an extremely strong anti-AI sentiment, unless it’s about fundamental research.
At this point, I consider these AI tools to be an invaluable asset to my work in the same way that search engines are. It’s integrated into my work. But it takes practice on how to use it correctly.
I think what it comes down to is that the advocates making false claims are relatively uncommon on HN. So, for example, I don't know what advocates you're talking about here. I know people exist who say they can vibe-code quality applications with 100k LoC, or that guy at Anthropic who claims that software engineering will be a dead profession in the first half of '26, and I know that these people tend to be the loudest on other platforms. I also know sober-minded people exist who say that LLMs save them a few hours here and there per week trawling documentation, writing a 200 line SQL script to seed data into a dev db, or finding some off-by-one error in a haystack. If my main or only exposure to AI discourse was HN, I would really only be familiar with the latter group and I would interpret your comment as very biased against AI.
Alternatively, you are referring to the latter group and, uh, sorry.
The whole point I tried to make when I said "you need to learn how to use it" is that it's not vibe coding. It has nothing to do with vibes. You need to be specific and methodical to get good results, and use it for appropriate problems.
I think the AI companies have over-promised in terms of “vibe” coding, as you need to be very specific, not at all based on “vibes”.
I’m one of those advocates for AI, but on HN it consistently gets downvoted no matter how I try to explain things. There’s a super strong anti-AI sentiment here.
My suspicion is because they (HN) are very concerned this technology is pushing hard into their domain expertise and feel threatened (and, rightfully so).
While it will suck when that happens (and inevitably it will), that time is not now. I'm not one to say LLMs are useless, but they aren't all they're being marketed to be.
There is no scenario where AI is a net benefit. There are three possibilities:
1. AI does things we can already do but cheaper and worse.
This is the current state of affairs. Things are mostly the same except for the flood of slop driving out quality. My life is moderately worse.
2. Total victory of capital over labor.
This is what the proponents are aiming for. It's disastrous for the >99% of the population who will become economically useless. I can't imagine any kind of universal basic income when the masses can instead be conveniently disposed of with automated killer drones or whatever else the victors come up with.
3. Extinction of all biological life.
This is what happens if the proponents succeed better than they anticipated. If recursively self-improving ASI pans out then nobody stands a chance. There are very few goals an ASI can have that aren't better accomplished with everybody dead.
What is the motivation for killing off the population in scenario 2? That's a post-scarcity world where the elites can have everything they want, so what more are they getting out of mass murder? A guilty conscience, potentially for some multiple of human lifespans? Considerably less status and fame?
Even if they want to do it for no reason, they'll still be happier if their friends and family are alive and happy, which recurses about 6 times before everybody on the planet is alive and happy.
It's not a post-scarcity world. There's no obvious upper bound on resources AGI could use, and there's no obvious stopping point where you can call it smart enough. So long as there are other competing elites, the incentive is to keep improving it. All the useless people will be using resources that could be used to make more semiconductors and power plants.