Try getting a Google Vertex API key working locally. It's even more complicated. It took me literally a full day to get the whole toolchain working (with some breaks out of frustration).
I only went through it because I once got 300 USD of free credit to spend on the Google Workspace account that I/my business own.
OpenAI API usage is so much easier.
Btw Google: fix the Google Console API usage dashboard... why is there a delay of 2+ days? Why can't I see (and block!) the current day's usage?
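The closest workaround I found is a budget alert on the billing account. A sketch, with `BILLING_ACCOUNT_ID` and the amount as placeholders; note that budgets only send alerts, they do not hard-block spending, and the alerts themselves can also lag:

```shell
# Create a monthly budget on the billing account so an alert email fires
# when spend crosses the given thresholds (50%, 90%, 100% of the budget).
# Requires the gcloud CLI with billing permissions on the account.
gcloud billing budgets create \
  --billing-account=BILLING_ACCOUNT_ID \
  --display-name="vertex-monthly-cap" \
  --budget-amount=50.00USD \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9 \
  --threshold-rule=percent=1.0
```

Actually cutting off usage when the cap is hit still needs extra plumbing (e.g. a Pub/Sub notification that triggers disabling billing on the project), which Google doesn't offer out of the box.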
Off-topic, but it hurts my eyes: I dislike their font choice and the "cool look" of their graphics.
The one good surprise: everything, graphics included, is fixed by clicking the "Speedreader" button in Brave. So that "cool look" is done purely in CSS.
A little unfair that this is downvoted. No search is like a dealbreaker for me. I'm happy with iTerm and for 99% of my use cases I don't need a "very fast" terminal. Thanks for pointing this out.
Seems I will wait a little longer until search lands in the regular build (and not just the nightly ones).
Aider is in a sad state. The maintainer hasn't "maintained" it for quite some time now (look at the open PRs and issues). It's definitely not state of the art anymore, but it was one of the first and best tools of its kind. Some members of the Discord community created a fork, Aider CE: https://github.com/dwash96/aider-ce The fork looks and works promising, but (sadly) there is so much more development happening in the other AI CLI tools nowadays.
People are always so fidgety about this stuff, for super understandable reasons, to be clear. People not much smarter than anyone else are trying to reason about numbers that are hard to reason about.
But unless you have the actual numbers, I always find it a bit strange to assume that all the people involved, who deal with large amounts of money all the time, have lost all ability to reason about this. Because right now that would mean, at minimum: all the important people at FAANG, all the people at OpenAI/Anthropic, and all the investors.
Of course, there is a lot of uncertainty, which, again, is nothing new for these people. It's just a weird thing to assume.
The point is not whether they are right, but how low the bar is for what counts as a palatable opinion from bystanders on a topic that other people have devoted a lot of thought and money to.
I just don't think "I don't know anyone who pays for it" or "You know, companies have also failed before" bring enough to the table to be interesting talking points.
I think it's a bit fallacious to imply that the only way we could be in an AI investment bubble is if people are reasoning incorrectly about the thing. Or at least, it's a bit reductive. There are risks associated with AI investment. The important people at FAANG/AI companies are the ones who stand to gain from investments in AI. Therefore it is their job to downplay and minimize the apparent risks in order to maximize potential investment.
Of course at a basic level, if AI is indeed a "bubble", then the investors did not reason correctly. But this situation is more like poker than chess, and you cannot expect that decisions that appear rational are in fact completely accurate.
> All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.
It's like asking big pharma whether medicine should be less regulated: "all the experts agree", well yeah, their paycheck depends on it. Same reason no one at Meta tells Zuck that his metaverse is dogshit and no one wants it; they still spent billions on it.
You can't assume everyone is that dumb, but you certainly can assume that the yes men won't say anything other than "yes".
Again, this is not an argument. I am asking: why do we assume that we know better, and that people with far more knowledge and insight would all be wrong?
This is not a rhetorical question, and I am not looking for a rhetorical answer. What is every important decision maker at all these companies missing?
The point is not that they could not all be wrong; they absolutely could. The point is: make a good argument. Being a general doomsayer when things get very risky might absolutely turn out to make you right, but it's not an interesting argument, or any argument at all.
I think you have a point and I'm not sure I entirely disagree with you, so take this as lighthearted banter, but:
Coming from the opposite angle, what makes you think these folks have a habit of being right?
VCs are notoriously making lots of parallel bets hoping one pays off.
Companies fail all the time, either completely (eg Yahoo! getting bought for peanuts down from their peak valuation), or at initiatives small and large (Google+, arguably Meta and the metaverse). Industry trends sometimes flop in the short term (3D TVs or just about all crypto).
C-levels, boards, and VCs being wrong is hardly unusual.
I'd say failure is more of a norm than success, so what should convince us it's different this time with the AI frenzy? They wouldn't be investing this much if they were wrong?
The universe is not configured in such a way that trillion dollar companies come into existence without a lot of things going well over long periods of time, so if we accept money as the standard for being right, they are necessarily right, a lot.
Everything ends and companies are no exception. But thinking about the biggest threats is what people in managerial positions in companies do all day, every day. Let's also give some credit to meritocracy and assume that they got into those positions because they are not super bad at their jobs, on average.
So unless you are very specific about the shape of the threat and provide ideas and numbers beyond what is obvious (because those will have been considered), I think it's unlikely, and therefore unreasonable, to assume that a bystander's evaluation of the situation trumps the judgement of the people making these decisions for a living, with all their additional resources and information at any given point.
Here's another way to look at this: imagine a curious bystander were to judge decisions that you make at your job, while having only partial access to the information you use to do that job, which you do every day, for years. Will this person at some point be right, if we repeat this process often enough? Absolutely. But is it likely on any single instance? I think not.
> Why do we assume that we know better and people with far more knowledge and insight would all be wrong?
Because of historical precedent. Bitcoin was the future until it wasn't. NFTs and blockchain were the future until they weren't. The Metaverse was the future until it wasn't. Theranos was the future until it wasn't. I don't think LLMs are quite on the same level as those scams, but they smell pretty similar: they're being pushed primarily by sales- and con-men eager to get in on the scam before it collapses. The amount being spent on LLMs right now is way out of line with the usefulness we are getting out of them. Once the bubble pops and the tools have a profitability requirement introduced, I think they'll just be quietly integrated into a few places that make sense and otherwise abandoned. This isn't the world-changing tech it's being made out to be.
You don't have an argument either btw, we're just discussing our points of view.
> Why do we assume that we know better and people with far more knowledge and insight would all be wrong?
Because money and power corrupt the mind, coupled with obvious conflicts of interest. Remember the hype around AR and VR around 2015? Nobody gives a shit about it anymore. They wrote articles like "Augmented And Virtual Reality To Hit $150 Billion, Disrupting Mobile By 2020" [0]; well, if you look at the numbers today, you'll see it's closer to $15B than $150B. Sometimes I feel like I live in a parallel universe... these people have been lying and overpromising for 10, 15, or 20+ years, and people still swallow it because it sounds cool and futuristic.
I'm not saying I know better. I'm just saying you won't find a single independent researcher who will tell you there is a path from LLMs to AGI, and certainly not one who will tell you the current numbers a) make sense and b) are sustainable.
That loss includes the costs to train the future models.
Like Dario/Anthropic said, every model is highly profitable on its own, but the company keeps losing money because they always train the next model (which will be highly profitable on its own).
But even if you remove R&D costs, they’re still billions of dollars short of profitability. That’s not a small hurdle to overcome. And OpenAI has to continue to develop new models to remain relevant.
OpenAI "spent" more on sales/marketing and equity compensation than that:
"Other significant costs included $2 billion spent on sales and marketing, nearly doubling what OpenAI spent on sales and marketing in all of 2024. Though not a cash expense, OpenAI also spent nearly $2.5 billion on stock-based equity compensation in the first six months of 2025"
Of course they will, once they start falling behind not having access to it.
People said the same things about computers (they are just for nerds, I have no use for spreadsheets) and smartphones (I don't need apps/big screen, I just want to make/receive calls).
I use it professionally and I rotate 5 free accounts across all platforms. Money doesn't have any value anymore; people will spend $100 a month on LLMs and another $100 on streaming services. That's about half of my household's monthly food budget.
I'm sure providers will find ways of incorporating the fees into e.g. ISP or mobile network fees so that users end up paying in a less obvious, less direct way.
The cost of serving an "average" user would only fall over time.
Most users rarely make the kind of query that would benefit a lot from the capabilities of GPT-6.1e Pro Thinking With Advanced Reasoning, Extended Context And Black Magic Cross Context Adaptive Learning Voodoo That We Didn't Want To Release To Public Yet But If We Didn't Then Anthropic Would Surely Do It First.
And the users that have this kind of demanding workloads? They'd be much more willing to pay up for the bleeding edge performance.
AI companies don't have a plausible path to profitability because they are trying to create a market while the business model is not scalable, unlike other services that have done this in the past (DoorDash, Uber, Netflix, etc.).