Hacker News | journal's comments

When I'm in the mood to code, I'd pay per API request to guarantee I get a response right away, with no subscription usage limit holding me back from making progress. Agents don't work. You have to compose your own context, which means you need to send the raw request yourself, not have an assistant figure it out. Agents fire off multiple requests, but every request should be the end of its own conversation. People have no idea how to use these things. That's why there's such a gamut of disagreement about how useful this is: less than 1% of people can benefit from using LLMs, and it's only programmers. So they'll keep raising prices until only a few people are willing to pay. This is more useful than SolidWorks, and SolidWorks is $4,000, so this will eventually reach and exceed $10,000/year. Only programmers will be able to afford it. Everyone who is now using it for search and entertainment will be out of luck.
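A minimal sketch of the stateless, compose-your-own-context pattern described above, assuming a hypothetical chat-completions-style request body (the model name, roles, and fields are illustrative, not any vendor's actual API):

```python
# Build a fully self-composed request body: the caller supplies the entire
# context, so each call stands alone and the conversation ends with it.
def build_stateless_request(context_snippets, question, model="example-model"):
    # Concatenate hand-picked context into one prompt; nothing is carried
    # over from any previous request, so there is no hidden server state.
    prompt = "\n\n".join(context_snippets + [f"Question: {question}"])
    return {
        "model": model,  # hypothetical model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }

payload = build_stateless_request(
    ["Relevant file: utils.py defines parse_args()."],
    "Why does parse_args() ignore --verbose?",
)
print(payload["messages"][0]["content"])
```

The point of the sketch is that every request carries its whole context, so you pay per request and never depend on what an assistant remembered.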

I can prove it will happen earlier but no one will believe me.

Agents and assistants are like buying insurance. You need to pay per token.

Live long enough and eventually it will suck.

Why drain resources training more controllers when we're heading into an energy collapse? Even if they start pumping more oil, it will only delay the inevitable. What would we do with all the extra controllers if we have to fire them in ten years anyway?

Microsoft is on track to be judged for digital genocide.

Am I the only one whose API usage is cheaper than a monthly subscription?
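The break-even is easy to estimate. A rough sketch with illustrative numbers (the per-million-token rates and the $20/month plan price are assumptions, not any vendor's actual pricing):

```python
def monthly_api_cost(requests_per_month, in_tokens, out_tokens,
                     in_price_per_m=3.0, out_price_per_m=15.0):
    """Estimated monthly API bill in dollars, given prices per million tokens."""
    per_request = (in_tokens * in_price_per_m
                   + out_tokens * out_price_per_m) / 1_000_000
    return requests_per_month * per_request

SUBSCRIPTION = 20.0  # illustrative flat monthly plan price

# 200 requests/month, ~4k tokens in and ~1k tokens out per request.
cost = monthly_api_cost(requests_per_month=200, in_tokens=4000, out_tokens=1000)
print(f"API: ${cost:.2f} vs subscription: ${SUBSCRIPTION:.2f}")
```

At moderate usage the metered bill can indeed come out well under a flat plan; heavy agent-style usage flips the comparison.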

I too get stressed out when I'm in over my head.

Trump says something, and within an hour the other side will say it's false. So make all the money before the other side responds and the markets correct? What happens when Trump makes the wrong move? Will his family be in debt for eternity?

ChatGPT wasn't even how this technology was meant to be used; they simply didn't know how to package it. You can tell because all these models have structured response output. It was probably built for structured responses but became useful in other ways. Using ChatGPT is the equivalent of driving a golf cart on the highway. Most people are not smart enough to use it. ChatGPT doesn't even tell you how far back into the context it will still consider your text, so you're always at risk that it forgets earlier information once you go beyond the first response; there is simply no visibility or guarantee. I call it fog-of-context. GPT5.4 is $15 and GPT5.4-pro is $180, so how will the next model be priced? What can the pro model do that you can't do with any other model, and why did they remove cached pricing? Don't get me started on input tokens; we shouldn't even be paying for those. They should be happy some of us continue to use OpenAI despite others' opinions. Sam Altman should be kissing us, not them.

