
So this confirms a best-in-class model release within the next few days?

From a strategic perspective, I can't think of any reason they'd release this unless they were about to announce something which totally eclipses it?



Even without an imminent release it's a good strategy. They're getting pressure from Qwen and other high-performing open-weight models. Without a horse in the race they could fall behind in an entire segment.

There's future opportunity in licensing, tech support, and agents, or even simply in dominating and eliminating competition. Not to mention brand awareness: if you like these models, you might be more likely to approach their brand for larger ones.



GPT-5 coming Thursday.


Are these the stealth models Horizon Alpha and Beta? I was generally impressed with them (although I really only used them in chat rather than any code tasks). In terms of chat, I increasingly see very little difference between the current SOTA closed models and their open-weight counterparts.


Their tokenization suggests they're new Qwen models, AFAIK: they tokenize input to the exact same number of tokens that Qwen models do.
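The fingerprinting idea in the comment above can be sketched: if an unknown endpoint reports the same token counts as a known model's tokenizer across a varied probe set, they likely share a tokenizer (and possibly a model family). The probe strings and the stand-in tokenizers here are illustrative; in practice you'd load the real tokenizers (e.g. via Hugging Face).

```python
# Hypothetical sketch: compare token counts over a probe set to guess
# whether two tokenizers are the same. Probe strings are arbitrary but
# should mix languages, punctuation, and code to be discriminative.
PROBES = [
    "Hello, world!",
    "Schrödinger's cat",
    "def fib(n): return n if n < 2 else fib(n-1) + fib(n-2)",
    "こんにちは世界",
]

def fingerprint(tokenize):
    """Token counts over the probe set for one tokenizer callable."""
    return tuple(len(tokenize(p)) for p in PROBES)

def same_family(tok_a, tok_b):
    """Heuristic: identical counts on every probe -> same tokenizer."""
    return fingerprint(tok_a) == fingerprint(tok_b)

# Toy stand-ins: whitespace splitting vs. character tokenization.
print(same_family(str.split, str.split))  # identical tokenizers match
print(same_family(str.split, list))       # different tokenizers don't
```

Matching counts on a handful of probes is weak evidence on its own; a real comparison would use many diverse probes, since different BPE vocabularies can coincide on short inputs.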


How much hype do we anticipate for the release of GPT-5, or whatever name it ends up with? And how many new features?


Excited to have to send them a copy of my driver's license to try and use it. That'll take the hype down a notch.


Imagine if it's called GPT-4.5o


Even before today, it had been clear for the last week or so, for a couple of reasons, that GPT-5's release was imminent.


Undoubtedly. It would otherwise reduce the perceived value of their current product offering.

The question is how much better the new model(s) will need to be on the metrics given here for them to feel comfortable making these available.

Despite the loss of face from the lack of open-model releases, I don't think that was a big enough problem to undercut their commercial offerings.


> I can't think of any reason they'd release this unless they were about to announce something which totally eclipses it

With only around 5 billion active params, it shouldn't be a competitor to o3 or any of the other SOTA models, given that the top DeepSeek and Qwen models have around 30 billion active params. Unless OpenAI somehow found a way to make a model with 5 billion active params perform as well as one with 4-8 times more.


You hit the nail on the head!!!



