There's nothing new here in terms of architecture. Whatever secret sauce there is, it's in the training.


Part of the secret sauce since o1 has been access to the real reasoning traces, not the summaries.

If you even glance at the model card you'll see this was trained on the same CoT RL pipeline as o3, and it shows when you use the model: this is the most coherent and structured CoT of any open model so far.

Having full access to a model trained on that pipeline is valuable to anyone doing post-training, even if it's just to observe it, but especially if you use its traces as cold-start data for your own training.
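
As a rough illustration of that cold-start step, here's a minimal Python sketch: harvest (prompt, raw trace, answer) triples from the open model and write them out as SFT examples. The field names, file path, and the R1-style <think> delimiters are illustrative assumptions, not any particular pipeline's format.

    # Minimal sketch: packaging harvested (prompt, raw CoT, answer) triples
    # as cold-start SFT data. All names here are illustrative assumptions.
    import json

    # Hypothetical triples collected from the open model's raw traces.
    samples = [
        {
            "prompt": "How many primes are below 20?",
            "reasoning": "Primes below 20: 2, 3, 5, 7, 11, 13, 17, 19. That is 8.",
            "answer": "8",
        },
    ]

    with open("cold_start.jsonl", "w") as f:
        for s in samples:
            # One common cold-start target: reasoning first, delimited
            # R1-style, then the final answer.
            target = "<think>" + s["reasoning"] + "</think>\n" + s["answer"]
            f.write(json.dumps({"prompt": s["prompt"], "completion": target}) + "\n")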


Its CoT is sadly closer to the sanitised o3 summaries than to R1-style traces.


It has both raw and summarized traces.
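
If "raw" here means the analysis channel in gpt-oss's harmony output format, separating it from the user-facing final channel is mechanical. A minimal Python sketch, assuming the channel markers from the published harmony format (verify the exact tokens against the spec before relying on this):

    # Minimal sketch: split a gpt-oss harmony-format completion into its
    # channels: "analysis" (raw CoT) vs "final" (user-facing answer).
    # The marker strings are an assumption based on the published harmony
    # format; check them against the spec.
    import re

    CHANNEL_RE = re.compile(
        r"<\|channel\|>(\w+)<\|message\|>(.*?)"
        r"(?=<\|end\|>|<\|return\|>|<\|start\|>|$)",
        re.DOTALL,
    )

    def split_channels(completion: str) -> dict:
        channels = {}
        for name, body in CHANNEL_RE.findall(completion):
            channels.setdefault(name, []).append(body.strip())
        return channels

    raw = (
        "<|channel|>analysis<|message|>User asks 2+2. That's 4.<|end|>"
        "<|start|>assistant<|channel|>final<|message|>4<|return|>"
    )
    print(split_channels(raw))
    # -> {'analysis': ["User asks 2+2. That's 4."], 'final': ['4']}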


I mean the raw GPT-OSS traces are close to summarised o3.



