GLM-4.5 seems to outperform it on TauBench, too. And it's suspicious that OAI isn't sharing numbers for quite a few useful benchmarks (nothing related to coding, for example).
One positive thing I see is the parameter count and overall size: it should make inference more economical than the current open-source SOTA.