Yes, but now that your CPU utilization is uncapped, you can more easily scale utilization down and retain some form of proportional performance, so it doesn't matter. If you capped the system at 60% CPU, it might change the overall numbers, but if you're doing 1.8x the TPS at the same usage, it's a win either way. It's not a marketing trick; those numbers read as "very good" to me.
If Expensive Server CPU = X dollars per unit, and it's only used at 60% capacity and can realistically only be used at that capacity, then you have effectively just set 0.4*X dollars on fire, per unit. If you can take a workload and scale it vertically to saturate 90% of a machine, it's generally easy to apply QoS and other isolation techniques to run at lower saturation while retaining a proportional level of performance. The reverse is not true: if you can only hit 60% of total machine saturation before you need to scale out, then the only way to get to 90% or higher is a redesign. Which is exactly what has happened here.
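The arithmetic above can be sketched out with hypothetical numbers (the unit price, per-server capacity, and demand figures below are assumptions for illustration, not from the comment):

```python
import math

def wasted_spend(unit_cost: float, max_saturation: float) -> float:
    """Dollars per unit effectively set on fire when a server can
    realistically only run at max_saturation of its capacity."""
    return unit_cost * (1.0 - max_saturation)

def servers_needed(total_demand: float, per_server_capacity: float,
                   max_saturation: float) -> int:
    """Fleet size required when each box's usable capacity is capped
    at max_saturation before you must scale out."""
    usable = per_server_capacity * max_saturation
    return math.ceil(total_demand / usable)

X = 10_000.0  # assumed price per server, in dollars
print(wasted_spend(X, 0.60))               # 4000.0 burned per unit at a 60% ceiling
print(servers_needed(900.0, 100.0, 0.60))  # 15 boxes to serve the same demand
print(servers_needed(900.0, 100.0, 0.90))  # 10 boxes if you can push to 90%
```

The asymmetry is that going from a 90%-capable design down to 60% is a QoS knob, while going from a 60% ceiling up to 90% is a redesign plus, in the meantime, 50% more hardware.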