Who is going to build the architecture and compile the device-specific kernels? You have to pay those people too, and you can save a ton of money and time by doing it with CUDA.
There are only about three AI companies with the technical capability and resources to afford that, and two of them either don't offer their chips to others or have gone back to Nvidia. The rest are manufacturers desperately trying to get a piece of the pie.
If all they need is a chatbot, and the model runs with the same accuracy and performance on non-CUDA hardware, would they still want CUDA-based hardware?