As an AMD user, I have to say that it's nowhere near nvidia for ML.
Basically ALL ML tooling is written with CUDA as a target, and while ROCm promises some level of compatibility, it's a buggy mess that crashes with most models out there.
If OpenCL had taken off as the standard, any GPU would have been able to do everything just fine, but unfortunately industry always takes the "worse is better" approach to everything.
And like politics, it's a bunch of made up shit that, when believed by an entire society, determines how, and whether, you can live in it.
Humans are defined by making shit up and then believing it.
You only think politics is a bunch of made up shit because a bunch of powerful people have propagandized you into that opinion. They believe politics is real and it is the sad character of the 21st century that the average person thinks it is spectacle.
JIT is a compilation model, typically used for interpreted languages, that translates bytecode into machine code at runtime to get better performance. The opposite of JIT is AOT (Ahead of Time) compilation. JIT languages go through two compilation steps: one to bytecode ahead of time, and another from bytecode to native code at runtime. Java is one example of a JIT-compiled language that also does ahead-of-time compilation to bytecode, while Python compiles to bytecode transparently but doesn't do any JIT compilation to native code, which is part of the reason it's considered a "slow language" (though that expression doesn't really make sense, since slowness is a property of the implementation, not of the language).
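You can actually see the "transparent compilation to bytecode" step for yourself with the standard library's `dis` module. A minimal sketch (the function `add` is just an arbitrary example, not anything from the thread):

```python
import dis

def add(a, b):
    return a + b

# CPython has already compiled the function body to bytecode by the
# time the def statement runs; the compiled form lives on the code
# object. Stock CPython then interprets this bytecode directly instead
# of JIT-compiling it to native machine code.
print(type(add.__code__.co_code))  # the raw bytecode is just bytes

# dis decodes those bytes into readable instructions.
ops = [instr.opname for instr in dis.Bytecode(add)]
print(ops)
```

The exact opcodes printed vary between CPython versions (the bytecode format is an implementation detail, not part of the language), which is another illustration of the point that "slow" describes the implementation, not Python itself.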
You started using Javascript after they fixed it with ES6. When I started working with Python, I had just left a NodeJS project in ES5 callback + prototype hell and didn't want to get anywhere near Javascript until they added proper scoping rules and classes.