Intel used to be undisputed leaders in chip fabrication. They've always been hit or miss on chip design. Most notably, every time they've tried to develop two CPU microarchitectures in parallel, at least one of them has been a major failure. And most of the times AMD has been able to take the lead, it's been because Intel's chip design failed so hard it squandered their fab advantage and gave AMD an opportunity to catch up.
Since Intel's early designs for a discrete GPU were based on introducing a third x86 microarchitecture, the failure was not really surprising.
> it's been because Intel's chip design failed so hard it squandered their fab advantage and gave AMD an opportunity to catch up.
Pretty sure you got that backwards. Intel fell behind because their fab advantage dissipated as they struggled on 14nm for four generations longer than their timelines anticipated; their chip design was actually doing alright at the time (without foreknowledge of Spectre, of course).
In addition, AMD and Intel went pretty tit-for-tat all the way through 90nm, with AMD usually having the superior per-node technology (e.g. AMD's 90nm > Intel's 90nm) and Intel usually being slightly ahead in node size. Ironically, that's similar to the situation between Intel and Samsung/TSMC/etc. now, just in reverse. That balance didn't really fall apart until 65nm, and didn't really crash until 45nm.
I may simply have been considering a longer period of history than you. I was counting Itanium and later Pentium 4 among Intel's major failures in microarchitecture design, and consider early Opteron to be AMD's most significant and sustained time in the lead prior to their Zen renaissance.
Intel being stuck on 14nm for so long was basically a single sustained fab failure, but their inability to make any significant improvements to the Skylake CPU core or deliver an improved iGPU during those years also illustrates some severe problems on the chip design side of the company: they're too trusting of what their fab teams promise, and their chip designs are too closely tied to a particular fab process, preventing them from porting those designs to a working process when things go wrong with their fab plans.
> So all of a sudden, as Warren Buffett says, “You don’t know who’s swimming naked until the tide goes out.” When the tide went out with the process technology, and hey, we were swimming naked, our designs were not competitive. So all of a sudden we realized, “Huh, the rising tide ain’t saving us. We don’t have leadership architecture anymore.” And you saw the exposure.
Do you think Nvidia is the world leader (by a wide margin) in GPU market share because their hardware or software is so great? It seems like no one can displace CUDA at this point.
I don't know about the hardware, but one factor about the software is that Nvidia generally pushes their proprietary software solutions, while AMD favors more open source and open standards. This has likely helped Nvidia a lot. People sometimes confuse this, but open source being good overall doesn't mean that it is good for the business of an individual company. Just look at Apple and iMessage vs RCS. Proprietary tech can be used to push out competitors.
Both? I’m leaning a bit more towards HW, though. CUDA isn’t exceptionally hard to replicate (if you’re willing to spend billions of dollars, anyway).