OK, that is interesting: separating the infra from the AI valuation. I can see what you mean, though, because stock prices are volatile and unpredictable, but a datacenter will remain in place even if its owner goes bankrupt.
However, I think the AI datacenter craze is definitely going to experience a shift. GPUs become obsolete really fast, especially now that we are moving toward specialised neural chips. Within a few years, all those datacenters with thousands of GPUs will be outcompeted by datacenters with a quarter of the power demand and a tenth of the physical footprint, thanks to improved efficiency. And if the valuation does collapse and investors pull out of these companies, where are these datacenters supposed to go? Would you buy a datacenter chock full of obsolete chips?
Right, the obsolescence rate of GPUs is one of the primary drivers of the depreciation-shenanigans side of the bubble.
However, I've come across a number of articles that paint a very different picture. E.g. this one is from someone in the GPU farm industry, so it's clearly going to be biased, but by the same token the author seems more knowledgeable. They claim that demand is so high that even 9-year-old generations still get booked like hot cakes: https://www.whitefiber.com/blog/understanding-gpu-lifecycle
> They claim that demand is so high that even 9-year-old generations still get booked like hot cakes
What does this prove? Demand is inflated in a bubble. If the AI company valuation bubble pops, demand for obsolete GPUs will evaporate.
The article you're linking here doesn't say what percentage of those 9-year-old GPUs have already failed, nor does it say when they were first deployed, so it's hard to draw conclusions. In fact their math doesn't seem to consider failure at all, which is highly suspicious.
In another subthread, you pointed to the top comment here about a 5-year MTBF as supposedly contradicting the original article's thesis about depreciation. Five years is obviously less than the nine years here, so clearly something doesn't add up. (Besides, a 5-year MTBF is rather poor to begin with, and there isn't normally a correlation between depreciation and MTBF. So this is not a smoking gun that contradicts anything in Tim Bray's original article.)
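For what it's worth, here's a rough back-of-the-envelope sketch of what a 5-year MTBF implies for fleet attrition. The exponential (memoryless) failure model is purely my assumption, since nobody in this thread has published an actual failure distribution:

```python
import math

MTBF_YEARS = 5.0  # the 5-year figure cited in the other subthread

def surviving_fraction(age_years: float, mtbf: float = MTBF_YEARS) -> float:
    """Fraction of a fleet still alive at a given age, assuming
    exponential (memoryless) failures with the given MTBF."""
    return math.exp(-age_years / mtbf)

for age in (2, 5, 9):
    print(f"after {age} years: {surviving_fraction(age):.0%} of units still running")
# after 2 years: 67% of units still running
# after 5 years: 37% of units still running
# after 9 years: 17% of units still running
```

Under those hand-wavy assumptions, a 5-year MTBF still leaves roughly a sixth of a fleet running at 9 years, so the two figures aren't strictly incompatible; but that attrition is exactly what the linked article's capacity math seems to ignore.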
Is it? The dot-com fiber bubble, for instance, was famous for laying far more fiber than would be needed for the next decade, even as immediate organic demand was tiny.
In this case, however, every hyperscaler is bemoaning / low-key boasting that they have been capacity-constrained for multiple quarters now.
The other data point is the climbing rate of AI adoption as reported by non-AI-affiliated sources, which also lines up with what AI companies report, like:
That article is a little crazy. Not only are 54% of Americans using AI (10 percentage points over last year), but usage at work may even be boosting national-level metrics!
> In fact their math doesn't seem to consider failure at all, which is highly suspicious.
That's a good point! If I had to guess, that may be because Burry et al. don't mention failure rates either, and seem to assume a ~2-year obsolescence cycle based on the release cadence of new GPU generations.
As such, everybody is responding to those claims. The article I linked was making the point that even 9-year-old generations are still in high demand, which also explains the 5-year vs 9-year difference: these are two entirely different generations and models, H100 vs M4000.
And while MTBF is not directly related to depreciation, it's Bray who brings up failure rates in a discussion about depreciation. This is one reason I think he's just riffing off what he's heard rather than speaking from deep industry knowledge.
I've been trying, without luck, to find any discussion that mentions concrete failure rates. Which makes sense, since they're probably heavily NDA'd numbers.
Yes, demand is absolutely inflated in a bubble. We're talking about GPUs, so look at hardware sales for the comparison, not utility infrastructure. Sun Microsystems' revenue during and after the dotcom bubble, for example. Or Cisco's, for a less extreme but still informative case.
> it's Bray who brings up failure rates in a discussion about depreciation
Yes, I understood his point to be that depreciation schedules for GPUs are overly optimistic (too long) while their MTBF is unusually low, implying that what's on the books as assets may be inflated compared to previously normal practice in tech.
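To put illustrative numbers on that, here's a sketch only: the purchase price, schedule length, and failure behaviour below are all made-up assumptions, not figures from the thread or from any filing:

```python
import math

PURCHASE_PRICE = 30_000.0   # illustrative per-GPU price, not a real quote
BOOK_LIFE_YEARS = 6.0       # a plausible-sounding straight-line schedule (assumption)
MTBF_YEARS = 5.0            # the 5-year figure from the other subthread

def book_value(age: float) -> float:
    """Straight-line depreciation over BOOK_LIFE_YEARS."""
    return PURCHASE_PRICE * max(0.0, 1.0 - age / BOOK_LIFE_YEARS)

def survival_adjusted_value(age: float) -> float:
    """Same schedule, but weighted by the expected fraction of units
    still alive under exponential failures with MTBF_YEARS."""
    return book_value(age) * math.exp(-age / MTBF_YEARS)

for age in (1, 2, 3, 4, 5):
    print(f"year {age}: book ${book_value(age):>8,.0f}   "
          f"survival-adjusted ${survival_adjusted_value(age):>8,.0f}")
```

The gap between the two columns, before even factoring in obsolescence from a ~2-year product cadence, is the kind of inflation I read Bray as pointing at.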
In any case, at this point I agree with the other commenter who said you're just trying to confirm your existing opinion, so there's not really much sense in continuing this discussion.