
There is a lot of bubbliness, sure, but some of the rhetoric is a bit sloppy. Like the "swapping money back and forth" arguments: that is literally what economies and specialization result in.

The debt securitization could be an issue, but one thing that stands out to me is that if GPUs are really being used as the lien or collateral, these are fundamentally depreciating assets and are marked as such, even if the depreciation rates are slightly wrong.

Any new tech that renders the current build-out obsolete could dramatically hit these loans, of course.



>if GPUs are really being used as the lien or collateral, these are fundamentally depreciating assets and are marked as such, even if the depreciation rates are slightly wrong.

As long as the initial value of a Blackwell chip, derived from correctly forecast and discounted cash flows, is sound, this is correct. On the other hand, if the initial value is a function of scarcity/demand and high manufacturing cost that's wildly detached from actual returns, then 'could be an issue' is a significant understatement.
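
A minimal sketch of that check, where every number (rental rate, utilization, opex share, discount rate, sticker price) is assumed purely for illustration:

    # Toy DCF check: is the sticker price backed by plausible rental cash flows?
    # All figures below are assumptions for illustration, not real market data.

    def dcf_value(annual_net_cash, years, discount_rate):
        """Present value of a flat stream of annual net cash flows."""
        return sum(annual_net_cash / (1 + discount_rate) ** t
                   for t in range(1, years + 1))

    hourly_rate = 2.50       # $/GPU-hour rental price (assumed)
    utilization = 0.70       # fraction of hours actually billed (assumed)
    opex_share = 0.35        # power/cooling/space as a share of revenue (assumed)
    annual_net = hourly_rate * 24 * 365 * utilization * (1 - opex_share)

    sticker_price = 40_000   # assumed scarcity-driven purchase price
    pv = dcf_value(annual_net, years=6, discount_rate=0.12)

    print(f"PV of 6 years of net rental income: ${pv:,.0f}")
    print(f"Sticker price:                      ${sticker_price:,.0f}")
    # If the sticker price sits well above the PV, the 'collateral value'
    # rests on scarcity pricing rather than on the asset's earning power.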


> even if the depreciation rates are slightly wrong.

The TFA cites a linked study which states "CoreWeave, for example, depreciates its GPUs over six years", which is way more than 'slightly wrong'. Just mapping that backward: 2020's hot new data center GPU was the A100, and those are just reaching their fifth year of service. How many large customers are lining up to pay top dollar to rent one of those 5-year-olds for the next 12 months? For most current workloads, I think keeping A100s running is already a net negative in opportunity-cost terms: that power, cooling, and rack space are more profitably allocated toward 2023's now mid-life H100 GPUs.

The rate of data center GPU progress has accelerated significantly in the last five years. I hardly know anything about AI workloads, but even I know that newer GPU capabilities like FP8 support are recent additions that can deflate the value of older GPUs almost overnight. With everyone now hunting for those optimization shortcuts, it's foolish to think more won't be discovered soon. The odds that this year's newly installed H200 GPUs will keep generating significant rental fees for 72 months are, IMHO, vanishingly small. Over a trillion dollars of loans have been secured by assets actually worth maybe half the claimed value. It's like 2009 sub-prime mortgages all over again.
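
To put rough numbers on that, here's a toy comparison of a 6-year straight-line book schedule against a market value assumed (purely as a guess) to halve every 18 months:

    # Compare 6-year straight-line book value with an assumed market value
    # that halves every 18 months. The half-life is a guess for illustration,
    # not measured resale data.
    cost = 1.0
    life_years = 6
    half_life_years = 1.5   # assumed market-value half-life

    for year in range(life_years + 1):
        book = cost * max(0.0, 1 - year / life_years)
        market = cost * 0.5 ** (year / half_life_years)
        print(f"year {year}: book value {book:.2f}, assumed market value {market:.2f}")

Under those assumptions the book value sits at roughly double the market value through the middle of the asset's life, which is the kind of gap being claimed here.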


> How many large customers are lining up to pay top dollar to rent one of those 5-year-olds for the next 12 months?

If it's depreciated straight-line over 6 years, then at 5 years it's carried at about 17% of its initial price. That seems kind of reasonable?
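
For reference, the straight-line arithmetic behind that 17% (the purchase price here is just a placeholder):

    # Straight-line depreciation: book value falls by 1/6 of cost each year.
    cost = 30_000            # placeholder purchase price, not a real quote
    life_years = 6

    for year in range(life_years + 1):
        book_value = cost * (1 - year / life_years)
        print(f"end of year {year}: ${book_value:,.0f} ({book_value / cost:.0%} of cost)")

At the end of year 5 the remaining book value is 1/6 of cost, about 17%.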


Yes, I think it's worth getting into the details. My mental model is that anything within 2x is a sort of reasonable error; what you look for are 10x errors and cliff edges, like in the 2007 crisis, where a good anecdote is default probability assumptions of 2% being realized at 30% (15x).

Is a 15x error in realized GPU value plus the debt plausible, AFTER INFLATION? I suppose it is, but it feels less likely except in some tail scenarios that have other interesting properties.

This doesn't mean there isn't a significant possibility of a market correction due to other factors, but the GPU factor just seems medium-sized compared to other historical scenarios. Am I missing anything in the first-order thinking?
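
To make the 2x-versus-15x intuition concrete, a toy sketch; the 80% loan-to-value ratio and the book value are made-up numbers, not anything from TFA:

    # Toy stress test: recovery on a GPU-backed loan when the realized
    # resale value misses the assumed book value by various factors.
    book_value = 100.0
    loan = 0.80 * book_value   # assumed loan-to-value of 80%

    for miss_factor in (1, 2, 5, 15):
        realized = book_value / miss_factor
        shortfall = max(0.0, loan - realized)
        print(f"value off by {miss_factor:>2}x: realized {realized:6.1f}, "
              f"shortfall {shortfall:5.1f} ({shortfall / loan:.0%} of the loan)")

Under these assumptions a 2x miss costs the lender a bit over a third of the loan, while a 15x miss wipes out nearly all of it, which is the cliff-edge distinction being drawn.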


Insane question, asked for the purposes of discussion: Would it make sense if those GPUs were top-of-the-line for years? Like if TSMC were destroyed?

Even then, I don't understand why being a landlord to the place where AI is trained would be financially exciting... Wouldn't investing in NVIDIA make a lot more sense?



