
H100s are going for about $3/hr; 384*24*3 ~ $28k


This is indeed a reasonable cost estimate for competitive short-term H100 rentals (source: much SemiAnalysis coverage, and my own exploration of the market), but there is a critical error (besides the formatting glitch with `*`):

It was 24 days (576 hours), not 24 hours: 384 cards x 576 h x $3/h = $663,552.


According to Runpod's pricing page, you can run an H100 for $2.39/h, so the total can go as low as 384 x 576 x $2.39 = $528,629.76.

WARNING: this is highly speculative napkin math.

H200 (141 GB HBM3e - $3.99/h - ~1.4x perf): 216 cards x 24 h x 17 days = 88,128 GPU-hours = $351,630.72

B200 (192 GB HBM3e - $5.99/h - ~2.8x perf): 158 cards x 24 h x 9 days = 34,128 GPU-hours = $204,426.72

The math is probably wrong; it should come out more efficient and cheaper. I also doubt they have 100-200 cards available for that long.
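The napkin math above can be sketched in a few lines. Prices are the on-demand rates quoted in this thread; the card counts and durations are the commenter's speculative picks, not anything confirmed:

```python
def rental_cost(cards, days, price_per_hour):
    """Total cost of renting `cards` GPUs around the clock for `days` days."""
    gpu_hours = cards * 24 * days
    return gpu_hours, gpu_hours * price_per_hour

# (cards, days, $/GPU-hour) per the figures quoted in this thread
scenarios = {
    "H100 @ $3.00/h": (384, 24, 3.00),
    "H100 @ $2.39/h": (384, 24, 2.39),
    "H200 @ $3.99/h": (216, 17, 3.99),
    "B200 @ $5.99/h": (158,  9, 5.99),
}

for name, (cards, days, price) in scenarios.items():
    hours, cost = rental_cost(cards, days, price)
    print(f"{name}: {hours:,} GPU-hours -> ${cost:,.2f}")
```

Running it reproduces the thread's figures: $663,552.00 and $528,629.76 for the H100 runs, $351,630.72 for the H200 guess, and $204,426.72 for the B200 guess.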

Source: I've only trained on RTX 4090s and similar setups with 8 cards.

Not affiliated in any way with Runpod.


Take this brother, \*, it may serve you well


Runpod is worth a look for these on-demand workloads: https://www.runpod.io/pricing. I use it a lot for ffmpeg workloads.

I also found this a few days ago, which might be neat for finding cheaper options: https://www.primeintellect.ai/

No affiliation with either


Adding one more that's worth a look: https://www.shadeform.ai


You can get $2.20/GPU/hr on-demand [1], and likely around $2 for a big order like this.

[1]: https://datacrunch.io/products#H100


You can go much lower: https://gpulist.ai/


The price just keeps dropping with each comment. Anyone going to estimate it for less?

What's the source for $3/h?


They miscalculated: they used only 24 hours instead of 24 days, so their number is off by a factor of 24.



