Hacker News
leopoldj
69 days ago
| on:
A beginner's guide to deploying LLMs with AMD on W...
The blog doesn't verify whether the code is actually using the GPU. The code will run perfectly fine on CPU, just slowly. You should run this to be sure:
python -c "import torch; print(torch.cuda.is_available())"
It may look strange that torch.cuda.is_available() is the check for AMD too, but PyTorch's ROCm builds reuse the torch.cuda namespace for AMD GPUs. Use rocm-smi to be doubly sure.
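A slightly fuller check along these lines (a sketch, assuming a standard PyTorch install; on ROCm builds torch.version.hip is set, on CUDA builds torch.version.cuda is):

```python
import torch

# True on both CUDA and ROCm builds of PyTorch, since the ROCm port
# reuses the torch.cuda namespace for AMD GPUs.
print("GPU available:", torch.cuda.is_available())

# Distinguish the backends: exactly one of these is non-None
# depending on how PyTorch was built.
print("CUDA build:", torch.version.cuda)
print("ROCm (HIP) build:", torch.version.hip)

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # Confirm work actually lands on the GPU rather than silently on CPU.
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    print("Computed on:", y.device)
```

If the first line prints False while rocm-smi sees the card, the installed wheel is likely a CPU-only or CUDA build rather than a ROCm one.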