
Neat! I'm already using openwebui/ollama with a 7900 xtx but the STT and TTS parts don't seem to work with it yet:

[2025-05-05 20:53:15,808] [WARNING] [real_accelerator.py:194:get_accelerator] Setting accelerator to CPU. If you have GPU or other accelerator, we were unable to detect it.

Error loading model for checkpoint ./models/Lasinya: This op had not been implemented on CPU backend.



I've given up trying to run LLMs locally on AMD


Basically anything llama.cpp-based (Vulkan backend) should work out of the box without much fuss (LM Studio, Ollama, etc.).

The HIP backend can give a big prefill speed boost on some architectures (high-end RDNA3, for example). For everything else, I keep notes here: https://llm-tracker.info/howto/AMD-GPUs
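For reference, a rough sketch of building llama.cpp with either backend. The CMake flag names below (GGML_VULKAN, GGML_HIP, AMDGPU_TARGETS) match recent llama.cpp versions but have changed over time, and the HIP build usually needs the ROCm compiler set up first, so treat this as a starting point and check the repo's build docs:

    # Vulkan backend: works on most AMD cards with standard Vulkan drivers
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release -j

    # HIP/ROCm backend: faster prefill on some RDNA3 parts
    # (gfx1100 is the 7900 XTX; requires a working ROCm install)
    cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100
    cmake --build build --config Release -j

    # run with all layers offloaded to the GPU
    ./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"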



