
That's not all: my MacBook (48 GB of unified memory) can run better local LLMs at a workable speed than my RTX 5090 rig can, plus Apple has MLX and the Neural Engine.

The reason there was such a narrative is that Wall Street and Silicon Valley are both narrative machines with little regard for veracity, and they are also not that smart (at least according to people who have successfully beaten their system, such as Buffett).

"Warren, if people weren't so often wrong, we wouldn't be so rich." – the late great Charlie Munger.





Yeah, tbh it sometimes feels like a lot of the moaning from those crowds is more about self-validation than anything concrete.

That's pretty cool! What are the advantages of using a local LLM currently? Do you tune them? I suppose it will be more enshittification-proof.

> What are the advantages of using a local LLM currently?

You don't have to send all your thoughts to a third party. That's the advantage.




