woadwarrior01 on March 23, 2024 | on: Emad Mostaque resigned as CEO of Stability AI
Indeed! Also, Mixtral 8x7b runs just as well on older M1 Max and M2 Max Macs, since LLM inference is memory bandwidth bound and memory bandwidth hasn't significantly changed between M1 and M3.
karolist on March 23, 2024
It didn't merely stay the same; in certain configurations it was actually reduced.
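The back-of-envelope reasoning behind both comments can be sketched as follows. If decoding is memory-bandwidth bound, every active weight must be read from memory once per generated token, so bandwidth divided by the active-weight footprint gives a rough tokens/sec ceiling. The figures here are assumptions, not from the thread: ~12.9B active parameters per token for Mixtral 8x7B (2 of 8 experts plus shared layers), 4-bit quantization (~0.5 bytes/param), and illustrative Apple Silicon bandwidth numbers, including a reduced configuration.

```python
def max_tokens_per_sec(bandwidth_gb_s, active_params_b, bytes_per_param=0.5):
    """Upper bound on decode speed for a memory-bandwidth-bound LLM:
    assumes every active weight is read once per generated token."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Illustrative bandwidths (GB/s); actual values vary by configuration,
# which is the reply's point about some newer configs being reduced.
chips = [("M1 Max", 400), ("M2 Max", 400), ("M3 Max (reduced config)", 300)]
for chip, bw in chips:
    print(f"{chip}: ~{max_tokens_per_sec(bw, 12.9):.0f} tok/s ceiling")
```

Since the ceiling scales linearly with bandwidth, a chip generation with the same (or lower) bandwidth gives the same (or lower) decode speed regardless of extra compute, which is why an older M1 Max keeps pace.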