Hacker News
Latest MLX Release Includes Jaccl RDMA Back End over TB5 (twitter.com/awnihannun)
2 points by geerlingguy 4 days ago
Kimi K2 1T model runs on 2 512GB M3 Ultras (twitter.com/awnihannun)
233 points by jeudesprits 8 days ago | 121 comments
Transformers are almost adversarially designed for computer memory hierarchy (twitter.com/awnihannun)
4 points by tosh 4 months ago
DeepSeek R1 671B running on 2 M2 Ultras faster than reading speed (twitter.com/awnihannun)
96 points by thyrox 10 months ago | 29 comments
Apple MLX was open sourced one year ago (twitter.com/awnihannun)
3 points by amrrs on Dec 5, 2024
MLX 0.11: faster generation across model sizes and machines (twitter.com/awnihannun)
3 points by tosh on April 20, 2024
With the latest MLX, 4-bit Llama 3 8B runs nicely on an 8GB M2 mini (twitter.com/awnihannun)
2 points by mariuz on April 19, 2024
100 tokens/s, 4-bit Mistral 7B in MLX on M2 Ultra (faster than llama.cpp) (twitter.com/awnihannun)
3 points by tosh on April 4, 2024
Apple is hiring GPU kernel engineers for the MLX project (twitter.com/awnihannun)
4 points by behnamoh on Jan 24, 2024
Mistral 7B 4-bit quantization runs no problem on an 8GB M2 (twitter.com/awnihannun)
20 points by tosh on Dec 22, 2023