At least for me Claude Code is still working on my Pro plan. I don't know if that's because the change simply hasn't propagated all the way through their systems yet (the change is now up on the main Claude pricing page and on their support pages, but not on the Claude Code landing page yet), or if it's because existing plans are grandfathered in, or what.
In general Anthropic seems to be pretty bad at clearly communicating what is going on. I have both Claude Pro for Claude Code and ChatGPT Plus for Codex, and lately I've been reaching for Codex first more and more often... at least for the hobby stuff I'm using Claude/Codex on, they seem pretty much equivalent in terms of practical capability/usefulness.
I'm really hopeful about John Ternus stepping into the CEO role. Pretty much everything he's done leading Apple's hardware engineering has been an enormous unqualified success, and for a company like Apple, having hardware lead the company seems like the right step.
No, MLX is nothing like a CUDA translation layer at all. It’d be more accurate to describe MLX as a NumPy translation layer; it lets you write high-level code dealing with NumPy-style arrays and under the hood will use a Metal GPU or CUDA GPU for execution. It doesn’t translate existing CUDA code to run on non-CUDA devices.
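To illustrate the point, here's a minimal sketch of what "NumPy-style" means. The example is written against plain NumPy so it runs anywhere; the assumption (per MLX's documented API) is that with MLX you'd swap the import for `import mlx.core as mx`, build arrays with `mx.array`, and the same high-level operations would dispatch to a Metal or CUDA GPU instead:

```python
# Illustration only: this uses NumPy on the CPU. MLX mirrors this
# API style (mx.array, the @ matmul operator, etc.) but executes
# on the GPU -- it does not consume or translate CUDA source code.
import numpy as np

a = np.arange(6.0).reshape(2, 3)   # NumPy-style array creation
b = np.ones((3, 2))
c = a @ b                          # same operator style MLX exposes
print(c.shape)                     # (2, 2)
```

The key distinction: you write array-level code against this kind of API from the start, rather than taking a pile of existing `.cu` kernels and expecting them to run on Apple silicon.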
That is not just your opinion; it is the opinion of multiple United States Courts of Appeals across many, many cases, and, by declining to overturn those cases, it is also the opinion of the United States Supreme Court. The United States is a common law country, so what that really means is that your opinion is not an opinion at all; you have simply stated the established law of the land.
Yeah, it used to be true that server GPUs at least somewhat resembled their gaming counterparts (e.g., Nvidia's Tesla server cards from 12+ years ago); they were still PCIe cards, just with server-optimized coolers, and they fundamentally shared the same dies that the gaming and professional cards used.
That stopped being true many years ago though, and the divergence has only accelerated with the advent of AI datacenter usage. The form factor is now fundamentally different (SXM instead of PCIe); you can adapt an SXM card to PCIe with some effort [1], but that may not even be worthwhile because 1. the power and cooling requirements for SXM cards are radically different from those of a desktop part and, more importantly, 2. the dies are no longer even close to being the same. IIRC, Blackwell AI chips straight up don't have rasterization hardware onboard at all; internally they look like a moderate number of general-purpose SMs attached to a huge number of tensor cores. Modern AI GPUs are fundamentally optimized for, well, matrix multiplications, which is not at all what you want for gaming or really any non-AI application.
One I like is Tom Macwright’s blog [1], which somewhat famously loads insanely fast thanks to having what amounts to the web equivalent of brutalist design, while still looking nice [2].