Selecting your own mute words for your timeline is the best part about Twitter. The algorithm changes have been pretty bad in the last few weeks though.
FYI the “mute by keyword” feature exists both on Bluesky and Mastodon and I use it extensively. I don’t use Threads but a quick search tells me it’s available there too.
Now you know, and you don't need to use Twitter anymore!
It's known that Anthropic's $20 Pro subscription is a gateway plan to their $100 Max subscription, since you'll easily burn your token rate on a single prompt or two. Meanwhile, I've had ample usage testing out Codex on the basic $20 ChatGPT Plus plan without a problem.
As for Anthropic's $100 Max subscription, it's almost always better to start a new session per task, since a long conversation will burn through your 5-hour usage limit in just a few prompts (assuming they read many files). It's also best to start with a planning pass in Claude, providing line numbers and exact file paths up front and drilling down into the requirements before you start any implementation.
> It's known that Anthropic's $20 Pro subscription is a gateway plan to their $100 Max subscription, since you'll easily burn your token rate on a single prompt or two.
I genuinely have no idea what people mean when I read this kind of thing. Are you abusing the word "prompt" to mean "conversation"? Or are you providing a huge prompt that is meant to spawn 10 subagents and write multiple new full-stack features in one go?
For most users, the $20 Pro subscription, when used with Opus, does not hit the 5-hour limit on "a single prompt or two", i.e. 1-2 user messages.
Today I literally gave Claude a single prompt, asking it to make a plan to implement a relatively simple feature that spanned a couple of different codebases. It churned for a long time, I asked a couple of very simple follow-up questions, and then I was out of tokens. I do not consider myself to be any kind of power user at all.
The only time I've ever seen this happen is when you give it a massive codebase, without any meaningful CLAUDE.md to help make sense of it and no explicit @-mentioning of files/folders to guide it, and then ask it for something with huge cross-cutting changes.
> spanned a couple different codebases
There you go.
If you're looking to prevent this issue, I really recommend you set up a number of AGENTS.md files: at least a top-level one, and potentially nested ones for huge, sprawling subfolders. Also @-mention the 2-3 most relevant things, even at the folder level rather than the file level.

Not just for Claude: it greatly increases speed and reduces context rot for any model if it has to search less and can more quickly understand where things live and how they work together.
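For anyone who hasn't written one, a minimal top-level AGENTS.md along these lines is often enough. The repo layout, paths, and commands below are entirely made up for illustration; the point is the shape: layout, conventions, entry points.

```markdown
# AGENTS.md (illustrative example — paths and commands are hypothetical)

## Layout
- `api/` – REST backend; handlers in `api/handlers/`
- `web/` – frontend; shared components in `web/src/components/`
- `jobs/` – background workers; schedules in `jobs/cron.yaml`

## Conventions
- Run `make test` before proposing changes; `make lint` must pass.
- Database migrations live in `api/migrations/`; never edit applied ones.

## Where to start
- Auth logic: `api/handlers/auth.go`
- Feature flags: `web/src/flags.ts`
```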
I have a tool that scans all code files in a repo and prints the symbols (AST based), it makes orienting around easy, it can be scoped to a file or folder.
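The tool itself isn't linked, but the idea is easy to sketch for Python sources using the stdlib `ast` module: parse each file, walk top-level definitions, and print an outline of classes, methods, and functions. Names below (`list_symbols`, `scan`) are illustrative, not the actual tool.

```python
"""Sketch of an AST-based repo symbol scanner (illustrative, not the actual tool)."""
import ast
import pathlib


def list_symbols(path):
    """Return top-level classes (with their methods) and functions in one file."""
    tree = ast.parse(pathlib.Path(path).read_text(), filename=str(path))
    symbols = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            symbols.append(f"def {node.name}")
        elif isinstance(node, ast.ClassDef):
            symbols.append(f"class {node.name}")
            for item in node.body:
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    symbols.append(f"  def {node.name}.{item.name}")
    return symbols


def scan(root, pattern="**/*.py"):
    """Scan a single file or a folder tree and print a symbol outline per file."""
    root = pathlib.Path(root)
    files = [root] if root.is_file() else sorted(root.glob(pattern))
    for f in files:
        print(f)
        for sym in list_symbols(f):
            print(" ", sym)
```

Scoping to a file or folder falls out of the `root` argument, and dumping the outline into the prompt gives the model a map of the codebase without it having to grep around.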
I am on the $100 Max subscription and I rarely hit the limit. I used to, but not anymore; then again, I stopped building two products at the same time and am concentrating on finishing the first/"easiest" one.
> you'll easily burn your token rate on a single prompt or two
My experience has been that I can usually work for a few hours before hitting a rate limit on the $20 subscription. My work time does not frequently overlap with core business hours in PDT, however. I wonder whether there is an aspect of this that is based on real-time dynamic usage.
I’m not surprised they closed the loophole, it always felt a little hacky using an Anthropic monthly sub as an API with a spoofed prompt (“You are Claude Code, Anthropic's official CLI for Claude”) with OpenCode.
Google will probably close off their Antigravity models to 3P tools as well.
It’s incredibly impressive to see a large company with over 30x as many employees as OAI (or 2x if you compare with GDM) step back into the AI race, compared to where they were with Bard a few years ago.
Google has proved it doesn’t want to be the next IBM or Microsoft.
Why are people so surprised? Attention Is All You Need was authored by Googlers. It’s not like they were blindsided. OpenAI productionized it first, but it didn’t make sense to count Google out given their AI history.
Huh? They absolutely were blindsided, and the evidence is there. No one expected ChatGPT to take off like it did, not even OpenAI. Google put out some embarrassing products for the first couple of years, called a code red internally, and asked Sergey and Larry to come back. The fact that they recovered doesn’t mean they weren’t initially blindsided.
People are surprised because Google released multiple surprisingly bad products and it was starting to look like they had lost their edge. It’s rare for a company their size to make such a big turnaround so quickly.
Actually Microsoft has also shown it doesn't want to be the next IBM. I think at this point Apple is the one where I have trouble seeing a long-term plan.
It probably depends on what "the next IBM" means for people. Microsoft is so deeply embedded into companies right now that for larger corporations it's practically impossible to get rid of them, and their cloud-driven strategy is very profitable.
You should compare the number of top AI scientists each company has. I think those numbers are comparable (I’m guessing each has a couple of dozen). Also how attractive each company is to the best young researchers.
We're talking about code generation here, but most people's interactions with LLMs are through text. On that metric Google has led OpenAI for over a year now; even Grok in "thinking" mode leads OpenAI.
Better yet, set up transaction alerts on all your credit cards, and use a budgeting app like Monarch/YNAB to review all your household transactions each month or receive weekly email summaries.
The subway system is one of the greatest socioeconomic equalizers in NYC. During rush hour, you'll share a subway car with a homeless man, an ER doctor wearing scrubs, a fashion model wearing YSL, a finance bro, and a food delivery worker. It's an amazing city for people watching.
https://x.com/robzolkos/status/2024125323755884919?s=46