I agree it is behind - but usually only a few days.
I'm a big fan of the VS Code add-in. Despite the current narrative that IDEs are dead, I find that the ability to look at multiple things at once works much better in some kind of GUI editing tool than it does in just a terminal.
The bottleneck is quality. Underlying AI models aren't good enough for fully autonomous systems. Every task I assign to Claude, I have to review and steer it in a certain direction. Until underlying models get better, all these "teams of AI coworkers" will not work.
Did they make significant improvements in OCR 3? The quality I was getting from Mistral OCR 2 was nowhere near as good as what I could get from just sending the same files to Claude Sonnet via an API call.
Too late to edit / update my comment, but I finally tried Mistral OCR 3 tonight on a PDF file I had. Results were good, and fast... but I actually got better quality output from sending it to Haiku 4.5 instead.
In particular, Haiku 4.5 detected footers that repeated on every page and consolidated them into a single footer at the end of the document, so that the text read more fluently.
I imagine Mistral OCR 3 might have an edge on speed and pricing, but in my low-volume, quality-first case, it seems Claude is still better than Mistral.
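Neither comment shows the actual API call, but a minimal sketch of "sending the same files to Claude via an API call" might look like the following. It builds a Messages API request body that attaches a PDF as a base64 document content block; the model name and the prompt wording are assumptions, not something from the comments above.

```python
import base64

def build_pdf_message(pdf_bytes: bytes, prompt: str) -> dict:
    """Build a Messages API request body (a plain dict) that attaches a
    PDF as a `document` content block alongside a text prompt.
    The model name below is an assumption for illustration."""
    return {
        "model": "claude-sonnet-4-5",  # hypothetical choice; swap in your model
        "max_tokens": 4096,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        # Anthropic-style document block carrying the PDF bytes
                        "type": "document",
                        "source": {
                            "type": "base64",
                            "media_type": "application/pdf",
                            "data": base64.b64encode(pdf_bytes).decode("ascii"),
                        },
                    },
                    {"type": "text", "text": prompt},
                ],
            }
        ],
    }

# Example: construct the request body for a (dummy) PDF payload.
body = build_pdf_message(b"%PDF-1.4 dummy", "Transcribe this document to Markdown.")
```

In practice you would POST this body (plus your API key headers) via the official SDK or an HTTP client; the dict above only shows the request shape.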
I built a lightweight (<1mb) chrome extension (with over 600,000 downloads) that lets you chat with the page, draft emails and messages, fix grammar, translate, summarize the page, etc. You can use models from OpenAI, Google, and Anthropic.