They haven't really. One of their latest blog posts is about how to retrofit the "skills" approach to MCP[0], which makes sense, as the "skills" approach doesn't itself come with solutions for dynamic tool discovery/registration.
The central claim, or "Universal Weight Subspace Hypothesis," is that deep neural networks, even when trained on completely different tasks (like image recognition vs. text generation) and starting from different random initializations, tend to converge to a remarkably similar, low-dimensional "subspace" within their massive set of weights.
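For intuition, here's a toy sketch (not the paper's method) of how you'd probe that claim: stack flattened weight vectors from several independently trained models and check how much variance a handful of SVD components capture. Model count, parameter count, and the random stand-in weights are all made up for illustration.

```python
# Illustrative sketch: do weight vectors from independently trained models
# share a low-dimensional subspace?
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for flattened weight vectors of 8 models with 10_000 parameters
# each; in practice you would load real checkpoints here.
n_models, n_params = 8, 10_000
weights = rng.normal(size=(n_models, n_params))

# Center across models and take the SVD of the stacked weight matrix.
centered = weights - weights.mean(axis=0, keepdims=True)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)

# If the hypothesis held for these models, a few components would carry most
# of the variance (for random data, as here, they will not).
explained = (singular_values ** 2) / np.sum(singular_values ** 2)
print("variance explained by top 3 components:", explained[:3].sum())
```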
It almost feels like OpenAI's recent "fall" is an intentional decoy... something's cooking... maybe they wanted to buy back their own shares at a lower price?
They have SOTA models from OpenAI, Anthropic, and Google, and you can access them at a 5.5% premium. What you get is the ability to seamlessly switch between them, and when one is down you can instantly switch to another. Whether that is valuable to you is use-case dependent, but it isn't without value.
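In practice the switching looks something like this rough sketch against OpenRouter's OpenAI-compatible endpoint; the model IDs, env var name, and fallback order are my own guesses, not anything OR prescribes.

```python
import os
from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible API, so the standard client works.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

# Try models in preference order; fall back if a provider errors out.
candidates = [
    "anthropic/claude-3.5-sonnet",
    "openai/gpt-4o",
    "google/gemini-pro-1.5",
]

def ask(prompt: str) -> str:
    last_error = None
    for model in candidates:
        try:
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        except Exception as err:  # provider outage, rate limit, etc.
            last_error = err
    raise RuntimeError("all models failed") from last_error

print(ask("Say hello in one sentence."))
```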
What it does have, I think, is the problem TaskRabbit had: you can hire a house cleaner through TR, but once you find a good one you can just work with them directly and save the middleman fee. So OR is great for experimenting with a ton of models to find the cheapest one that still handles your tasks, but after that you no longer need OR unless it's for reliability.