A really interesting point that keeps coming up in discussions about LLMs is "what trade-offs need to be re-evaluated?"
> I also believe that observability is up for grabs again. We now have both the need and opportunity to take advantage of it on a whole new level. Most people were not in a position where they could build their own eBPF programs, but LLMs can
One of my big predictions for ‘26 is that the industry follows through with this line of reasoning. It’s now possible to quickly code up OSS projects of much greater utility and depth.
LLMs are already great at Unix-style tools: a small API and codebase that does one interesting thing.
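To make that concrete, here's a minimal sketch of the kind of tool I mean (the name, log format, and field positions are just illustrative assumptions): a filter that reads access-log lines on stdin, counts HTTP status codes, and composes with grep/sort/awk in a pipeline.

```python
#!/usr/bin/env python3
"""statcount: read access-log lines on stdin, print HTTP status code counts.

Hypothetical example of a single-purpose Unix-style tool that an LLM can
generate or modify in minutes and that composes with standard pipelines.
"""
import sys
from collections import Counter

def main() -> int:
    counts = Counter()
    for line in sys.stdin:
        fields = line.split()
        # In common/combined log format the status code is the 9th field;
        # skip lines that don't match that shape.
        if len(fields) >= 9 and fields[8].isdigit():
            counts[fields[8]] += 1
    for status, n in sorted(counts.items()):
        print(f"{status}\t{n}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Usage would look like any other filter, e.g. `cat access.log | statcount | sort -k2 -rn`.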
I think we’ll see an explosion of small tools (and Skills wrapping their use) for more sophisticated roles like DevOps, and meta-Skills that describe how to build your own skill bundles for your internal systems and architecture.
And perhaps more ambitiously, I think services like Datadog will need to change their APIs or risk being disrupted; in the short term nobody is going to be able to move fast enough inside a walled garden to keep up with the velocity Claude + Unix tools will provide.
UI tooling is nice, but it’s not optimized for agents.
I don't think an AI company that relies on selling AI solutions to make money counts as an unbiased source of info, but what do I know? I'm not a VC investor, and I'm skeptical of the rich and elites in general.
Usually the best rule of thumb is to be against anything these people are for.
OK, I can see why they wouldn't list Copilot-related stuff, since that page just provides a list of public Amp threads, some of which have public repos. Copilot is a different framework, so you wouldn't find it on that page. I linked to the Amp threads because you can see both the conversation and the repo for those that include one.