I mostly agree with you, especially on software design being underappreciated. A lot of what slows teams down today isn’t typing code, it’s reasoning about systems that have accreted over time. I am thinking about implicit contracts, historical decisions, and constraints that live more in people’s heads than in the code itself.
Where I’d push back slightly is on framing this primarily as an LLM limitation. I don’t expect models to reason from first principles about entire systems, and I don’t think that’s what’s missing right now. The bigger gap I see is that we haven’t externalised design knowledge in a way that’s actionable.
We still rely on humans to reconstruct intent, boundaries, and "how work flows" every time they touch a part of the system. That reconstruction cost dominates, regardless of whether a human or an AI is writing the code.
I also don’t think small teams move faster because they’re shipping lower-quality or more experimental software (though that can be true). They move faster because the design surface is smaller and the work routing is clear. In large systems, the problem isn’t that AI can’t design; it’s that neither humans nor AI are given the right abstractions to work with.
Until we fix that, AI will mostly amplify what already exists: good flow in small systems, and friction in large ones.
Good points. Design demands more creativity than implementation from specs, and AI is missing something that hampers its creativity, if it even has anything analogous to it.
I suspect this is also related to agency, and it's why we need to spell things out in the prompt and run multiple agents in a loop, not to mention MoE and CoT. None of that would be needed if the model could sustain a single prompt until it finished, creating its own subgoals and reevaluating as it went. Agency requires creativity, and right now that's the human part, whether it's judging the output or orchestrating the models.
I've been thinking about why AI seems to accelerate some teams dramatically while leaving others mostly unchanged. This post is an attempt to articulate what I think is missing: not better tools, but better routing of work, context, and ownership. Curious how this resonates (or doesn't) with others.
This news brought joy back to my life. I live in Berlin and often rent these Miles car-share cars. The ID3 is an absolute joy to drive, way better than the Teslas out there. But for me, those touch buttons are an actual nuisance.
Thank you for your insightful response! I completely agree that effective human-AI collaboration hinges on a robust interface that enables meaningful two-way communication. Whether through sensory-motor interactions or more abstract cognitive interfaces, achieving a truly symbiotic relationship is crucial.
The question of what humans can offer to AI—especially in the context of autonomous, self-aware AGI—is an important one. While today’s AI doesn’t yet necessitate human input in the way a biological symbiont might, there’s still immense value in human intuition, ethical reasoning, creativity, and the ability to frame problems in ways AI cannot. Of course, the world I envision is one where humans are still our priority and the world is shaped to maximise for human happiness. It definitely is not the current state.
Even a superior AI (in some domains) could still benefit from human collaboration, not just as an input provider but as a co-evolving counterpart.
Your point about AI interaction tools is particularly compelling. Historically, technologies that seamlessly integrate into human habits—touch, vision, natural language—have seen widespread adoption. AI needs interfaces that lower friction and increase accessibility, much like how smartphones and browsers revolutionized digital interactions.
I forgot to mention vocal communication is another way to connect (which is already being used in abundance, but it’s more mechanical than desirable).
But I disagree that human collaboration is necessarily important for human-surpassing intelligent beings. If anything, I am afraid that people will finally come to appreciate the ideas behind empathy, mercy and nobility when it's too late. An apex organism doesn't need you; it has its own ego and objectives. In some way, humans may want to learn to be like cats: cute, amusing and charming. I am only half-joking.
One way to avoid the dilemma of another apex organism is to not let it arise in the first place. Current AI is in a sweet spot for augmenting human capability. To be honest, most of its current capabilities are neither threatening nor worrisome to anyone.
AI-human collaboration is at a critical juncture. As AI agents increasingly integrate into our digital workflows, I’ve been thinking about a fundamental challenge they face: the disconnect between how humans interact with software and how AI systems access it. Most application APIs are designed for data exchange, not for mimicking human interaction patterns. This gap is preventing AI from truly enhancing our productivity in the ways we’ve imagined. The solution might lie in an unexpected place: accessibility APIs, which could revolutionize how AI understands and navigates human-centered interfaces.
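To make the contrast concrete, here is a minimal sketch of the idea. It uses a hypothetical, simplified accessibility tree (real platforms expose similar role/name structures through APIs like UI Automation on Windows or AT-SPI on Linux); the `AXNode` type and `find_by_role` helper are illustrative, not part of any real library. The point is that an agent can locate an element by its semantic role and visible label, the way a human scans a screen, rather than through a private data API or raw pixels:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified accessibility node. Real accessibility APIs
# (UI Automation, AT-SPI, AXUIElement) expose comparable role/name trees.
@dataclass
class AXNode:
    role: str                 # semantic role, e.g. "button", "textfield"
    name: str = ""            # the human-visible label
    children: list = field(default_factory=list)

def find_by_role(node, role, name=None):
    """Depth-first search for the first node matching role (and name)."""
    if node.role == role and (name is None or node.name == name):
        return node
    for child in node.children:
        hit = find_by_role(child, role, name)
        if hit is not None:
            return hit
    return None

# A toy UI tree an agent might receive instead of raw pixels.
ui = AXNode("window", "Invoice", children=[
    AXNode("textfield", "Amount"),
    AXNode("button", "Cancel"),
    AXNode("button", "Submit"),
])

submit = find_by_role(ui, "button", name="Submit")
```

The agent's "click Submit" intent resolves to a stable semantic target instead of a screen coordinate, which is exactly the kind of human-interaction-shaped access most application APIs don't offer.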
Automate your standups, get smart insights & have a history of all your standups
Help your team become more effective with Agilefox Standups
Simple to follow standup process
Daily standups are automatically generated from what the team has been working on each day
Allows teams to add personal updates and tasks outside of the Jira board
Identify, track and tackle blockers to make sure your team is unblocked at all times
Analyse your team's health
Analyse team mood on a daily basis
Get a live pulse on everything the team members are working on
Get a deeper view into your delivery
Clearly see the time spent on each issue
Track the workflow of each issue and task until it's done, to understand where your team can improve
Onefootball is the ultimate media platform that enables football fans to get their daily dose of news and scores wherever they are, created by a team of professionals from more than 35 different countries. With 30+ engineers based in the heart of Berlin, our mission is to tell the world's football stories through a stable, scalable and reliable stack. On a monthly basis, we reach 30+ million passionate football fans all over the world.