I think the paper is saying specifically that it's redundant to include information about your coding repository when that information is otherwise available to the agent in higher-fidelity forms (e.g. package.json). This makes sense, but I'm not sure it's about Skills directly.
For the former I'd be interested in learning more about that. From a harness perspective the difference would be the inclusion of the description in the system prompt, and an additional tool call to return the skill. While that's certainly less efficient than adding the context directly I'd be surprised if it degraded task performance significantly.
I tend to be quite focused with my Skill/Tool usage in general though, inviting them into context when needed rather than increasing the potential for model confusion.
Sorry, I misquoted the company; it was Vercel, not Cursor.
"A compressed 8KB docs index embedded directly in AGENTS.md achieved a 100% pass rate, while skills maxed out at 79% even with explicit instructions telling the agent to use them. Without those instructions, skills performed no better than having no documentation at all."
Gotcha - yeah, it removes the tool calling step so their content is always in context (noting they took action to try and reduce the size of that). The framing seems a little simplistic -- thanks for the link.
Yes -- skills live in a special gap between "should have been a deterministic program" and "model already had the ability to figure this out". My personal experience leaves me in agreement that minimal system prompts are definitely the way to go.
One thing I do find is that subagents are helpful for performance -- offloading tasks to smaller models (gpt-oss specifically for me) gets data to the bigger model more quickly.
I quite like the look of this one - seems to fit somewhere between the rigid structure of MCP Elicitations and the freeform nature of MCP-UI/Skybridge.