Hacker News | evalstate's comments

I think the paper is saying specifically that it's redundant to include information about your coding repository when that information is otherwise available to the agent in higher-fidelity forms (e.g. package.json). This makes sense, but I'm not sure it's about Skills directly.

For the former, I'd be interested in learning more. From a harness perspective the difference would be the inclusion of the description in the system prompt, plus an additional tool call to return the skill. While that's certainly less efficient than adding the context directly, I'd be surprised if it degraded task performance significantly.
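To make the harness-side difference concrete, here's a minimal sketch of the two moving parts described above: skill descriptions injected into the system prompt up front, with the full body only entering context via a tool call. All names here (`Skill`, `build_system_prompt`, `load_skill`) are illustrative, not any real harness's API.

```python
# Hypothetical sketch: descriptions live in the system prompt,
# full skill bodies are loaded on demand via a tool call.
from dataclasses import dataclass


@dataclass
class Skill:
    name: str
    description: str  # one-liner, always in the system prompt
    body: str         # full SKILL.md content, loaded on demand


SKILLS = {
    "release-notes": Skill(
        name="release-notes",
        description="Draft release notes from the git log.",
        body="# Release Notes Skill\n1. Run `git log` ...\n",
    ),
}


def build_system_prompt(base: str) -> str:
    """Only the short descriptions cost context up front."""
    lines = [f"- {s.name}: {s.description}" for s in SKILLS.values()]
    return base + "\n\nAvailable skills:\n" + "\n".join(lines)


def load_skill(name: str) -> str:
    """The tool the model calls to pull a skill body into context."""
    skill = SKILLS.get(name)
    return skill.body if skill else f"Unknown skill: {name}"
```

The overhead versus inlining everything is one extra round trip (the `load_skill` call) per skill actually used, which is the trade-off at issue.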

I tend to be quite focussed with my Skill/Tool usage in general though, inviting them into context when needed rather than increasing the potential for model confusion.


Here you go:

Sorry, I misquoted the company; it was Vercel, not Cursor.

"A compressed 8KB docs index embedded directly in AGENTS.md achieved a 100% pass rate, while skills maxed out at 79% even with explicit instructions telling the agent to use them. Without those instructions, skills performed no better than having no documentation at all."

https://vercel.com/blog/agents-md-outperforms-skills-in-our-...


Gotcha -- yeah, it removes the tool-calling step so their content is always in context (noting they took action to try to reduce the size of that). The framing seems a little simplistic -- thanks for the link.


Anytime :)


fast-agent lets you do this as well (and has a skill in its default skills repo to help with automation, running in a container, or as an HF Job).


Yes -- skills live in a special gap between "should have been a deterministic program" and "model already had the ability to figure this out". My personal experience leaves me in agreement that minimal system prompts are definitely the way to go.


An excellent piece of writing.

One thing I do find is that subagents are helpful for performance -- offloading tasks to smaller models (gpt-oss specifically for me) gets data to the bigger model quicker.


A lot of those books are more about persuasion than motivation - they can look similar from a distance.


fast-agent has ACP support and works well with ollama. Once installed you can just use `toad acp "fast-agent-acp --model generic.<ollama-model>"`.


I quite like the look of this one - seems to fit somewhere between the rigid structure of MCP Elicitations and the freeform nature of MCP-UI/Skybridge.


Structured Output in this case refers to the output from the MCP Server Tool Call, not the LLM itself.


Yes. VSCode 1.101.0 does, as well as fast-agent.

Earlier I posted about mcp-webcam (you can find it), which gives you a no-install way to try out Sampling if you like.


The list here https://modelcontextprotocol.io/clients has a number of Host applications, frameworks etc.

