That’s actually an awesome idea, and it really helps cut down on wasted context: move repeatable instructions into a SKILL.md, and once they’re fully repeatable with no variability in their inputs, turn them into a tool. Rinse, repeat.
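A minimal sketch of that hardening step, with hypothetical names throughout (the changelog format and the 72-char rule are invented for illustration, not from any real SKILL.md): once an instruction has no judgment left in it, it can stop being prose the model re-reads every turn and become a plain function the agent calls.

```python
# Hypothetical example: a SKILL.md rule like "prefix changelog entries with
# the date, reference the PR, keep them under 72 chars" has no variability
# left, so it graduates from instructions into a deterministic tool.
from datetime import date

def format_changelog_entry(summary: str, pr_number: int) -> str:
    """What used to be prose guidance becomes checked, zero-context code."""
    entry = f"{date.today().isoformat()}: {summary} (#{pr_number})"
    if len(entry) > 72:
        raise ValueError("changelog entries must fit in 72 characters")
    return entry

if __name__ == "__main__":
    print(format_changelog_entry("Fix context overflow in planner", 1234))
```

The nice side effect is that the rule is now enforced rather than hoped for, and it costs zero tokens.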
Oh nice, and eventually you could turn the whole process into a regular app, cutting the LLM (and its inference step) out of the loop entirely and saving yourself the execution time.