
I answer those kinds of questions by piping my entire codebase into a long-context model (like Claude, o3-mini, or Gemini) and prompting it directly.

Here's a recent example:

    # files-to-prompt gathers the datasette/ and tests/ dirs (-e py: only
    # .py files, -c: Claude XML format); llm's -u flag prints token usage
    files-to-prompt datasette tests -e py -c | \
      llm -m gemini-2.0-flash-exp -u \
      'which of these files contain tests that indirectly exercise the label_column_for_table() function'
https://gist.github.com/simonw/bee455c41d463abc6282a5c9c132c...


Your codebase must be pretty small if it fits within the context of any major LLM. But very nice if it does!


https://github.com/bodo-run/yek

This is a more sophisticated tool for serializing your repo. Please check it out and let me know what you think.
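
For example, here's a sketch of using it the same way as the pipeline above (assuming yek streams the serialized repo to stdout when piped, and that you have the llm CLI installed):

    # yek serializes the repo to a single prompt-friendly stream;
    # pipe it straight into a long-context model
    yek src/ | \
      llm -m gemini-2.0-flash-exp \
      'summarize the architecture of this codebase'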


Yek is fantastic -- I've converted my whole team to using it for prompting. As input context windows keep growing, I think it'll only become more valuable -- I can already fit most of my team's code into Gemini 2.0.


Doesn't it get expensive very quickly if you don't use prompt caching?


I've had the occasional large prompt that cost ~30c. I often use GPT-4o mini if I'm going to ask follow-up questions, because then I get prompt caching without having to configure anything.
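
For follow-ups, something like this works (a sketch; llm's -c flag continues the most recent conversation, and OpenAI's automatic prompt caching discounts the repeated prefix):

    # the first question sends the whole codebase
    files-to-prompt datasette -e py -c | \
      llm -m gpt-4o-mini 'give me a high-level overview of this codebase'
    # -c continues the last conversation, so the large shared
    # prefix is resent and served from the provider's cache
    llm -c 'which modules are responsible for facets?'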



