I answer those kinds of questions by piping my entire codebase into a long-context model (Claude, o3-mini, or Gemini) and prompting it directly.
Here's a recent example:
files-to-prompt datasette tests -e py -c | \
llm -m gemini-2.0-flash-exp -u \
'which of these files contain tests that indirectly exercise the label_column_for_table() function'
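For anyone who hasn't used files-to-prompt: the first step of that pipeline just walks a directory tree and concatenates matching files, with each file's path included so the model can cite filenames in its answer. A minimal sketch of the idea (the output format here is my own approximation, not the tool's actual format):

```python
# Rough approximation of the files-to-prompt step: walk a directory,
# keep only files with a given extension, and emit each file's path
# followed by its contents, so the model can refer back to filenames.
import os

def files_to_prompt(root: str, ext: str = ".py") -> str:
    chunks = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.endswith(ext):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="replace") as f:
                    chunks.append(f"{path}\n---\n{f.read()}\n---")
    return "\n".join(chunks)
```

The real tool handles more (ignore patterns, the -c Claude-XML format the command above uses), but the core trick is just "paths plus contents in one big string."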
Yek is fantastic -- I've converted my whole team to using it for prompting. As input context windows keep increasing, I think it'll just keep becoming more and more valuable -- I can put most of my team's code in Gemini 2.0 now.
I've had the occasional large prompt cost ~30c. For follow-up questions I often use GPT-4o mini, because I get prompt caching without having to configure anything.
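A quick way to sanity-check cost before sending a huge prompt is a back-of-the-envelope estimate. This sketch assumes a price of $0.15 per million input tokens (GPT-4o mini's published input rate as I understand it; verify current pricing) and the crude ~4-characters-per-token heuristic rather than a real tokenizer:

```python
# Back-of-the-envelope input cost for a large prompt.
# Assumptions (not from the comment above): $0.15 per million input
# tokens, and roughly 4 characters per token.
def estimate_cost_usd(prompt_chars: int,
                      usd_per_million_tokens: float = 0.15) -> float:
    est_tokens = prompt_chars / 4  # rough heuristic, not a tokenizer
    return est_tokens / 1_000_000 * usd_per_million_tokens
```

By this estimate a 4-million-character prompt (~1M tokens) would run about 15 cents at that rate; pricier models explain how a single prompt can reach ~30c.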
Here's a recent example:
https://gist.github.com/simonw/bee455c41d463abc6282a5c9c132c...