
I feel like I'm taking crazy pills sometimes. You have a file with a set of snippets and you prefer to ask the AI to hopefully run them instead of just running them yourself?


The commands aren't the special sauce; it's the LLM's ability to look at the outputs of all those commands and correlate the data. You could accomplish the same thing by prefilling a gigantic context window with all the logs, but when the commands are presented ahead of time, the LLM can "decide" which one to run based on what it needs to do.


The snippets are examples. You can ask hundreds of variations of similar but different complex questions, and the LLM will adjust the examples to fit.

I don't have a snippet for, "find all 500's for the meltano service for duckdb syntax errors", but it'd easily nail that given the existing examples.
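To make that concrete, the kind of thing it would spit out is roughly this (a sketch only; the log path and field names are made up, not my actual setup):

    # hypothetical: pull 500s from the meltano service's log that mention
    # a duckdb syntax error (log path and "status" field are assumptions)
    grep '"status": 500' /var/log/meltano/service.log \
      | grep -i duckdb \
      | grep -i 'syntax error'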


But if I know enough about the service to write examples, most of the time I already know the command I want, which means less typing, faster results, lower cost, and a lot less wasted electricity.

In the other cases I see what the computer outputs, LEARN, and then the find-it-for-me functionality just isn't needed anymore; next time I just type the command.

I don't get it.


LLMs are really good at processing vague descriptions of problems and offering a solution that's reasonably close to the mark. They can be a great guide for unfamiliar tools.

For example, I have a pretty good grasp of regular expressions because I'm an old Perl programmer, but I find processing json using `jq` utterly baffling. LLMs are great at coming up with useful examples, and sometimes they'll even get it perfect the first time. I've learned more about properly using `jq` with the help of LLMs than I ever did on my own. Same goes for `ffmpeg`.
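For instance, this is the kind of `jq` one-liner it'll produce for me on the first try (the field names here are invented for illustration):

    # list the titles of all items in a JSON array whose status is 500
    jq '.[] | select(.status == 500) | .title' items.json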

LLMs are not a substitute for learning. When used properly, they're an enhancement to learning.

Likewise, never mind the idiot CEOs of failing companies looking forward to laying off half their workforce and replacing them with AI. When properly used, AI is a tool to help people become more productive, not replace human understanding.


Yes. I'm not the poster but I do something similar.

Because now the capabilities of the model grow over time. And I can ask questions that involve a handful of those snippets. When we get to something new that requires some doing, it becomes another snippet.

I can offload everything I used to know about an API and never have to think about it again.


The AI will run whatever command it figures out might work, which might be wasteful and taint the context with useless crap.

But when you give it tools for retrieving all client+server logs combined (for a web application), it can use them and get exactly what it needs as simply as possible.

Or it'll try to find a function by digging around code files with grep; if you provide a tool that just lists all functions, their parameters, and their locations, it'll find the exact spot in one go.
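As a rough sketch, a "list all functions" tool for a Python codebase could be as dumb as this (the src/ layout is an assumption; a real setup might use ctags or an AST walk instead):

    # print file, line number, and signature for every def in the tree
    grep -rn --include='*.py' -E '^[[:space:]]*def [A-Za-z_][A-Za-z0-9_]*\(' src/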


You don't ask the AI to run the commands. You say "build and test this feature" and then the AI correctly iterates back and forth between the build and test commands until the thing works.



