
Counterargument: This will be solved mostly by documentation.

Historically, most of my SO usage boils down to: 1) finding how to implement something esoteric, which results in finding a clever solution or an under-described feature flag in a function/tool 2) finding a workaround bugfix for a broken feature in some software (>70% of the time by finding a link to a github issue in the description)

Consider that LLMs are functionally an information-retrieval function containing natural-language program subroutines. In this context, a web-browser-enabled LLM should be able to go to the source and return a functional answer even when the model is not pretrained on that source.
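The retrieve-then-answer loop described above can be sketched in a few lines. This is a toy stand-in, not a real system: the keyword lookup here plays the role of the browsing tool, the hard-coded docs strings play the role of live documentation, and all names (frobnicate, Widget) are hypothetical.

```python
# Toy sketch of "LLM as retrieval over documentation": the answer is
# grounded in fetched docs rather than in pretrained memory.
# All tool/function names below are hypothetical examples.

DOCS = {
    "frobnicate": "frobnicate(x, deep=False): set deep=True to recurse into nested structures.",
    "widget": "Widget.render() raises on an empty config; pass config={} explicitly.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword match standing in for a web-browsing tool."""
    words = set(query.lower().split())
    return [doc for term, doc in DOCS.items() if term in words]

def answer(query: str) -> str:
    """Return an answer only if documentation backs it up."""
    hits = retrieve(query)
    if not hits:
        return "no documentation found"
    return hits[0]

print(answer("how do I frobnicate nested data?"))
```

The point of the sketch is the failure mode: with no matching docs, the system says so instead of answering from stale training data, which is why good documentation carries most of the weight.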

So as long as there is good documentation for a particular piece of software, we should theoretically be able to generalize to tools that didn't exist at training time. At least long enough for a new training dataset to accumulate from people hitting the problem for the first time.

Side note: In some sense, the foundation model labs are aggregating Question-Answer pairs (the kind that typically came from stackoverflow) from their user data. I wouldn't be surprised if they created a stackoverflow clone at some point to open-source the dataset creation and labeling efforts.

This is basically what Community Notes is for X, and now Facebook.



Counter-counterargument: "So as long as there is good documentation" feels a bit like staking success on the least important deliverable to the people funding a project, and the least interesting process step to the people building it, going really well.



