
I see LLMs (AI, if you will) as both regression and progression, and that's why I think people are so divided on them.

It's progression for obvious reasons: it automates natural language processing (understanding human language), which even five years ago was an unsolved problem.

However, the regression part is less obvious. I think human progress since the Enlightenment has been marked by increasingly externalizing human thought and judgement. That is, instead of relying on the judgement of an authority (or on intuition, lived and anecdotal experience, etc.), we increasingly demanded transparency and rigour in our thinking. This was facilitated by the printing press and the development of the scientific method, but it is visible everywhere (to the extent that even philosophy split off an analytic branch).

However, with LLMs we return to internalized thinking based on judgement, this time the judgement of an algorithm. There is no way to extract a formal argument from an LLM, and so no way to ensure correctness in any rigorous sense.

Whether you're an adopter or a skeptic depends on where you stand on this issue (in a way, it's similar to the type-checking debate in programming). Personally, I think the correct way to use LLMs is to formalize the problem and then solve it using formal methods.
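To make the "formalize, then verify" idea concrete, here's a minimal Python sketch: treat whatever the LLM produces as an untrusted candidate and check it against a specification you wrote yourself. `llm_candidate_sort` is a hypothetical stand-in for generated code, and the exhaustive check is a toy form of bounded verification, not a full formal-methods pipeline.

```python
# Sketch of "formalize, then verify": the LLM's output is an untrusted
# candidate; correctness comes from checking it against our own spec.
from collections import Counter
from itertools import product

def llm_candidate_sort(xs):
    # Hypothetical stand-in for code an LLM returned; we do not trust it.
    return sorted(xs)

def is_sorted(ys):
    return all(a <= b for a, b in zip(ys, ys[1:]))

def meets_spec(f, xs):
    # Formal spec of sorting: output is ordered and a permutation of input.
    ys = f(list(xs))
    return is_sorted(ys) and Counter(ys) == Counter(xs)

def verify_bounded(f, max_len=4, domain=(0, 1, 2)):
    # Bounded verification: exhaustively check every input up to max_len.
    # A real pipeline might discharge the same spec with an SMT solver.
    for n in range(max_len + 1):
        for xs in product(domain, repeat=n):
            if not meets_spec(f, xs):
                return False, list(xs)
    return True, None

ok, counterexample = verify_bounded(llm_candidate_sort)
```

A broken candidate (say, one that returns its input unsorted) gets rejected with a concrete counterexample, which is exactly the transparency the LLM alone can't give you.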


