
Only if you use LLMs wrong. Today's models have deep research, which will generate a comprehensive analysis with proper citations.


I feel like I should point out that's the dialog engine, not the model itself.


Yes, I think that is understood by everyone.


You'd be surprised. A number of fairly technical people I know, who are just not that familiar with ML, got confused by this and believed the models were actually being tuned daily. I don't think that's universally understood at all.

That has actual practical implications and isn't just pedantry. People might like some model and avoid better dialog engines like Perplexity, believing they'd have to switch models.


I meant "everyone" in the context of HN ;)


Sorry, but you're underestimating the number of people who come here, and the range of backgrounds (and interests) they have.


I think you either didn't read my response or missed the point. Whether or not the LLM output is useful, the learning outcome is hugely impacted. Negatively.

It's like copying answers on your homework assignments. It looks like it gets the job done, but the point of a homework assignment is not the result you deliver; it's the process of creating that result that makes you learn something.



