Would this situation have been handled differently if a human support rep gave them incorrect information? I suspect they would have honored it and then put the rep (or all reps) through more training.

Another thought experiment: If a portion of the company's website was at least partially generated with an LLM, does that somehow absolve the company of responsibility for the content they have on their own site?

I think a company is free to present information to its customers that is less than 100% accurate -- whether via chatbots or via something else silly, like untrained, poorly paid support reps -- but it has to live with the risks (liability for mistakes; alienated customers) to get the benefits (low operating cost).
