I think it's better to say that LLMs only hallucinate. All the text they produce is entirely unverified. Humans are the ones reading the text and constructing meaning.
