I'm going to leave the above standing even with downvotes. It's the first time I've tried to express quite this opinion, and it's definitely a tricky one to get right.

The thing is, we need ways to reason about how LLMs interact with human emotions.

Sure: the consciousness and sentience questions are fun philosophy. Meanwhile, the affect-processing side of things on its own is becoming important to safety engineering, and it can't really be ignored much longer.

This is pretty much in line with what Anthropic has been saying all along, of course; but other companies need to stop ignoring it, because people are getting hurt.

I hope at least this much is uncontroversial.
