Hacker News

We ran the study with US-based participants rating English-language generated profiles only. I believe the findings would generalize to other cultures, though: we found that people associate certain language features, such as first-person speech, family topics, and talking about past events, with humanity. These false intuitions may have developed through interpersonal interactions or sci-fi movies, and it's unlikely that e.g. a French speaker would have acquired "better" intuitions. As for grammar, people were more likely to rate text with grammatical issues as generated. It turns out that heuristic didn't work either, though: self-presentations written by people had more grammatical issues than those generated by GPT-3. That being said, the strongest generative models to date are trained primarily on English data, and models for other languages don't perform quite as well yet.

