Personally, I'm a pessimist on this front. People assert that a model-in-training can effortlessly sift the real data out of mountains of LLM spam. But then people also assert that AI detectors do not work and can never work, since LLM output is simply too good, and any watermarking can be defeated by a light paraphrasing step. You can't have it both ways: if detection is impossible for a dedicated detector, it's also impossible for a training pipeline.

I can only await companies' attempts to publish enough junk to create an 'alternative truth' for new LLMs to believe in. The worst part is that it might even work.
