This is sort of interesting, because normally people want GPT to suppress fringe views (e.g., the vast body of evidence and literature is pro-vaccine, and they don't want it promoting low-evidence anti-vaccine views). But in this case, the fringe story is, by consensus on HN, the correct one despite its smaller mindshare.
I guess the question is whether GPT is supposed to be an emulation of personhood or simply a software tool. E.g., I don't think we'd want a software tool that lies, steals, or cheats, despite the fact that people do these things.
How do you generalize this principle?