No.

A genuinely sympathetic paraphrase might be:

"Machine superintelligence may or may not be controllable. If we do nothing to regulate it, or to prevent horrible outcomes, we will with X > [too big] probability find ourselves doomed.

We need to find a way to reduce X. I propose that regulation is at least unlikely to be counter-productive, and may be strictly incrementally useful."
