The opinion that AI will destroy the world is absurd, and it does not deserve serious consideration in any research community. It's only in strange Rationalist corners, and in the Silicon Valley companies that echo those corners, that this is considered at all "hotly debated."


Why do you think it's absurd? If we do eventually create an AGI that is significantly smarter than us in most domains, why should we expect to be able to keep it under control and doing what we want?


