
  "AI is not going to ... destroy the world."
Bare assertion fallacy? This question is hotly debated, and I don't believe it can be dismissed so easily. It is not obvious that aligning something much smarter than us will be a piece of cake.


Should I really add "in my opinion" to every sentence I write? We are a smart bunch here. We can figure out when statements are stated without nuance in order to provoke a reaction.

We’re talking about the future here and a fairly complex one at that. So obviously I don’t know more than the next guy.


It's a really absurd opinion that AI will destroy the world, and one that does not deserve serious consideration in any research community. It's only in strange Rationalist corners, and the Silicon Valley companies that echo those corners, that this is considered "hotly debated" at all.


Why do you think it's absurd? If we do eventually create an AGI that is significantly smarter than us in most domains, why should we expect to be able to keep it under control and doing what we want it to do?



