This post feels childish and passive aggressive. Just make your argument, if you have one.


Exactly my thoughts. There's a hint of condescension and a lack of self-awareness here: the camp Sam Altman seems to belong to ("regulate AI, it's an existential threat to humanity!") is making just as much of a prediction about the future as anyone else. Yet somehow he seems to be subtly implying that he's "more" correct.

Marc Andreessen has been relatively level-headed on the topic of AI recently on Twitter, and it would be nice to see other industry figureheads be less emotionally invested and more scientifically rigorous in their assessments of the industry. The debate is devolving into an ego battle (especially with a post like this!), which is unfortunate.

Edit: Additionally, Altman appears to be attacking a strawman with this article. "Superhuman" intelligence already exists: the emergent intelligence of society, amplified by technology, is by definition superhuman. What's less realistic is anticipating a human-like artificial intelligence that would in any way represent an existential threat to the human race. There are many, many problems with the latter argument, from technological, philosophical, economic, and evolutionary perspectives.


