Hacker News

> When the people who have real, expert knowledge of something all tell you one thing, and the people who have something to gain from getting your attention by promoting sensational opinions and cherry-picked facts tell you something else, you should rationally assess their motives and weight that when deciding whose views more closely approximate the truth.

Those with deep expertise in machine intelligence (and related foundational concepts, such as philosophy of mind, linguistics, psychology, neuroscience, etc.) definitely do not "all tell you one thing" (hence the "ethics board" for DeepMind). I won't claim to be such an expert (though I have multiple degrees on related topics), but if you, e.g., review Nick Bostrom's C.V., and read his work, you'll find few people more qualified to comment. He's brought a very sober, clear-headed, and decidedly non-sensationalistic assessment to these issues, and devoted an entire book to the risk posed by "Superintelligence". When very smart, knowledgeable, thoughtful, and seemingly well-adjusted people are willing to put themselves "out there", it's worth paying attention.

Exponential change looks tiny until it's really not. If recursively self-improving A.I. is possible, it might only require one relatively short bit of code to get off the ground, and then it's basically game-over (depending on the A.I.'s objective function). Many people possess imaginations rich enough to see how this could come to be.
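The "exponential change looks tiny until it's really not" claim is really just compound growth. A minimal toy sketch (the starting level, growth rate, and step count are invented for illustration; nothing here models a real AI system) shows how a fixed proportional improvement rule produces negligible absolute gains early on and enormous ones later:

```python
# Toy illustration of compound (exponential) growth: each step, "capability"
# grows by a fixed fraction of itself -- a crude stand-in for the
# "recursive self-improvement" assumption. All numbers are made up.

def capability_over_time(start=1.0, rate=0.5, steps=30):
    """Return capability levels over time under proportional growth."""
    levels = [start]
    for _ in range(steps):
        levels.append(levels[-1] * (1 + rate))
    return levels

levels = capability_over_time()

# Over the first ten steps the absolute gain looks modest...
early_gain = levels[10] - levels[0]
# ...but the very same rule later adds more in a single step
# than it did in the first ten steps combined.
late_gain = levels[30] - levels[29]

print(f"gain over steps 0-10: {early_gain:.1f}")
print(f"gain in step 29-30:   {late_gain:.1f}")
```

The point is only about the shape of the curve: under any fixed proportional growth rate, the early part of the trajectory is indistinguishable from "nothing is happening" on an absolute scale.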

Further, claiming that something which has a clear and rational path to becoming dangerous doesn't "have the potential to cause such great harm", especially when lots of relevant trend lines are nearing vertical, is extremely foolish.

It is incredibly easy to unsympathetically criticize another viewpoint, especially when that viewpoint is outside the mainstream. Those espousing such "sensational opinions" rarely win more friends than they lose, as your comment (and many others') attests.



> something which has a clear and rational path to becoming dangerous

But this is exactly the point: it DOESN'T have a clear and rational path. Go read Superintelligence again, or go read Global Catastrophic Risks or any of the other books like "Our Final Invention." All of it, across the board, is wild speculation about paperclip maximizers and out-of-control drones.

There is no path; no one has a path -- not even AGI researchers, the people trying to build the thing, for god's sake!


... or, for that matter, "What Computers Still Can't Do" by Hubert Dreyfus.

> There is no path, no one has a path

This seems like a very difficult statement to support, a claim that is consequently far less rational than, say, "deep belief networks' ability to automatically extract meaning from real-world data will increase in scope to encompass broader and broader domains, eventually including natural language."

We have billions of examples of human-level intelligence walking around. Humans aren't magical, and our ability to create computer simulations of real world phenomena is steadily increasing.

Furthermore, the past decade has seen multiple machine learning triumphs that many AI researchers thought were 25+ years away: self-driving cars, machine translation, high accuracy speech recognition, visual image content extraction. We have been continuously surprised, and these surprises are unlikely to stop. There's no reason to believe that human-level intelligence is particularly special or difficult to achieve -- those asserting otherwise have the higher burden of proof.


> This seems like a very difficult statement to support

Notice, I am not saying it is not plausible or realistic; I think it is. I also think there is a fairly short time horizon (<100 years) based on the current state of computing.

That doesn't mean we know what the path is, though. Will it come through scaled artificial neural networks (ANNs)? Maybe whole-brain emulation (WBE)? Will it be an emergent property of all the routers in the world exchanging state information? No one knows.

> Furthermore, the past decade has seen multiple machine learning triumphs that many AI researchers thought were 25+ years away: self-driving cars, machine translation, high accuracy speech recognition, visual image content extraction.

How many times does the professional AI community have to repeat this: progress on narrow AI projects does not necessarily put us on a trajectory toward AGI. Yann LeCun JUST REITERATED THIS again last week. Seriously, how many times does it have to be said for people to understand it?

Yes, there is progress in machine learning, but those advances say almost nothing about Artificial General Intelligence, which is orders of magnitude harder.

So again, there is no PATH TO AGI. No one can sketch a priori which approach, if any, will get us there, because there is so much we don't know about intelligence generally and about all of the subproblems within it.


Many know a lot about intelligence, but it's piecemeal and has not been adequately integrated -- even if there were an excellent theory/model/account, difficulties of implementation, testing, or comprehension could delay (or even prevent!) such theories from gaining popular acceptance (see, e.g., conceptual blending). I chalk this up to epistemological and organizational problems as much as to complexity and the difficulty of acquiring data.

Further, there seems to be a fallacy implicit in the line of thought expressed in your comment, along the lines of: "because there's no generally agreed upon positive account of how cognition works, AGI is impossible in the near term." The fact is, there does not need to be any generally agreed upon positive account of intelligence for us to be worried about AGI. Excellent accounts of how intelligence works could be contained in the minds of a few researchers who aren't going to the trouble of publishing them and proving them to others. Instead, they're just hard at work on the highest-payoff activity: designing software that realizes and proves their vision/idea.

We have little idea of the progress such teams are making, or the goodness of the cognitive models they're working from. And only one of these individuals/teams needs to be right.


> "because there's no generally agreed upon positive account for how cognition works, AGI is impossible in the near term."

That's a mischaracterization: it's completely plausible that we could get to AGI without emulating cognition at all. So that is explicitly not the point I am making, and no one is even stating as much.

You said it yourself though:

> but it's piecemeal and has not been adequately integrated

Integration is the foundation of GENERAL intelligence, and that is exactly my point. How learning transfers across domains is a black box -- a total black box right now -- which means we can't build a roadmap to it without probing the edges more.

> Excellent accounts of how intelligence works can be contained in the minds of a few researchers who aren't going to the trouble of publishing them and proving them to others. Instead, they're just hard at work on the highest payoff activity: designing software that realizes and proves their vision/idea.

Hooray, a hypothesis! Is there anything you can point to that would support the idea of lone-wolf AGI developers? In my study there isn't, primarily because of computational and mathematical requirements that take a community to support. And even in such a case, there is basically nothing we can do about it, because it's unknowable -- like lone-wolf terrorists. So practically it's not worth discussing. Note also that this isn't even what Bostrom et al. are discussing.



