
This seems like a very difficult statement to support

Note that I am not saying it is implausible or unrealistic; I think it is both. I also think there is a fairly short time horizon (<100 years) given the current state of computing.

That doesn't mean we know what the path is, though. Will it come through scaled ANNs? Maybe whole-brain emulation (WBE)? Will it be an emergent property of all the routers in the world exchanging state information? No one knows.

Furthermore, the past decade has seen multiple machine learning triumphs that many AI researchers thought were 25+ years away: self-driving cars, machine translation, high accuracy speech recognition, visual image content extraction.

How many times does the professional AI community have to repeat this: narrow AI projects do not necessarily have trajectories toward AGI. Yann LeCun JUST REITERATED THIS again last week. Seriously, how many times does it have to be said before people understand it?

Yes, there is progress in machine learning, but those advances say almost nothing about Artificial General Intelligence, which is orders of magnitude harder.

So again, there is no PATH TO AGI. No one can sketch a priori what approach, if any, will get us there, because there is so much we don't know about intelligence generally and about all of the subproblems within it.



Many know a lot about intelligence, but the knowledge is piecemeal and has not been adequately integrated -- even if there were an excellent theory/model/account, difficulties of implementation, testing, or comprehension could delay (or even prevent!) such theories from gaining popular acceptance (see, e.g., conceptual blending). I chalk this up to epistemological and organizational problems as much as to ones of complexity and the difficulty of acquiring data.

Further, there seems to be a fallacy implicit in your comment, along the lines of: "because there's no generally agreed upon positive account for how cognition works, AGI is impossible in the near term." The fact is, there does not need to be any generally agreed upon positive account of intelligence for us to worry about AGI. Excellent accounts of how intelligence works could be contained in the minds of a few researchers who aren't going to the trouble of publishing them and proving them to others. Instead, they're just hard at work on the highest-payoff activity: designing software that realizes and proves their vision/idea.

We have little idea of the progress such teams are making, or the goodness of the cognitive models they're working from. And only one of these individuals/teams needs to be right.


"because there's no generally agreed upon positive account for how cognition works, AGI is impossible in the near term."

That's a mischaracterization, as it's completely plausible that we can get to AGI without emulating cognition at all. So that is explicitly not the point I am making, and no one is stating as much.

You said it yourself though:

but it's piecemeal and has not been adequately integrated

Integration is the foundation of GENERAL intelligence, which is exactly my point. How learning transfers across domains is a black box -- a total black box right now -- which means we can't build a roadmap to it without probing the edges more.

Excellent accounts of how intelligence works could be contained in the minds of a few researchers who aren't going to the trouble of publishing them and proving them to others. Instead, they're just hard at work on the highest-payoff activity: designing software that realizes and proves their vision/idea.

Hooray, a hypothesis! Is there anything you can point to that would support the idea of lone-wolf AGI developers? In my study there isn't, due primarily to computational and mathematical requirements that take a community to support. Even in such a case there is basically nothing we can do about it, because it's unknowable -- like lone-wolf terrorists. So practically speaking, it's not worth discussing. Note also that this isn't even what Bostrom et al. are discussing.



