Hacker News

The difficulty with general intelligence isn't so much that any particular problem is hard to solve, but that creating a way to generalize from specific problems to solving any arbitrary problem is very difficult. Given enough time, researchers do pretty well at creating "intelligent" machines, but these are all focused on very specific, narrow problems. For example, we can classify tweet sentiment pretty well, but we can't summarize documents yet.

The general cases become more difficult because success is less well defined and the number of possibilities is usually larger. With tweet sentiment, for example, there aren't that many possible results (in the simplest case it's just binary: positive or negative). But say you want to summarize a document; that's much harder, since how do you even know when a document is well summarized? And assuming you can know when a document is well summarized, how do you get enough data to train your model on?
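To make the contrast concrete, here is a minimal Python sketch; the toy word lists and the unigram-overlap metric are my own illustrative assumptions, not anything from the thread. Binary sentiment has a crisp success criterion (the label is right or wrong), while summary quality can only be approximated by a proxy, such as word overlap with a human-written reference (the idea behind metrics like ROUGE):

```python
# Toy binary sentiment: success is well defined, the label is positive or negative.
# These word lists are made up for illustration.
POSITIVE = {"love", "great", "good", "happy"}
NEGATIVE = {"hate", "awful", "bad", "angry"}

def tweet_sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

# Summarization has no crisp criterion, so we fall back on a proxy:
# unigram overlap with a human reference summary (the idea behind ROUGE-1 recall).
def unigram_recall(candidate, reference):
    cand = set(candidate.lower().split())
    ref = set(reference.lower().split())
    return len(cand & ref) / len(ref) if ref else 0.0

print(tweet_sentiment("I love this great phone"))
print(unigram_recall("cats sleep a lot", "cats sleep most of the day"))
```

Note that the overlap score never answers "is this a good summary?", only "how similar is it to one particular reference?", which is exactly the evaluation gap the paragraph above describes.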

But even the more general case of summarizing documents is still pretty narrow: it's just dealing with text. Now what if you want your document-summarizing robot to learn when to drive your car to pick you up from your doctor's appointment? Now your robot needs to know about your schedule, it needs to know about cars, it needs to know about traffic laws, and so on.

So the more you generalize, the more data your machine learning algorithm needs in order to draw the proper conclusions. That data either needs to be hard coded into it or otherwise fed to it so that it can learn. If it's hard coded, you have a labor problem: it takes a lot of time for someone to write down, in a way a machine can understand, all the things we take for granted. If you try to have it learn, you're back to the problem of needing to specify what a good and bad outcome is, as with document summarization above.

So the reason I don't think scaling up is possible is that we don't have good ways to measure what a good outcome looks like.

I don't expect a breakthrough to come; rather, I see steady progress as the norm. But I think what could improve the capabilities of current AI and ML techniques is finding some way of baking basic facts and logic into our programs. For example, you were born with the ability to recognize when someone is angry with you or glad to see you, to reason spatially, etc. If we can do something analogous with machines, I think that would be a big improvement. But again, it's really hard to say how to do this, since if we hard code it we have problems, and if we try to learn it we also have problems, as I noted earlier.



Not sure you really answered the question, which I found to be a very interesting one. The OP starts with these two assumptions, which it sounds like you agree with:

1) SMI is likely to occur at some point in the future.

2) SMI is likely to pose a significant threat to humanity.

Given those, it follows that we should be worried and plan for how to deal with SMI at some point in the future, ideally before it is developed. So the question is, what does the world need to look like before we worry/plan for it? And why isn't now a good time to start?


I don't think it is worth thinking about at this point since it is so speculative.



