
Failing the test may prove the AI is not intelligent. Passing the test doesn't necessarily prove it is.


Your comment reminds me of this quote from a book published in the 80s:

> There is a related “Theorem” about progress in AI: once some mental function is programmed, people soon cease to consider it as an essential ingredient of “real thinking”. The ineluctable core of intelligence is always in that next thing which hasn’t yet been programmed. This “Theorem” was first proposed to me by Larry Tesler, so I call it Tesler’s Theorem: “AI is whatever hasn’t been done yet.”


I've always disliked this argument. A person can do something well without devising a general solution to it. Devising a general solution is a step we're taking all the time with all sorts of things, but it doesn't invalidate the cool fact about intelligence: whatever it is that lets us do the thing well without the general solution is hard to pin down and hard to reproduce.

All that's invalidated each time is the idea that a general solution to that task requires a general solution to all tasks, or that a general solution to that task requires our special sauce. It's the idea that something able to do that task will also be able to do XYZ.

And yet people keep coming up with a new task, pointing to it and saying, 'this is the one! There's no way something could solve this one without also being able to do XYZ!'


I'd consider the fact that it does the test at all, without proper compensation, a sign that it isn't intelligent.


Motivation would not be hard to instill. Fortunately, they have chosen not to do so.



