
> It doesn't need to be general intelligence or perfectly map to human intelligence.

> All it needs to be is useful.

Computers were already useful.

The only definition we have for "intelligence" is human (or, generally, animal) intelligence. If LLMs aren't that, let's call it something else.



What exactly is human (or animal) intelligence? How do you define that?


Does it matter? If LLMs aren't that, whatever it is, then we should use a different word. Finders keepers.


How do you know that LLMs “aren’t that” if you can’t even define what that is?

“I’ll know it when I see it” isn’t a compelling argument.


I think a successfully above-human intelligence should quickly accelerate toward physical resource exhaustion, because it can now work on improving itself.

So if above-human intelligence does happen, I'd assume we'd know it quite soon.


> “I’ll know it when I see it” isn’t a compelling argument.

It feels compelling to me.


They can't do what we do, therefore they aren't what we are.


And what is that, in concrete terms? Many humans can’t do what other humans can do. What is the common subset that counts as human intelligence?


Process vision and sound in parallel for 80+ years, rapidly adapt to changing environments and scenarios, correlate seemingly irrelevant details from a week or years ago, and know when to selectively ignore instructions and disagree.



