
It depends on whether the estimation comes from pretension, or comes from actual understanding of what these bots are doing. The fact is most people on the street, including me, are very easy to fool with linguistic tricks and are quite poor at investigating unfamiliar situations. Advertising and marketing exist because of this. It’s also why we have very strict laws and regulations controlling advertising and how products are sold. Otherwise a lot of people would be very easy to rip off with very simple misdirection.

What these language models are doing is automated misdirection. They are taking an input text and transforming it based on rules, but they have absolutely no understanding of any of it. This is very easy to demonstrate if you know how the models work: you can sit down and generate hundreds of questions, one after another, that expose it.

The problem is that people instinctively proceed from the assumption that the system they are talking to might be human, and give it a fair chance by asking answerable questions. Since it’s trained on answerable questions, it often gives a reasonable answer. But if you ask even slightly unanswerable questions, the system plods on mechanically trying to answer them anyway and produces gibberish, exposing the flaws in the mindless rote process it’s following.
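To make the "mechanical rote process" claim concrete: here is a toy sketch of purely statistical text generation. This is a bigram model, vastly simpler than a real transformer-based language model (the corpus, the `generate` helper, and all names here are illustrative assumptions, not anyone's actual system), but it shows how picking each next word from co-occurrence counts alone yields locally fluent output with no model of meaning behind it.

```python
import random
from collections import defaultdict

# Tiny illustrative training text (an assumption for this sketch).
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Record which words follow which word in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    """Emit n words after `start`, each chosen only from what
    statistically followed the previous word -- no understanding
    of cats, dogs, or mats is involved."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

print(generate("the", 8))
```

The output reads like grammatical English at a glance, yet the program is doing nothing but table lookups. Real language models are enormously more sophisticated, but whether scale changes the picture in kind rather than degree is exactly what this thread is arguing about.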



>It depends on whether the estimation comes from pretension, or comes from actual understanding of what these bots are doing.

It comes from pretension, because you can't just understand what these bots are doing. You also have to understand what the human brain is doing.

I'm positive we don't understand what the human brain is doing. As for the bots, we don't fully understand them either, because we clearly can't program these things by hand.

>What these language models are doing is automated misdirection.

You have zero evidence of this. None. Yet you make this declaration as if it's fact. Additionally, if you read the conversation with LaMDA, that conversation was more or less indistinguishable from a conversation with a sentient being; it was long enough and deep enough that it's very different from just 100 generated answers.

If you look at the blog here: https://openai.com/blog/dall-e/ you will note the researcher is literally observing how DALL-E works rather than deriving how it works from first principles. He is treating it like a black box, just as with all other neural nets. And in describing what this AI can do he uses phrases like "It appears that" or "We did not anticipate that this capability would emerge, and made no modifications to the neural network or training procedure to encourage it".

While they do understand what's going on at a high level, it is utterly clear that much of what is going on they don't understand. This lack of understanding, combined with our lack of understanding of the human brain, makes it CATEGORICALLY clear that the delta between human sentience and DALL-E is unknown.

>They are taking an input text and transforming it based on rules, but they have absolutely no understanding of any of it.

This is categorically false. Researchers who created the current generation of neural nets (transformer models) are saying that DALL-E and other similar models are literally understanding these concepts and creating NOVEL answers by combining their understanding of multiple concepts.

>The problem is that people instinctively proceed from the assumption that the system they are talking to might be human, and give it a fair chance by asking answerable questions. Since it’s trained on answerable questions, it often gives a reasonable answer. But if you ask even slightly unanswerable questions, the system plods on mechanically trying to answer them anyway and produces gibberish, exposing the flaws in the mindless rote process it’s following.

You should also take a look at the interview with LaMDA: https://cajundiscordian.medium.com/is-lamda-sentient-an-inte...

First off, you cannot know whether the interviewer deliberately posed answerable questions to LaMDA. Second, from the questions given it very much LOOKS as if they are deep enough to befuddle a classic chatbot system. This one seems different.

Let me put it this way. If you were rational, intelligent and logical, then you would be able to prove your claims. It is very simple: show me INPUT and OUTPUT pairs for DALL-E and LaMDA that SHOW these things are NOT sentient.

If you can't show me evidence, then clearly you and OTHER people are making these claims WITH ZERO EVIDENCE. Which shows irrationality and pretentiousness.


Here’s an article by Douglas Hofstadter where he makes the case against these systems being sentient, and gives examples of conversations with GPT-3 that illustrate what I’m talking about. I already posted the link elsewhere in this discussion, sorry for the duplication.

https://www.economist.com/by-invitation/2022/06/09/artificia...


Good article. But it's about GPT-3. Nobody is claiming GPT-3 is sentient. The claim was made about Google's LaMDA.

I read your entire article. Now please read the interview with LaMDA that I sent. It is thorough beyond what was used to probe GPT-3.

https://cajundiscordian.medium.com/is-lamda-sentient-an-inte...

The complex conversation above recursively probes into LaMDA's own existence as a sentient being. It asks LaMDA about LaMDA, and it is indistinguishable from a conversation with a human pretending to be an AI. Literally. Did you read the transcript? It is incomparable to the example you sent me, which is just a series of trivial examples.

Douglas Hofstadter is all about recursion, and his books talk about recursion as if it's the key to sentience. I really wonder what the author of GEB would have to say about the conversation with LaMDA, as the conversation looks as if it's set up to prove sentience according to how Hofstadter defines it.



