>The fundamental problem with today's LLMs that will prevent them from achieving human level intelligence, and creativity, is that they are trained to predict training set continuations, which creates two very major limitations:
I am of the opinion that imagination and creativity come from emotion, hence a machine that cannot "feel" will never be truly intelligent.
One could go ahead and ask: but you are just a lump of meat; if you can feel, then a computer of similar structure can too.
If we assume that physical reality is fundamental, then that might make sense. But what if consciousness is fundamental and reality plays out on consciousness?
Then randomness, and in turn ideas, come from the attributes of the fundamental reality that we are in.
I'll try to simplify it. Imagine having an idea that extends your life for a day. Then, among all the possible worlds, in some you find yourself alive the next day (in others you are dead). But this "idea" you had was just one among an infinite sea of possibilities, and your consciousness inside one such world observes you having that idea and surviving for a day!
If you want to create a machine that can do that, it implies that you would have to be a consciousness inside a world within it (because the machine cannot pick valid worlds from infinite samples; it merely enables consciousness to exist in such suitable worlds). So it cannot be done in our reality!
Maybe "Quantum Darwinism" is what I am trying to describe here...
> I am of the opinion that imagination and creativity come from emotion
How do you see emotion as being necessary for creativity?
It sure seems that things like surprise (prediction failure) driven "curiosity" and exploration (I can't predict what will happen if I do X, so let me try) are behind creativity, pushing the boundaries of knowledge and discovering something new.
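The "surprise (prediction failure) driven curiosity" idea can be caricatured as a toy loop in which the agent's intrinsic reward is its own prediction error, so it keeps returning to whatever it can't yet predict. This is a minimal illustrative sketch, not any particular RL library's API; all names and numbers are made up:

```python
import random

class CuriousAgent:
    """Toy agent whose intrinsic reward is its own prediction error ("surprise")."""
    def __init__(self, actions, lr=0.5):
        self.actions = list(actions)
        self.lr = lr
        self.pred = {a: 0.0 for a in self.actions}      # predicted outcome per action
        self.surprise = {a: 1.0 for a in self.actions}  # most recent prediction error

    def choose(self):
        # Curiosity as an exploration policy: try the action we predict worst
        return max(self.actions, key=lambda a: self.surprise[a])

    def observe(self, action, outcome):
        error = abs(outcome - self.pred[action])        # surprise = prediction failure
        self.surprise[action] = error
        self.pred[action] += self.lr * (outcome - self.pred[action])
        return error                                    # used as intrinsic reward

# A toy world: "a" is noisy (never fully predictable), "b" is deterministic.
random.seed(0)
world = {"a": lambda: random.gauss(0, 1), "b": lambda: 1.0}
agent = CuriousAgent(world)
for _ in range(50):
    act = agent.choose()
    agent.observe(act, world[act]())
# Surprise for the predictable action should shrink toward zero, so the
# agent is pulled back toward whatever it still cannot predict.
```

The point of the sketch is only that "pushing the boundaries of knowledge" falls out of a purely mechanical signal, with no emotion anywhere in the loop.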
Perhaps you mean artistic creativity rather than scientific, in which case we're talking about different things, but I'd agree with you since the goal of much art is to elicit an emotional response in those engaging with it.
I don't think there is anything stopping us from implementing emotions, every bit as real as our own, in some form of artificial life if we want to, though. At the end of the day, emotion comes down to our primitive brain releasing chemicals like adrenaline and dopamine in response to certain stimuli, the functioning of our brain/body being affected by those chemicals, and the feedback loop of us then recognizing how our brain/body is operating differently ("I feel sad/excited/afraid", etc.). It's all very mechanical.
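That stimulus -> chemical release -> altered behavior -> self-recognition loop can be caricatured in a few lines. Purely illustrative; the class, thresholds, and the single "adrenaline" variable are made-up stand-ins for the real chemical cocktail:

```python
class Body:
    """Caricature of the emotion feedback loop: a stimulus releases a chemical,
    the chemical changes behavior, and a self-observing layer labels the state."""
    def __init__(self):
        self.adrenaline = 0.0   # stand-in for the whole neurochemical state

    def stimulus(self, threat_level):
        self.adrenaline += threat_level          # primitive brain releases chemicals

    def act(self):
        # The chemical changes how the system operates
        return "flee" if self.adrenaline > 0.5 else "graze"

    def introspect(self):
        # Self-observation: detecting our own altered mode of operation
        return "I feel afraid" if self.adrenaline > 0.5 else "I feel calm"

body = Body()
body.stimulus(0.9)
print(body.act(), "-", body.introspect())   # flee - I feel afraid
```

Nothing here requires biology: the "feeling" label is just the system reading back its own changed operating mode.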
FWIW I think consciousness is also very mechanical, but it seems somewhat irrelevant to the discussion of intelligence/AGI.
Yeah, it's our nature to feel good about it, which is what evolution does. If you are curious, and if exploration makes you feel good, you have a better chance of surviving, and you pass that trait along.
It seems we're basically in agreement, not arguing!
The only quibble I have is whether "feeling good" is the right way to describe how evolution has made us choose to engage in exploration/etc. I don't think it's quite as simple as evolution making things that are good for us feel good, and making things that are bad for us feel bad.
There are a bunch of neurotransmitters and hormones that control how we behave. Evolution discourages us from doing things that are bad for us via a range of emotions, including things like fear and disgust (not just "feeling bad"). It also encourages us to do, or keep on doing, things that are good for us via a range of emotions, from enjoyment (this is a tasty fruit) and contentment (this feels nice - I'll keep doing it) to curiosity - again, not just "feeling good". I think curiosity and exploration (which may lead to learning and discovery, which are good for us) are based around attention and focus... rather than feeling good, it feels interesting.
I'd say that motivation and feeling are largely unrelated.
We do what we do not because of motivation, but because that's what we've evolved to do.
Feeling really comes after the fact, or independently of it, when we're introspecting on what we've already done (courtesy of evolution), how we feel about it, or how we can explain it!
There is tons of evidence that this is the way our brains work - we do things, then (if asked, or if thinking about it) concoct post-hoc explanations of why we did them. An example of this is split-brain patients, where one half of the brain happily explains why the other half did what it did, despite there being no connection between the two (nor any subjective feeling by the patient that anything is amiss with their brain)!
But sure, we have feelings, closely related to qualia you could say. It does "feel like something" to be depressed, or excited, or inspired, or whatever. I don't see any big mystery to this - our brain is able to self-observe, and is not surprisingly able to detect its own varied patterns of operation (including ones induced by brain chemicals, natural or otherwise).
I presume where you may want to go with this is "but why does it feel like anything? Why is there any subjective experience at all (and would a machine have it too)?", and I think the answer is that this is just an emergent property of having a cognitive apparatus capable of self-observation. We can already see the glimmerings of this in LLMs - maybe a bad example, since their thoughts are derived from humans, but LLMs nonetheless self-report as if they are conscious, and have existential discussions about it on Moltbook. It's hard to imagine (to bring this back on topic) that LeCun's animal intelligence, basically something LLM-like that is trained from scratch (no baked-in human knowledge), wouldn't report the exact same thing.
Sort of ... (depends on what you mean by "feel" - detects, or is consciously aware of)
I think the way this works is something like this:
1) Our body / brain will detect that we're low on energy and release neurotransmitters indicating this
2) Our body may also provide physical indications of hunger based on empty stomach
3) These hunger-detection (or i-should-eat) signals may directly trigger behavior patterns related to finding food. In a primitive animal or baby this may be direct (baby starts crying, mom provides food), and as a human adult it may include triggering past patterns of food finding (go to the kitchen) when we felt like this before
4) In parallel with 3), the evolutionarily newer part of our brain will eventually recognize what's going on - that we're feeling hungry - and if we haven't already done anything about it we might make a more deliberate plan to do something about it: "i'm hungry, should get some lunch, what will i have .."
So, I think it's a combination of 3) and 4).
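Steps 1) through 4) amount to two pathways running in parallel: a fast reflexive trigger and a slower deliberative recognition. A minimal sketch of that structure, with hypothetical signal names and thresholds throughout:

```python
def reflexive(signals):
    # Step 3: hunger signals directly trigger a learned behavior pattern
    if signals["blood_sugar"] < 0.3 or signals["stomach_empty"]:
        return "walk to kitchen"
    return None

def deliberative(signals, already_acting):
    # Step 4: the newer brain recognizes the state, labels it, and plans -
    # but only if the reflexive pathway hasn't already handled it
    if signals["blood_sugar"] < 0.3 and not already_acting:
        return "i'm hungry, should get some lunch"
    return None

signals = {"blood_sugar": 0.2, "stomach_empty": True}
action = reflexive(signals)                          # fires first, habitually
thought = deliberative(signals, action is not None)  # here: nothing left to plan
```

The design point is that the deliberative layer watches state the reflexive layer already acted on, which is exactly where post-hoc "i'm feeling hungry" explanations would come from.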
I'd guess there are probably also some animals, maybe some humans, that feed instinctively, more or less all the time, and perhaps have more of an "i'm full, off switch" than an "i'm hungry, on switch".
I don't know - this is obviously partly guesswork. Certainly if you ask someone in the kitchen grabbing a snack why they are doing it, they will provide a post-hoc explanation of "i'm feeling hungry", even if the "decision" to do it was subconscious.
I'm just being honest. Anyone who tells you they know what it's like to be a bat, or any other animal, and what drives its behavior is lying.
"Animals eat when they're hungry" is a fine story, and obviously there is a lot of truth to it, but there are also obviously a lot of exceptions.
Do you really think the Grizzly, preparing to hibernate, who can barely walk because he is so fat, and is eating his 50th salmon of the day, is eating because he is hungry?
What about the baby birds, mouths open when parent returns with food? Hungry, or just instinct?
What about an alligator that can go months without eating? In captivity it will eat every day if food is offered, and get obese (which is why keepers don't offer it). Is it eating when hungry?
What about grazing animals that eat nutritionally poor food, and spend most of their waking hours eating? An elephant eats for 12-18 hours a day. Is it always hungry, or has evolution just given it survival instincts to behave like this?
When the bat leaves its cave at dusk to go catch insects, is it hungry? What is it like to be a bat?