
> I think you're confusing prediction with ratiocination.

No, exactly not! Prediction is probabilistic and liable to be wrong, with those probabilities needing updating and refinement.

Note that I'm primarily talking about prediction as the brain does it - not about LLMs, although LLMs have proved the power of prediction as a (the?) learning mechanism for language. Note too that the words predicted by LLMs are also just probabilities. The model samples from these probabilities (per a selected sampling "temperature" - a degree of randomness) to pick which word to actually output.
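To make the sampling step concrete, here's a minimal sketch of temperature sampling over a next-word distribution. The function name, logit values, and temperature settings are illustrative, not any particular LLM's API:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw model scores (logits).

    Temperature divides the logits before softmax: low values sharpen
    the distribution (near-deterministic picks), high values flatten it
    (more random picks).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Three candidate tokens with raw scores 2.0, 1.0, 0.1:
logits = [2.0, 1.0, 0.1]
sample_next_token(logits, temperature=0.1)   # almost always index 0
sample_next_token(logits, temperature=10.0)  # close to a uniform pick
```

At temperature 0.1 the scaled scores become 20, 10, 1, so nearly all probability mass lands on the highest-scoring token; at 10.0 the scores are nearly equal and the choice is close to random.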

The way the brain learns, from a starting point of knowing nothing, is to observe and predict that the same will happen next time, which it often will, once you've learnt which observations are appropriate to include or exclude from that prediction. This is all highly probabilistic, which is appropriate given that the thing being predicted (what'll happen if I throw a rock at that tiger?) is often semi-random in nature.
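The observe-and-predict loop above can be sketched as a running probability estimate that gets updated after each observation. This is purely illustrative (the class and its smoothing choice are my own invention, and the brain presumably does nothing this simple):

```python
class Predictor:
    """Keep a running estimate of P(same thing happens again),
    updated after every observation."""

    def __init__(self):
        # Laplace smoothing: start from "knowing nothing", i.e. 1/2
        self.successes = 1
        self.trials = 2

    def predict(self):
        # Current probability that the event repeats
        return self.successes / self.trials

    def observe(self, happened):
        # Refine the estimate with one more observation
        self.trials += 1
        if happened:
            self.successes += 1

p = Predictor()
p.predict()           # 0.5 before any evidence
for _ in range(8):
    p.observe(True)   # the same thing keeps happening
p.predict()           # estimate has risen to 0.9
```

The point of the sketch: the prediction is never a certainty, just a probability that gets sharper as observations accumulate, and a surprising outcome (observe(False)) pulls it back down.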

We can better rephrase "intelligence is ability to predict well", as "intelligence derives from ability to predict well". It does of course also depend on experience.

One reason LLMs are so expensive to train is that they learn in an extremely brute-force fashion from the highly redundant and repetitive output of others. Humans don't do that - if we're trying to learn something, or are curious about it, we'll run focused experiments such as "Let's see what happens if I do this, since I don't already know", or "If I'm understanding this right, then if I do X then Y should happen".


