
I think that's a cool way of looking at it. To build on what you said: in both cases (dreaming and LLMs), the behavior might come from a mix of what the system is trying to do (presuming dreams have a purpose), the resources it has to do it with, the context it has to work within to get the point across, and maybe something related to the abilities of the user. Suppose, for fun, that there's a subsystem that understands and runs dreaming: it only has so much time, it's working in a weird modality, and it's trying to accomplish something, so maybe it's good enough to just serve up the story no matter how muddled it is. What you're describing in LLMs might be a similar thing. Dreams have a ticking clock, where the brain chemistry is literally changing, the opportunity for that type of processing is about to disappear, and eventually the person will wake up; LLMs have a finite context window. Fun thinking, anyway.


