There are parallels. You feed an LLM context data, then tell it what to focus on so it can pull out what's relevant.
Maybe the entire process isn't like feeding an LLM, but that step is. Relevance identification is an interesting part of the process. The LLM can do a decent job of making connections, but it doesn't know what is relevant. Over the longer time frame of the thinking process, we constantly throw out data as irrelevant, or identify previously overlooked relevant data that needs to be added. That part of the process happens entirely outside the LLM.
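A minimal sketch of that idea: relevance is decided by the caller, before the model ever sees the prompt. The keyword-overlap scorer here is a deliberately naive stand-in (a real pipeline might use embeddings), and all names are hypothetical; the point is only that inclusion and exclusion of context happen outside the LLM.

```python
def score(chunk: str, focus: set[str]) -> int:
    """Count how many focus terms appear in the chunk (naive relevance)."""
    words = set(chunk.lower().split())
    return len(words & focus)

def build_prompt(chunks: list[str], focus_terms: list[str], keep: int = 2) -> str:
    """Filter context down to the `keep` most relevant chunks, then
    assemble the prompt. The LLM never sees what was thrown out."""
    focus = {t.lower() for t in focus_terms}
    ranked = sorted(chunks, key=lambda c: score(c, focus), reverse=True)
    relevant = [c for c in ranked[:keep] if score(c, focus) > 0]
    context = "\n".join(relevant)
    return f"Context:\n{context}\n\nFocus on: {', '.join(focus_terms)}"

chunks = [
    "The database uses a write-ahead log for durability.",
    "Our cafeteria menu changes on Tuesdays.",
    "Replication lag spikes when the log grows too fast.",
]
prompt = build_prompt(chunks, ["log", "replication"])
print(prompt)
```

The cafeteria chunk never reaches the prompt: the relevance judgment, crude as it is, was made before the model was involved, which mirrors how we continually prune and add material while thinking.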