Hacker News

Most LLMs output a whole bunch of tokens to help them reason through a problem, often called chain of thought, before giving the actual response. This has been shown to improve performance significantly, but it uses a lot of tokens.
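As a rough illustration of the cost, you can split a chain-of-thought response into its reasoning tokens and its final answer and compare the sizes. This is a minimal sketch: the `<think>…</think>` delimiter format, the response string, and the whitespace split (standing in for a real tokenizer) are all assumptions for illustration.

```python
# Hypothetical chain-of-thought output: reasoning between <think> tags,
# followed by the actual answer. The tag format and the text itself are
# made up for this example.
response = (
    "<think>The car wash is 50 m away. Driving there defeats the point "
    "of washing the car first, and walking 50 m is trivial.</think>"
    "Walk: it's only 50 meters."
)

# Separate the reasoning span from the final answer.
reasoning, _, answer = response.partition("</think>")
reasoning = reasoning.removeprefix("<think>")

# Whitespace split as a crude proxy for token count.
reasoning_tokens = len(reasoning.split())
answer_tokens = len(answer.split())

print(f"reasoning tokens: {reasoning_tokens}, answer tokens: {answer_tokens}")
```

Even in this toy case the reasoning span is several times longer than the answer, which is the token-cost trade-off the comment describes.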



Yup, they all need to do this in case you're asking them a really hard question like: "I really need to get my car washed, the car wash place is only 50 meters away, should I drive there or walk?"


