Hacker News

This is very interesting. I don't see much discussion of interpretability in the day-to-day discourse of AI builders. I wonder if everyone assumes it is either solved, or too out of reach to be worth stopping and thinking about.



Most interpretability techniques have yet to be shown useful in everyday model pipelines. However, the field is working hard to change this.



