
It sounds like you’re saying that transfer learning is helpful for image classification, which seems like an uncontentious position.

Are you really arguing that you think transfer learning would be useful from handwriting models to turbine failure models?

Using techniques that are successful with image classification as an example and generalizing to other domains that don’t look much like imaging seems like a stretch to me.

But perhaps I’ve missed some more convincing examples of the state of the art in transfer learning.
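For reference, the uncontentious image-classification case usually looks something like the sketch below: reuse a backbone pretrained on ImageNet and retrain only a new classifier head. This is a minimal illustration (it assumes torchvision >= 0.13 and a hypothetical 10-class target dataset with a `target_loader` DataLoader), not a claim about any particular production setup.

    # Minimal transfer-learning sketch: reuse ImageNet features,
    # retrain only the classifier head for a new 10-class task.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a ResNet-18 backbone pretrained on ImageNet.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained backbone so only the new head is updated.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer for the new task.
    model.fc = nn.Linear(model.fc.in_features, 10)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # Training loop over a hypothetical `target_loader` DataLoader:
    # for images, labels in target_loader:
    #     optimizer.zero_grad()
    #     loss = criterion(model(images), labels)
    #     loss.backward()
    #     optimizer.step()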



That's a good point; as far as I know there are no examples of cross-domain transfer learning. There's new work in NLP on cross-task transfer learning, but that's as close as it gets at the moment.

It's hard to imagine there's anything to learn from handwriting images that could apply to turbine failure; that would require a much broader kind of multi-task model than anything we'll see for a while.


The argument is still false. You can very well get an advantage from vast amounts of data in similar domains. More importantly, you can reach ML insights that aren't possible without it. What if ImageNet had not been open to the public? Would we have gotten the AlexNet breakthrough?



