Hacker News

You _can_ fire up TF to solve real problems without being Google.

Transfer learning is _the_ way to do image classification for most kinds of images in 2018, and is covered heavily in most classes. In the fast.ai class, you use transfer learning in the very first lesson to build a dog/cat classifier. Takes less than an hour to get to 97+% accuracy with no prior knowledge of deep learning.
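The mechanics behind that first lesson can be sketched in a few lines: keep a pretrained feature extractor frozen and train only a small new head on your own labels. Here's a toy version, with a fixed random projection standing in for real ImageNet conv features and least squares standing in for the usual gradient-based head training (all names and shapes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed random projection stands in for the pretrained convolutional
# base of an ImageNet model; during transfer learning it stays frozen.
W_frozen = rng.normal(size=(64, 16)) / 8.0

def features(x):
    # Frozen feature extractor: these weights are never updated.
    return np.maximum(x @ W_frozen, 0.0)  # ReLU

# Synthetic stand-in for a small dog/cat dataset: labels are a
# noiseless linear function of the frozen features.
X = rng.normal(size=(200, 64))
true_head = rng.normal(size=16)
y = np.where(features(X) @ true_head > 0, 1.0, -1.0)

# "Fine-tuning" here = fitting only the small linear head on top of
# the frozen features (least squares, for simplicity).
F = features(X)
w, *_ = np.linalg.lstsq(F, y, rcond=None)

acc = np.mean(np.sign(F @ w) == y)  # training accuracy on the toy data
```

The point of the sketch is the division of labor: all the hard representation learning lives in the frozen base, so the part you actually train is tiny, which is why an hour of fine-tuning on a small dataset can work at all.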



It sounds like you’re saying that transfer learning is helpful for image classification, which seems like an uncontentious position.

Are you really arguing that transfer learning would be useful from handwriting models to turbine-failure models?

Taking techniques that are successful in image classification and generalizing them to domains that don't look much like imaging seems like a stretch to me.

But perhaps I’ve missed some more convincing examples of the state of the art in transfer learning.


That's a good point; as far as I know, there are no examples of cross-domain transfer learning. There's new work in NLP on cross-task transfer learning, but that's as close as it gets at the moment.

It's hard to imagine there's anything to learn from handwriting images that could apply to turbine failure; that would require a much broader kind of multi-task model than anything we'll see for a while.


The argument is still false. You can very well get an advantage from vast amounts of data in similar domains, and more importantly, you can gain ML insights that aren't possible without it. What if ImageNet had not been open to the public? Would we have gotten the AlexNet breakthrough?


But the transfer takes place on a network that has already been trained on lots of dogs and cats, one that has learned to differentiate many kinds of dogs, and many kinds of cats, from lots of other objects.

Getting a useful dog/cat classifier out of a network that had been trained to differentiate kinds of boats, rather than kinds of mammals, would be closer to what the OP was aiming at.


I agree that using an ImageNet-trained model to classify a new set of subclasses should be easy, and is. Subsequent lessons show how to adapt the same approach to distinguishing dog breeds (more specific), and for identifying types of terrain in satellite images, which bear much less resemblance to anything in ImageNet.

That last one sounds pretty similar to your second sentence. Given what we know about transfer learning and CNNs, if we had a massive boat dataset, I bet it could be repurposed to do pretty well at cat/dog.


> Given what we know about transfer learning and CNNs, if we had a massive boat dataset, I bet it could be repurposed to do pretty well at cat/dog.

That's worth proving / disproving.




