Transfer Learning with MobileNet-v2 💡


In this blog we’ll talk about one of the most fundamental techniques for training deep learning models: Transfer Learning. Transfer Learning lets us build good deep learning models faster and with better accuracy by reusing models that were previously trained on other tasks.

Read on to find out more about Transfer Learning.

What is Transfer Learning?

  • In transfer learning, we use a pre-trained model, trained on a large and sufficiently general dataset, to serve as a generic model for our needs.
  • We can use these pre-trained models without having to train a model from scratch on a large dataset.

Basic Idea behind Transfer Learning

Figure representing the Transfer Learning Idea

General Workflow for using a Pre-trained Model:

  1. Examine and understand the data.
  2. Build an input pipeline for the model.
  3. Compose the model: (a) load the pre-trained model and its weights, and (b) stack the classification layers on top.
  4. Train your model on a GPU.
  5. Evaluate the results. (A code sketch of steps 1 and 2 follows this list; steps 3–5 appear in later snippets.)
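
Below is a minimal sketch of steps 1 and 2, assuming TensorFlow and the cats_vs_dogs dataset from tensorflow_datasets (the dataset used later in this post); names such as IMG_SIZE and preprocess are illustrative choices, not fixed APIs.

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# 1. Load and examine the data: cats_vs_dogs ships a single 'train' split,
#    so we slice it into train and validation portions.
(train_ds, val_ds), info = tfds.load(
    'cats_vs_dogs',
    split=['train[:80%]', 'train[80%:]'],
    as_supervised=True,
    with_info=True,
)

IMG_SIZE = 160  # MobileNet V2 accepts several input sizes; 160x160 used here

# 2. Build the input pipeline: resize every image and rescale pixels to
#    [-1, 1], the range MobileNet V2 was trained with.
def preprocess(image, label):
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    image = tf.keras.applications.mobilenet_v2.preprocess_input(image)
    return image, label

train_ds = train_ds.map(preprocess).batch(32).prefetch(tf.data.AUTOTUNE)
val_ds = val_ds.map(preprocess).batch(32).prefetch(tf.data.AUTOTUNE)
```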

Advantages of using Pre-Trained model for feature extraction:

  1. When working with a small dataset, we can take advantage of features learned by a model trained on a larger dataset in the same domain.
  2. This is done by instantiating the pre-trained model and adding a fully-connected classifier on top.
  3. The pre-trained model is ‘frozen’ and only the weights of the classifier get updated during training.
  4. This in turn helps us achieve better accuracy, even with a small dataset and fewer computations during training (see the sketch below).
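
A minimal sketch of step 3, continuing from the pipeline above (IMG_SIZE is repeated for self-containment; the single-logit head is one common choice for binary cats-vs-dogs classification, not the only one):

```python
import tensorflow as tf

IMG_SIZE = 160  # must match the input pipeline

# Instantiate the pre-trained base without its ImageNet classifier.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(IMG_SIZE, IMG_SIZE, 3),
    include_top=False,
    weights='imagenet',
)
base_model.trainable = False  # 'freeze' the base: only the new head trains

# Stack a small classification head on top of the frozen base.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),  # pool the bottleneck features
    tf.keras.layers.Dense(1),                  # single logit: cat vs. dog
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
```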

Implementation Code for Transfer Learning

Find the Jupyter Notebook with the code for Transfer Learning here. Compare your code against my comments for further understanding.

Here we use the MobileNet V2 model as the pre-trained base for our classifier.

MobileNet V2

  • MobileNet V2 is a model developed by Google.
  • It is pre-trained on ImageNet, a large dataset of 1.4M images spanning 1000 classes (a quick demonstration follows below).
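
To make the ImageNet pre-training concrete, the short snippet below (the random tensor is only a stand-in for a real preprocessed 224x224 image) shows that the full MobileNet V2, with its classification top intact, outputs scores over the 1000 ImageNet classes:

```python
import tensorflow as tf

# The full MobileNet V2, with weights pre-trained on ImageNet.
full_model = tf.keras.applications.MobileNetV2(weights='imagenet')

# A random tensor stands in for a real preprocessed 224x224 RGB image.
dummy_image = tf.random.uniform((1, 224, 224, 3), minval=-1.0, maxval=1.0)
preds = full_model.predict(dummy_image)  # shape (1, 1000)

# decode_predictions maps raw scores back to human-readable ImageNet labels.
print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3))
```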

Using the “Bottleneck Layer” for Feature Extraction

  • For the cats_vs_dogs dataset, we use the very last layer before the flatten operation for feature extraction. This layer is called the ‘bottleneck layer’.
  • The bottleneck layer’s features retain more generality than those of the final/top layer.
  • We therefore set include_top=False, so that we load a model that doesn’t include the classification layers at the top (see the snippet below).
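
Continuing from the snippets above, a quick sanity check shows what the bottleneck layer produces (the shapes assume a batch size of 32 and 160x160 inputs):

```python
# With include_top=False, the base model outputs the bottleneck feature map
# instead of 1000 ImageNet class scores.
image_batch, label_batch = next(iter(train_ds))  # one batch from the pipeline
feature_batch = base_model(image_batch)
print(feature_batch.shape)  # (32, 5, 5, 1280): a 5x5 grid of 1280 features
```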

Accuracy and Loss of the Model

  • Before training, the model has an initial validation accuracy of 0.54 and an initial loss of 0.63.
  • After training on the train set, it reaches an accuracy of 0.9501 and a loss of 0.1020.
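
These numbers come from evaluating the model before and after fitting, roughly as sketched below (continuing from the model above; the epoch count is illustrative, and your exact metrics will differ):

```python
# Baseline: evaluate the frozen base plus the untrained head.
loss0, accuracy0 = model.evaluate(val_ds)

# Train only the classifier head; the base stays frozen.
history = model.fit(train_ds, validation_data=val_ds, epochs=10)

# Re-evaluate after training.
loss1, accuracy1 = model.evaluate(val_ds)
print(f'accuracy: {accuracy0:.2f} -> {accuracy1:.4f}')
```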

Link to Dockerfile: https://hub.docker.com/repository/docker/afrozchakure/tl_tensorflow

Feel free to check out the code and run the Docker image on your local machine.
