Transfer Learning with MobileNet-V2
Learn the basics of Transfer Learning with code implementation

In this blog we'll talk about one of the most fundamental techniques for training deep learning models: Transfer Learning. Transfer learning lets us build good deep learning models faster and with better accuracy by reusing models that have already been trained on large datasets.
Read on to find out more about Transfer Learning.
What is Transfer Learning?
- In transfer learning, we use a pre-trained model which is trained on a large and general enough dataset to serve as a generic model for our needs.
- We can use these pre-trained models without having to train a model from scratch on a large dataset.
Basic Idea behind Transfer Learning

General Workflow for using a Pre-trained Model:
- Examine and understand the data.
- Build an input pipeline for the model (see the sketch after this list).
- Compose the model: a. Load the pre-trained model and its pre-trained weights. b. Stack the classification layers on top.
- Train your model on a GPU.
- Evaluate the results.
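As a rough illustration of the first three steps, here is a minimal sketch of an input pipeline for the cats_vs_dogs dataset, assuming TensorFlow 2.x and the tensorflow_datasets package (the split percentages, image size, and batch size here are my assumptions, not necessarily the notebook's):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

IMG_SIZE = 160   # MobileNet V2 accepts 160x160 inputs, among other sizes
BATCH_SIZE = 32

# 1. Load and examine the data (cats_vs_dogs ships with TFDS).
(train_ds, val_ds), info = tfds.load(
    'cats_vs_dogs',
    split=['train[:80%]', 'train[80%:]'],
    with_info=True,
    as_supervised=True,
)

# 2-3. Build the input pipeline: resize, rescale to [-1, 1] (the range
# MobileNet V2 expects), then batch and prefetch.
def preprocess(image, label):
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    image = (tf.cast(image, tf.float32) / 127.5) - 1.0
    return image, label

train_ds = train_ds.map(preprocess).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
val_ds = val_ds.map(preprocess).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
```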

Advantages of using a Pre-trained Model for Feature Extraction:
- When working with a small dataset, we can take advantage of features learned by a model trained on a larger dataset in the same domain.
- This is done by instantiating the pre-trained model and adding a fully-connected classifier on top.
- The pre-trained model is "frozen" and only the weights of the classifier get updated during training (sketched below).
- This helps us achieve better accuracy even with a small dataset, while performing far fewer computations during training.
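To make the freezing step concrete, here is a minimal sketch of instantiating a pre-trained base, freezing it, and stacking a classifier on top. It assumes the MobileNet V2 base introduced in the next section; the input size and head layers are illustrative choices:

```python
import tensorflow as tf

# Pre-trained base without its classification layers (see the next section).
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,
    weights='imagenet',
)
base_model.trainable = False  # "frozen": the pre-trained weights stay fixed

# Only the weights of this small classifier head are updated during training.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),  # a single logit: cat vs. dog
])
```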
Implementation Code for Transfer Learning
Find the Jupyter Notebook with the code for Transfer Learning here, and compare your code against my comments in it for further understanding.
Here we use the MobileNet V2 model as the base for our model.
MobileNet V2
- MobileNet V2 is a model developed by Google.
- It is trained on the ImageNet dataset, a large dataset of 1.4M images and 1000 classes.

Using the "Bottleneck Layer" for Feature Extraction
- For the cats_vs_dogs dataset we use the very last layer before the flatten operation for feature extraction. This layer is called the "bottleneck layer".
- The bottleneck layer's features retain more generality than those of the final/top layer.
- Here we set include_top=False so that we load a model that doesn't include the classification layers at the top (see the snippet below).
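A minimal sketch of loading the model without its top and inspecting the bottleneck features it produces (the 160x160 input size is an assumption; the output shape follows from it):

```python
import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,   # drop the ImageNet classification layers
    weights='imagenet',
)

# A batch of 32 images of shape 160x160x3 is converted into a
# 5x5x1280 block of bottleneck features.
image_batch = tf.zeros((32, 160, 160, 3))
feature_batch = base_model(image_batch)
print(feature_batch.shape)  # (32, 5, 5, 1280)
```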
Accuracy and Loss of the Model
- Before training, the model has an initial accuracy of 0.54 and an initial loss of 0.63 on the validation set.
- After training on the train set, it reaches an accuracy of 0.9501 and a loss of 0.1020 (see the snippet below for how these numbers are produced).
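A sketch of the evaluate-train-evaluate loop behind these numbers, building on the model and datasets from the earlier sketches (the optimizer, learning rate, and epoch count are my assumptions):

```python
# model, train_ds and val_ds as built in the earlier sketches.
model.compile(
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=['accuracy'],
)

# Evaluating before training gives the "initial" numbers: the randomly
# initialized head performs at roughly chance accuracy.
loss0, accuracy0 = model.evaluate(val_ds)

# Training updates only the classifier head; the frozen base is untouched.
history = model.fit(train_ds, epochs=10, validation_data=val_ds)

# Evaluate again to get the post-training accuracy and loss.
model.evaluate(val_ds)
```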
Link to the Docker image: https://hub.docker.com/repository/docker/afrozchakure/tl_tensorflow
Feel free to check out the code and run the Docker image on your local machine.