Practical Implementation of Transfer Learning

In my previous article, I briefly introduced the concept of transfer learning. If you are new to these concepts, it is worth reading that post first.

Before we start, let me recap: what is transfer learning?

Transfer learning generally refers to a process where a model trained on one problem is used in some way on a second, related problem: the knowledge of an already trained model is applied to a different but related task. For example, knowledge gained while learning to recognize apples could apply when trying to recognize other fruits. In short, a model developed for one task is reused as the starting point for a model on a second task.

In deep learning, transfer learning is a technique whereby a neural network model is first trained on a problem similar to the one being solved. One or more layers from the trained model are then used in a new model trained on the problem of interest. The weights in the reused layers may serve as the starting point for training and be adapted in response to the new problem. This usage treats transfer learning as a type of weight initialization scheme, which is useful when the first related problem has far more labeled data than the problem of interest, and the similarity in problem structure lets that knowledge carry over.
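The layer-reuse idea above can be sketched in Keras. This is a minimal sketch, assuming a hypothetical 10-class target problem; `weights=None` is used here only to keep the sketch light, and in practice you would pass `weights='imagenet'` to load the pre-trained weights:

```python
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# re-use the convolutional layers of VGG16 as the starting point
# (weights='imagenet' in practice; weights=None keeps this sketch light)
base = VGG16(include_top=False, weights=None, input_shape=(224, 224, 3))

# freeze the re-used layers so only the new head is trained at first
for layer in base.layers:
    layer.trainable = False

# add a new output head for the hypothetical 10-class problem of interest
x = GlobalAveragePooling2D()(base.output)
output = Dense(10, activation='softmax')(x)
model = Model(inputs=base.input, outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy')
```

After the new head has been trained, some of the frozen layers can optionally be unfrozen and fine-tuned with a small learning rate.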

In this article I am going to show you the practical implementation of transfer learning: how to load different pre-trained models, and examples of using a pre-trained model as a feature-extraction preprocessor and as a feature extractor inside a new model. Here are the topics you will know after reading this article:

How To Load Models

Keras provides many pre-trained models that can be downloaded and used as the basis for related computer vision tasks.

Models are available via the Applications API, and include functions to load a model with or without the pre-trained weights, and to prepare data in the way a given model expects. The first time a pre-trained model is loaded, Keras will download the required model weights, which may take some time. Weights are stored in the .keras/models/ directory under your home directory and will be loaded from this location the next time they are used.

When loading a model with the "include_top" argument set to False, the fully-connected output layers of the model used to make predictions are not loaded, allowing a new output layer to be added and trained. For example:

# load model without the classifier output layers
from keras.applications.vgg16 import VGG16
model = VGG16(include_top=False)

When the "include_top" argument is False, the "input_tensor" argument may also be specified, allowing the expected fixed-size input of the model to be changed. For example:

# load model and specify a new input shape for images
from keras.applications.vgg16 import VGG16
from keras.layers import Input
input = Input(shape=(640, 480, 3))
model = VGG16(include_top=False, input_tensor=input)

A model without a top will output activations from the last convolutional or pooling layer. One way to summarize these activations for use in a classifier, or as a feature-vector representation of the input, is to add a global pooling layer, such as global max pooling or global average pooling. Keras provides this capability directly via the 'pooling' argument, which can be set to 'avg' or 'max'. The result is a vector that can be used as a feature descriptor for an input. For example:

# load model and specify a new input shape for images and avg pooling output
from keras.applications.vgg16 import VGG16
from keras.layers import Input
input = Input(shape=(640, 480, 3))
model = VGG16(include_top=False, input_tensor=input, pooling='avg')

Images can be prepared for a given model using its preprocess_input() function; e.g., pixel values are scaled in the same way as the images in the dataset the model was originally trained on. For example:

# prepare an image for the VGG16 model
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing.image import load_img, img_to_array
import numpy as np
# load the image from file (replace with the path to your image)
image = load_img('image.jpg', target_size=(224, 224))
image = img_to_array(image)
# add a batch dimension and apply the model-specific pixel scaling
image = np.expand_dims(image, axis=0)
prepared_image = preprocess_input(image)

We may wish to use a model architecture on our dataset but not use the pre-trained weights, instead initializing the model with random weights and training it from scratch. This can be achieved by setting the 'weights' argument to None instead of the default 'imagenet'. Additionally, the 'classes' argument can be set to the number of classes in your dataset, which will then be configured in the output layer of the model. For example:

# define a new model with random weights and 10 classes
from keras.applications.vgg16 import VGG16
from keras.layers import Input
input = Input(shape=(640, 480, 3))
model = VGG16(weights=None, input_tensor=input, classes=10)

Examples of Loading Pre-Trained Models

Now that we are familiar with the API, let's look at loading three of the more popular models from the Keras Applications API:

VGG (e.g. VGG16 or VGG19).

GoogLeNet (e.g. InceptionV3).

Residual Network (e.g. ResNet50).

They were examples that introduced specific architectural innovations, namely consistent and repeating structures (VGG), inception modules (GoogLeNet), and residual modules (ResNet).

How to Load the VGG16 Pre-trained Model

The VGG16 model was developed by the Visual Geometry Group (VGG) at Oxford. By default, the model expects color input images rescaled to 224×224 squares.

The model can be loaded as follows:

# loading the vgg16 model
from keras.applications.vgg16 import VGG16
# load model
model = VGG16()
# summarize the model
model.summary()
Running the example will load the VGG16 model and download the model weights if required. In this case, the model architecture is summarized to confirm that it was loaded correctly.

Layer (type)                 Output Shape              Param #
input_1 (InputLayer)         (None, 224, 224, 3)       0
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
flatten (Flatten)            (None, 25088)             0
fc1 (Dense)                  (None, 4096)              102764544
fc2 (Dense)                  (None, 4096)              16781312
predictions (Dense)          (None, 1000)              4097000
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
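As a quick check that a loaded model behaves as expected, we can push a single image through it. This is a minimal sketch using a random stand-in image; with `weights='imagenet'` (which downloads the weights), the 1,000 outputs would be real ImageNet class probabilities:

```python
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input

# weights=None keeps the sketch light; use weights='imagenet' in practice
model = VGG16(weights=None)

# a random stand-in for a real 224x224 color image
image = np.random.randint(0, 256, (1, 224, 224, 3)).astype('float32')
probs = model.predict(preprocess_input(image))
print(probs.shape)  # (1, 1000): one score per ImageNet class
```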

Load the InceptionV3 Pre-Trained Model

InceptionV3 is the third iteration of the Inception architecture, first developed for the GoogLeNet model by researchers at Google. The model expects color images with the square shape 299×299.

The model can be loaded as follows:

# loading the inception v3 model
from keras.applications.inception_v3 import InceptionV3
# load model
model = InceptionV3()
# summarize the model
model.summary()

Running the example will load the model, downloading the weights if required, and then summarize the model architecture to confirm it was loaded correctly.

Load the ResNet50 Pre-trained Model

The Residual Network, or ResNet for short, is a model that makes use of the residual module involving shortcut connections. It was developed by researchers at Microsoft. The model expects color images to have the square shape 224×224.

# loading the resnet50 model
from keras.applications.resnet50 import ResNet50
# load model
model = ResNet50()
# summarize the model
model.summary()

Running the example will load the model, downloading the weights if required, and then summarize the model architecture to confirm it was loaded correctly.

Examples of Using Pre-Trained Models

Now that we are familiar with how to load pre-trained models in Keras, let’s look at some examples of how they might be used in practice.

In this example, we will work with the Xception model.

We also need a dataset of images to work through this example.

Pre-Trained Model as Feature Extractor

The pre-trained model may be used as a standalone program to extract features from new photos. The extracted features of a photo are a vector of numbers that describes the content of the photo. These features can then be used as input in the development of a new model.

The layers just before the output layer provide a rich set of features describing a given input image and can be useful input when training a new model for another task.

In this example we load Xception without its classifier top (include_top=False) and add global average pooling, so the model outputs a 2,048-element feature vector for each image rather than class predictions.

# importing required libraries
import os
import numpy as np
from pickle import dump, load
from PIL import Image
from tqdm import tqdm
from keras.applications.xception import Xception, preprocess_input

# function to load the model and extract features for every image in a directory
def extract_features(directory):
        # load Xception without the classifier top; avg pooling gives a 2048 vector
        model = Xception(include_top=False, pooling='avg')
        features = {}
        for img in tqdm(os.listdir(directory)):
            filename = directory + "/" + img
            image = Image.open(filename)
            image = image.resize((299, 299))
            image = np.expand_dims(np.array(image), axis=0)
            # scale pixels to [-1, 1], as preprocess_input() would
            image = image / 127.5
            image = image - 1.0
            feature = model.predict(image)
            features[img] = feature
        return features
# each value in the dict is a 2048-element feature vector
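The dump and load imports above hint at the next step: persisting the extracted features so extraction only has to run once. Here is a minimal sketch of that step, where the features dict is a stand-in for the output of extract_features() and the filename features.p is a hypothetical choice:

```python
from pickle import dump, load
import numpy as np

# stand-in for the dict returned by extract_features():
# one 2048-element Xception feature vector per image filename
features = {'example.jpg': np.zeros((1, 2048), dtype='float32')}

# persist the features so extraction only has to run once
with open('features.p', 'wb') as f:
    dump(features, f)

# reload them later when training a downstream model
with open('features.p', 'rb') as f:
    reloaded = load(f)
print(reloaded['example.jpg'].shape)  # (1, 2048)
```

The reloaded dictionary can then feed the training of a downstream model without re-running the (much slower) feature extraction.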
