Using a pre-trained deep learning model as a feature extractor is a proven way to improve classification accuracy. One of the most famous models is Oxford's VGG16, which was trained on over a million images to recognize 1,000 classes ranging from animals to vehicles and other everyday objects.
Now, using VGG16 as part of another neural network is relatively easy, especially if you are using Keras. You simply remove the top layers (the fully-connected layers used as the classifier) and pass its output tensor as the input to your own model, as shown by some nice examples here.
But what if we want to use a non-neural-network classifier, one that Keras does not provide?
One of the easiest solutions I found is to append a Flatten layer on top of the stripped VGG16 (no top), then call predict on the dataset to get a NumPy array of flat (1D) features:
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Input, Flatten

# VGG16 standard input shape
EXPECTED_DIM = (224, 224, 3)

# Load VGG16 without its fully-connected classifier layers
vgg16 = VGG16(weights='imagenet', include_top=False)

input = Input(shape=EXPECTED_DIM, name='input')
output = vgg16(input)
x = Flatten(name='flatten')(output)
extractor = Model(inputs=input, outputs=x)

# dataset is a numpy array of tensors shaped EXPECTED_DIM.
# features will be a numpy array of shape (dataset_rows, 25088),
# since the last convolutional block outputs 7 x 7 x 512 = 25088 values
features = extractor.predict(dataset)
Then you can easily pass the features (and the labels) to a classifier from another library, such as the famous scikit-learn. You can also use the same method with the other pre-trained models generously provided by Keras.
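As a quick sketch of that hand-off, here is what training a scikit-learn classifier on the extracted features might look like. Note that the random `features` and `labels` arrays below are stand-ins so the snippet runs on its own; in practice you would use the `features` array produced by `extractor.predict(dataset)` above, plus your real labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data: in practice, features = extractor.predict(dataset)
# and labels come from your dataset. Shape matches the 25088-dim
# flattened VGG16 output described above.
rng = np.random.default_rng(0)
features = rng.normal(size=(60, 25088)).astype("float32")
labels = rng.integers(0, 2, size=60)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

# Any scikit-learn classifier works here; logistic regression is a
# common, cheap baseline on top of deep features.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

With real VGG16 features (instead of random noise) a linear model like this often performs surprisingly well, which is exactly why this extract-then-classify pattern is so popular.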