TensorFlow vs. Keras: A Comparison by Building a Model for Image Classification

Written by dataturks | Published 2020/01/23
Tech Story Tags: machine-learning | tensorflow-vs-keras | tensorflow | keras | image-classification | good-company | image-recognition-in-photos | face-recognition

TLDR: TensorFlow is one of the most widely used libraries for developing deep learning models and has been adopted by many practitioners in their daily experiments. In this post I'll help you create a powerful image classifier using TensorFlow, trained for 4000 steps on a GCP instance with an Nvidia Tesla K80; it gave an accuracy of about 91% in roughly 10 minutes. The post also covers dataset creation: if you have an unstructured dataset with all the images in a single folder, you can manually label it with classes in Dataturks and download a JSON file which has all the details of each image with its class embedded in it.

Yes, as the title says, this is a very common debate among data scientists (maybe even for you!): some say TensorFlow is better, others say Keras is way better! Let's see how this plays out in practice in the case of image classification.
Before that, let's introduce these two terms, Keras and TensorFlow, and help you build a powerful image classifier within 10 minutes!

TensorFlow:

TensorFlow is one of the most widely used libraries for developing deep learning models, and it has been adopted by many practitioners in their daily experiments. Could you imagine that Google built Tensor Processing Units (TPUs) just to deal with tensors? Yes, they have: a separate class of accelerators called TPUs, which provide enormous computational power for handling TensorFlow deep learning models.

Time to BUILD IT!

I'll now help you create a powerful image classifier using TensorFlow. Wait, what is a classifier? It's just a simple question you throw at your TensorFlow code: is the given image a rose or a tulip? So, first things first, let us install TensorFlow on the machine. The official documentation provides two versions, CPU and GPU. For the CPU version:
pip install tensorflow
And please note, I am writing this blog after experimenting on a GPU, NOT a CPU. The GPU installation is neatly documented here.
Now, let us take Google's TensorFlow for Poets experiment to train a model. This Google repository has excellent scripts for easy experiments on images; it is concise and sufficient for our purpose. Remember the word powerful which I used before? That word comes into action when we use something called transfer learning: a powerful way of reusing pre-trained models, which have been trained for days or even weeks, and retraining only the final layer to fit our own set of classes.
Inception V3 is a very good model, ranked 2nd in the 2015 ImageNet Challenge for image classification. It is often recommended for transfer learning on datasets with a small number of images per class.
(Inception V3)
Now clone the git repository:
git clone https://github.com/googlecodelabs/tensorflow-for-poets-2
cd tensorflow-for-poets-2
Now you get to choose your images. All you have to do is arrange the dataset folder in the fashion below.
 — Dataset folder -
       class1/
           — image1
           — image2
       class2/
           — image1
           — image2
(FLOWER DATA)
It should look something like the above (ignore the image.py). I got the flower_photos folder with:
curl http://download.tensorflow.org/example_images/flower_photos.tgz | tar xz -C tf_files

Creating the Dataset

You can use whatever images you'd like. The more the better (aim for a few thousand). Separate them by categories as done above, and make sure they are in a folder called tf_files.
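A quick way to sanity-check that layout is to count the images per class. This is just a small helper sketch of my own, assuming the flower_photos folder downloaded above sits inside tf_files; adjust the path to your own dataset folder:
import glob
import os

dataset_dir = "tf_files/flower_photos"  # adjust to your dataset folder
for class_dir in sorted(glob.glob(os.path.join(dataset_dir, "*"))):
    if os.path.isdir(class_dir):
        # each sub-folder is one class; print its name and how many images it holds
        num_images = len(glob.glob(os.path.join(class_dir, "*")))
        print(os.path.basename(class_dir), num_images)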
You can download pre-existing datasets for various use cases, from cancer detection to characters in Game of Thrones. Here are various image classification datasets.
Or, if you have a unique use case, you can create your very own dataset for it. You can download images from the web and, to build a big dataset in no time, use an annotation tool like Dataturks, where you upload the images and tag them manually in a jiffy. Better yet, the output from Dataturks can easily be used to build the tf_files folder.
(Building dataset using Dataturks)
I found a great plugin that enables batch image downloading on Google Chrome — this + Dataturks will make building training data a cakewalk. Linked here.
You can try this with the image_classification tool of Dataturks here. The best feature this tool provides: if you have an unstructured dataset with all the images in a single folder, you can manually label each image with its class and then download a JSON file that has all the details of the images with the classes embedded in it. Then use the scripts given there for Keras and TensorFlow:
-------> for tensorflow
python3 tensorflow_json_parser.py --json_file "flower.json" --dataset_path "Dataset5/"
-------> for keras
python3 keras_json_parser.py --json_file "flower.json" --dataset_path "Dataset5/" --train_percentage 80 --validation_percentage 20
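If you are curious what such a parser does under the hood, here is a rough sketch of the idea. The field names (content for the image URL, annotation/labels for the classes) are assumptions based on the Dataturks export format, so check your actual JSON file before relying on this:
import json
import os
import urllib.request

def build_dataset(json_file, dataset_path):
    # one JSON object per line: download each image into a folder named after its label
    with open(json_file) as f:
        for line in f:
            record = json.loads(line)
            url = record["content"]                    # assumed field holding the image URL
            label = record["annotation"]["labels"][0]  # assumed field holding the class label
            class_dir = os.path.join(dataset_path, label)
            os.makedirs(class_dir, exist_ok=True)
            file_name = os.path.join(class_dir, os.path.basename(url))
            urllib.request.urlretrieve(url, file_name)

build_dataset("flower.json", "Dataset5/")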
Training
Now it's time to train the model. In the tensorflow-for-poets-2 folder there is a folder called scripts, which has everything required for retraining a model. retrain.py has a special way of cropping and scaling the images, which is quite neat.
Then use the following command to train, where the option names themselves describe the required paths:
python3 -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --model_dir=tf_files/models/inception \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --image_dir=tf_files/flower_photos
This downloads the Inception model and retrains the last layer using the training folder and the arguments given. I trained it for 4000 steps on a GCP instance with a 12GB Nvidia Tesla K80 and 7GB of VRAM.
The training was done with an 80-20 train-test split and, as we can see above, it gave a test accuracy of 91%. Now it's time to test! We have a .pb file in tf_files/ which can be used for testing. The following changes have been added to label_image.py:
from PIL import Image, ImageDraw, ImageFont

results = results.tolist()
image = Image.open(file_name)
fonttype = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18)

draw = ImageDraw.Draw(image)
# draw the top label and its confidence onto the image, then save a copy
draw.text(xy=(5, 5), text=str(labels[results.index(max(results))]) + ":" + str(max(results)), fill=(255, 255, 255, 128), font=fonttype)
image.show()
image.save(file_name.split(".")[0] + "1" + ".jpg")
The above code draws the predicted class and its confidence on the image being tested and saves the result. The confidence percentages for a random image are shown below.
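With these changes in place, a test run against the retrained graph looks something like this (the flags are the ones used by the tensorflow-for-poets label_image script; the image name is just an example):
python3 -m scripts.label_image \
  --graph=tf_files/retrained_graph.pb \
  --image=rose.jpeg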
A few outputs from testing are shown:
(a collage of few outputs comprising of all classes)
As we can see, the results are quite promising for the task at hand.

KERAS:

Keras is a high-level API built on TensorFlow (and can be used on top of Theano too). It is more user-friendly and easier to use than TensorFlow. If you are a newbie to deep learning and want to write a new model from scratch, I would suggest Keras for its ease in both readability and writability. It can be installed with:
pip install keras
Since Keras is a wrapper over TensorFlow, the same CPU vs. GPU compatibility considerations apply here too.
Since we have to carry out the same task of classifying flowers using transfer learning with the Inception model, Keras loads the model in a standard format through its applications API:
from keras.applications.inception_v3 import preprocess_input
Keras has a standard format for loading a dataset: instead of giving the class folders directly within one dataset folder, we divide the train and test data manually and arrange them in the following manner. I have used the same dataset which I downloaded in the TensorFlow section and made a few changes as directed below (a small helper to create this split is sketched after the layout).
 — Dataset folder -
  — train/ 
       class1/
          — image1
          — image2
       class2/
          — image1
          — image2
  — test/ 
      class1/
          — image1
          — image2
      class2/
          — image1
          — image2
It should look something like below:
and, following that, the train and test folders should each contain class folders as shown below:
(TRAIN FOLDER)
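If you downloaded flower_photos with the curl command from the TensorFlow section, a small helper like the rough sketch below (my own, not part of the original scripts; the paths are only examples) can produce this 80-20 layout:
import glob
import os
import random
import shutil

def split_dataset(src_dir, dst_dir, train_fraction=0.8):
    # copy src_dir/<class>/* into dst_dir/train/<class>/ and dst_dir/test/<class>/
    for class_dir in sorted(glob.glob(os.path.join(src_dir, "*"))):
        if not os.path.isdir(class_dir):
            continue
        class_name = os.path.basename(class_dir)
        images = glob.glob(os.path.join(class_dir, "*"))
        random.shuffle(images)
        cut = int(len(images) * train_fraction)
        for split, subset in (("train", images[:cut]), ("test", images[cut:])):
            out_dir = os.path.join(dst_dir, split, class_name)
            os.makedirs(out_dir, exist_ok=True)
            for img in subset:
                shutil.copy(img, out_dir)

split_dataset("tf_files/flower_photos", "flower_photos", train_fraction=0.8)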
As we are now done with the setup of the dataset, it's time for training! I have written a small piece of code to do the training, which goes below:
import os
import sys
import glob
import argparse
import matplotlib.pyplot as plt

from keras import __version__
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import SGD

IM_WIDTH, IM_HEIGHT = 299, 299  # fixed input size for InceptionV3
NB_EPOCHS = 3
BAT_SIZE = 32
FC_SIZE = 1024
NB_IV3_LAYERS_TO_FREEZE = 172


def get_nb_files(directory):
    """Get number of files by searching directory recursively"""
    if not os.path.exists(directory):
        return 0
    cnt = 0
    for r, dirs, files in os.walk(directory):
        for dr in dirs:
            cnt += len(glob.glob(os.path.join(r, dr + "/*")))
    return cnt


def setup_to_transfer_learn(model, base_model):
    """Freeze all base layers and compile the model"""
    for layer in base_model.layers:
        layer.trainable = False
    model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])


def add_new_last_layer(base_model, nb_classes):
    """Add last layer to the convnet
    Args:
        base_model: keras model excluding top
        nb_classes: # of classes
    Returns:
        new keras model with last layer
    """
    x = base_model.output
    x = GlobalAveragePooling2D()(x)
    x = Dense(FC_SIZE, activation='relu')(x)  # new FC layer, random init
    predictions = Dense(nb_classes, activation='softmax')(x)  # new softmax layer
    model = Model(input=base_model.input, output=predictions)
    return model


def setup_to_finetune(model):
    """Freeze the bottom NB_IV3_LAYERS and retrain the remaining top layers.
    note: NB_IV3_LAYERS corresponds to the top 2 inception blocks in the inceptionv3 arch
    Args:
        model: keras model
    """
    for layer in model.layers[:NB_IV3_LAYERS_TO_FREEZE]:
        layer.trainable = False
    for layer in model.layers[NB_IV3_LAYERS_TO_FREEZE:]:
        layer.trainable = True
    model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])


def train(args):
    """Use transfer learning and fine-tuning to train a network on a new dataset"""
    nb_train_samples = get_nb_files(args.train_dir)
    nb_classes = len(glob.glob(args.train_dir + "/*"))
    nb_val_samples = get_nb_files(args.val_dir)
    nb_epoch = int(args.nb_epoch)
    batch_size = int(args.batch_size)

    # data prep
    train_datagen = ImageDataGenerator(
        preprocessing_function=preprocess_input,
        rotation_range=30,
        width_shift_range=0.2,
        height_shift_range=0.2,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True
    )
    test_datagen = ImageDataGenerator(
        preprocessing_function=preprocess_input,
        rotation_range=30,
        width_shift_range=0.2,
        height_shift_range=0.2,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True
    )

    train_generator = train_datagen.flow_from_directory(
        args.train_dir,
        target_size=(IM_WIDTH, IM_HEIGHT),
        batch_size=batch_size,
    )
    validation_generator = test_datagen.flow_from_directory(
        args.val_dir,
        target_size=(IM_WIDTH, IM_HEIGHT),
        batch_size=batch_size,
    )

    # setup model
    base_model = InceptionV3(weights='imagenet', include_top=False)  # include_top=False excludes final FC layer
    model = add_new_last_layer(base_model, nb_classes)

    # transfer learning: train only the new last layer
    setup_to_transfer_learn(model, base_model)
    history_tl = model.fit_generator(
        train_generator,
        nb_epoch=nb_epoch,
        samples_per_epoch=nb_train_samples,
        validation_data=validation_generator,
        nb_val_samples=nb_val_samples,
        class_weight='auto')

    # fine-tuning: unfreeze the top inception blocks and keep training
    setup_to_finetune(model)
    history_ft = model.fit_generator(
        train_generator,
        samples_per_epoch=nb_train_samples,
        nb_epoch=nb_epoch,
        validation_data=validation_generator,
        nb_val_samples=nb_val_samples,
        class_weight='auto')

    model.save(args.output_model_file)

    if args.plot:
        plot_training(history_ft)


def plot_training(history):
    acc = history.history['acc']
    val_acc = history.history['val_acc']
    loss = history.history['loss']
    val_loss = history.history['val_loss']
    epochs = range(len(acc))

    plt.plot(epochs, acc, 'r.')
    plt.plot(epochs, val_acc, 'r')
    plt.title('Training and validation accuracy')

    plt.figure()
    plt.plot(epochs, loss, 'r.')
    plt.plot(epochs, val_loss, 'r-')
    plt.title('Training and validation loss')
    plt.show()


if __name__ == "__main__":
    a = argparse.ArgumentParser()
    a.add_argument("--train_dir")
    a.add_argument("--val_dir")
    a.add_argument("--nb_epoch", default=NB_EPOCHS)
    a.add_argument("--batch_size", default=BAT_SIZE)
    a.add_argument("--output_model_file", default="inceptionv3-ft.model")
    a.add_argument("--plot", action="store_true")

    args = a.parse_args()
    if args.train_dir is None or args.val_dir is None:
        a.print_help()
        sys.exit(1)

    if (not os.path.exists(args.train_dir)) or (not os.path.exists(args.val_dir)):
        print("directories do not exist")
        sys.exit(1)

    train(args)
This code is neatly written and can be easily understood from the arguments passed to the command below:
python3 inception_train.py \
  --train_dir flower_photos/train \
  --val_dir flower_photos/validation \
  --nb_epoch 50 \
  --batch_size 10 \
  --output_model_file inception_yo1.model
The training on my GPU took around 1 minute per epoch, with 292 steps per epoch, and ran for 50 epochs (which is far more than needed!) with a batch size of 10 and an 80-20 data split.
Whoop! We are done with training and achieved a test accuracy of ~91% and a loss of 0.38. The model has been saved as the .model file given above (inception_yo1.model), which can be loaded again and tested. To do that, another script has been written that also plots the predicted class on the image and saves it. The testing script goes as below:
import sys
import argparse
import numpy as np
import requests
import matplotlib.pyplot as plt
from io import BytesIO
from PIL import Image, ImageDraw, ImageFont
from keras.preprocessing import image
from keras.models import load_model
from keras.applications.inception_v3 import preprocess_input

target_size = (299, 299)  # fixed input size for the InceptionV3 architecture (matches training)


def predict(model, img, target_size):
    """Run model prediction on image
    Args:
        model: keras model
        img: PIL format image
        target_size: (w,h) tuple
    Returns:
        list of predicted labels and their probabilities
    """
    if img.size != target_size:
        img = img.resize(target_size)

    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    preds = model.predict(x)
    return preds[0]


def plot_preds(image, preds):
    """Displays image and the predicted probabilities in a bar graph
    Args:
        image: PIL image
        preds: list of predicted labels and their probabilities
    """
    plt.imshow(image)
    plt.axis('off')

    plt.figure()
    labels = ("daisy", "dandelion", "roses", "sunflower", "tulips")
    plt.barh([0, 1, 2, 3, 4], preds, alpha=0.5)
    plt.yticks([0, 1, 2, 3, 4], labels)
    plt.xlabel('Probability')
    plt.xlim(0, 1.01)
    plt.tight_layout()
    plt.show()


if __name__ == "__main__":
    a = argparse.ArgumentParser()
    a.add_argument("--image", help="path to image")
    a.add_argument("--image_url", help="url to image")
    a.add_argument("--model")
    args = a.parse_args()

    if args.image is None and args.image_url is None:
        a.print_help()
        sys.exit(1)

    model = load_model(args.model)
    if args.image is not None:
        labels = ("daisy", "dandelion", "roses", "sunflower", "tulips")
        image1 = Image.open(args.image)
        preds = predict(model, image1, target_size)
        print(preds)
        preds = preds.tolist()
        plot_preds(image1, preds)
        # draw the top prediction and its confidence on the image and save a copy
        fonttype = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18)
        draw = ImageDraw.Draw(image1)
        draw.text(xy=(5, 5), text=str(labels[preds.index(max(preds))]) + ":" + str(max(preds)), fill=(255, 255, 255, 128), font=fonttype)
        image1.show()
        image1.save((args.image).split(".")[0] + "1" + ".jpg")
(inception_test.py)
This script can be run by pointing it at the saved model and a test image:
python3 inception_test.py --image rose.jpeg --model inception_yo1.model
The predicted confidence percentages over all classes are output in this order:
[daisy,dandelion,roses,sunflower,tulip]
and below are a few outputs with their graphs:
(tested images with their probability graphs)
Finally! You have learnt how to build a powerful classifier using both Keras and TensorFlow. But which one is best is still a question in our heads! So let us do a comparative study, based only on this classification task for now.
The whole train and test code for Keras, along with the changed scripts for TensorFlow, is available in my GitHub here.
Prototyping:
If you really want to write code quickly and build a model, then Keras is the way to go. We can build complex models within minutes! The Model and the Sequential APIs are so powerful and so easy to use that they won't even give you a sense that you are building powerful models.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Generate dummy data
import numpy as np
data = np.random.random((1000, 100))
labels = np.random.randint(2, size=(1000, 1))

# Train the model, iterating on the data in batches of 32 samples
model.fit(data, labels, epochs=10, batch_size=32)
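For comparison, here is roughly the same toy network written with the functional Model API mentioned above; this is my own sketch of the alternative style, not code from the original post:
import numpy as np
from keras.layers import Dense, Input
from keras.models import Model

# same two-layer network, expressed with the functional API
inputs = Input(shape=(100,))
hidden = Dense(32, activation='relu')(inputs)
outputs = Dense(1, activation='sigmoid')(hidden)

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

# train on the same kind of dummy data as above
data = np.random.random((1000, 100))
labels = np.random.randint(2, size=(1000, 1))
model.fit(data, labels, epochs=10, batch_size=32)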
That's it, a model is ready! Even transfer learning is easier to code in Keras than in TensorFlow. TensorFlow is quite tough to code from scratch unless you are a seasoned coder.
Scratch Coding and flexibility:
As TensorFlow is a low-level library compared to Keras, many new functions can be implemented more directly in TensorFlow than in Keras, for example custom activation functions. Also, fine-tuning and tweaking the model is much more flexible in TensorFlow than in Keras because many more parameters are exposed.
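To make that concrete, here is a minimal TF 1.x-style sketch (my own illustration, matching the era of TensorFlow used above) of a custom activation built from low-level ops and applied to a layer:
import tensorflow as tf

def swish(x):
    # a custom activation composed from low-level ops
    return x * tf.nn.sigmoid(x)

inputs = tf.placeholder(tf.float32, shape=[None, 100])
weights = tf.Variable(tf.random_normal([100, 32], stddev=0.1))
bias = tf.Variable(tf.zeros([32]))
hidden = swish(tf.matmul(inputs, weights) + bias)  # used like any built-in activation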
Training time and processing power:
The above models were trained on the same dataset, and we see that Keras takes longer to train than TensorFlow. TensorFlow finished the training of 4000 steps in about 15 minutes, whereas Keras took around 2 hours for 50 epochs. Maybe we cannot compare steps with epochs directly, but in this case both gave a comparable test accuracy of 91%, and we can see that Keras trains a bit slower than TensorFlow. Apart from this, it makes sense given that TensorFlow is a lower-level library.
Extra features provided:
TensorFlow has an inbuilt debugger which can debug both during training and while generating the graphs.
(TensorFlow Debugger snapshot. Source: TensorFlow documentation)
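Hooking the debugger into a session is a one-line change in TF 1.x; here is a minimal sketch, wrapping the same kind of session used in the queue example below:
import tensorflow as tf
from tensorflow.python import debug as tf_debug

sess = tf.Session()
# wrap the session so every sess.run() call drops into the tfdbg command-line UI
sess = tf_debug.LocalCLIDebugWrapperSession(sess)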
TensorFlow even supports threads and queues to feed heavy tensors asynchronously! This gives TPUs better and much faster processing speeds. Sample code for threads is shown below:
# Create the graph, etc.
init_op = tf.global_variables_initializer()
# Create a session for running operations in the Graph.
sess = tf.Session()
# Initialize the variables (like the epoch counter).
sess.run(init_op)
# Start input enqueue threads.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
    while not coord.should_stop():
        # Run training steps or whatever
        sess.run(train_op)
except tf.errors.OutOfRangeError:
    print('Done training -- epoch limit reached')
finally:
    # When done, ask the threads to stop.
    coord.request_stop()
# Wait for threads to finish.
coord.join(threads)
sess.close()
Monitoring and Control:
In my experience with deep learning, I feel TensorFlow is highly suitable for many cases even though it is a little tough. For example, we can monitor everything very easily, such as inspecting and controlling the weights and gradients of the network, and we can choose which parts should be trained and which should not. This is not as feasible in Keras. The line below does that magic!
step = tf.Variable(1, trainable=False, dtype=tf.int32)
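As a small illustration of that control (my own sketch, not from the original post), the snippet below keeps a non-trainable step counter, inspects the gradients explicitly, and updates only one of the two weight matrices:
import tensorflow as tf

step = tf.Variable(1, trainable=False, dtype=tf.int32)  # tracked, but never updated by the optimizer itself

x = tf.placeholder(tf.float32, [None, 100])
y = tf.placeholder(tf.float32, [None, 1])

w1 = tf.Variable(tf.random_normal([100, 32], stddev=0.1), name="w1")
w2 = tf.Variable(tf.random_normal([32, 1], stddev=0.1), name="w2")
out = tf.matmul(tf.nn.relu(tf.matmul(x, w1)), w2)
loss = tf.reduce_mean(tf.square(out - y))

opt = tf.train.GradientDescentOptimizer(0.01)
# look at the gradients explicitly and train only w2; w1 stays frozen
grads_and_vars = opt.compute_gradients(loss, var_list=[w2])
train_op = opt.apply_gradients(grads_and_vars, global_step=step)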

Conclusion

Anyway, Keras is going to be integrated into TensorFlow shortly, so why go the purely Pythonic route (Keras is the more Pythonic of the two)? My suggestion is to spend some time and get used to TensorFlow. As for the classification problem above, if you have followed the blog and done the steps accordingly, you will feel that Keras is a little more painful and patience-testing than TensorFlow in many aspects. So, try using other classes and try training classifiers for applications like fake note detection, etc.
Hope this blog has given you better insight into what to use when!
I would love to hear any suggestions or queries. Please write to me at sameer.gadicherla@dataturks.com

Published by HackerNoon on 2020/01/23