Deep Dream with TensorFlow: A Practical Guide to Building Your First Deep Dream Experience

Written by naveenmanwani | Published 2018/12/27
Tech Story Tags: machine-learning | tensorflow | deepdream | artificial-intelligence | deep-dream-experience


“Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.” — Albert Einstein

Whenever machine learning engineers and deep learning professionals gather at a meetup or conference, the applications they discuss most often range from object detection, face recognition, and natural language processing to speech recognition, mainly thanks to self-driving cars, Amazon Alexa, and chatbots. But there are other applications, quite different from these standard ones, that are creating an enormous amount of buzz not only in the field of Artificial Intelligence but in the field of art too.

One such application, which has empowered artists who in turn are augmenting our creative affordances and expanding the space of what we can imagine, is “Deep Dream”.

Deep Dream is a computer vision program created by Google engineer Alexander Mordvintsev. It uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, creating a dream-like, hallucinogenic appearance in the deliberately over-processed images.

Google’s program popularized the term (Deep) “Dreaming” to refer to the generation of images that produce desired activations in a trained deep network, and the term now refers to a collection of related approaches.
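To make that concrete before we dive into the full pipeline: stripped of all engineering, Deep Dream is just gradient ascent on the input image rather than on the network weights. Below is a minimal, self-contained sketch of that core loop in the TF 1.x style used throughout this guide. The tiny random convolution is only a stand-in so the snippet runs on its own; the real program later in this article uses a layer from the trained Inception 5h network instead.

# Minimal sketch of the Deep Dream idea. The random convolution is a
# stand-in for a trained layer, just to make this example runnable.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 128, 128, 3])
w = tf.constant(np.random.randn(5, 5, 3, 8).astype(np.float32))
layer = tf.nn.relu(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME'))

loss = tf.reduce_mean(layer)        # the activation we push up
grad = tf.gradients(loss, x)[0]     # gradient w.r.t. the image, not the weights

img = np.random.uniform(100, 155, size=(1, 128, 128, 3)).astype(np.float32)
with tf.Session() as sess:
    for _ in range(20):
        g = sess.run(grad, feed_dict={x: img})
        g /= (g.std() + 1e-8)       # normalize for a stable step size
        img += 1.5 * g              # gradient-ascent step on the image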

From Theory to Practice

Here comes my favorite part. After educating yourself about Google Deep Dream, it's time to switch from reader mode to coder mode, because from this point onward I'll talk only about the code, which is just as important as knowing the concepts behind any deep learning application.

Follow this step-by-step practical guide to create your first Deep Dream experience. But before starting this coding journey with me, have a look at my Deep Dream images, which are heavily psychedelic.

Disclaimer: Before starting this coding tutorial, make sure you have two Python files, namely download.py and inception5h.py, in one folder. You can get them from my GitHub repository mentioned in the resource section; otherwise you'll find yourself entangled in a “No module found” error, which is certainly a pain.

So, let's get started.

# This was developed using Python 3.6.3 (Anaconda)
# Important libraries to import

%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import random
import math

# Image manipulation.
from PIL import Image
from scipy.ndimage.filters import gaussian_filter
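A small portability note: on recent SciPy versions the scipy.ndimage.filters module is deprecated. If the import above fails or warns, the same function is available at the package level:

# On newer SciPy versions, import gaussian_filter from the package root.
from scipy.ndimage import gaussian_filter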

Inception Model

The Inception 5h model is used because it is easier to work with: it takes input images of any size, and it seems to create prettier pictures than the Inception v3 model.

import inception5h

Download the data for the Inception model. It is 50 MB in size.

inception5h.maybe_download()

Downloading Inception 5h Model ...
Data has apparently already been downloaded and unpacked.

Load the Inception model so it is ready to be used.

model = inception5h.Inception5h()

The Inception 5h model has many layers that can be used for Deep Dreaming, but we will only be using the 12 most commonly used layers, for easy reference.

len(model.layer_tensors)

To list the different layers in the Inception 5h model:

def printTensors(pb_file):

    # read pb into graph_def
    with tf.gfile.GFile(pb_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # import graph_def
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)

    # print operations
    for op in graph.get_operations():
        print(op.name)

printTensors("inception/5h/tensorflow_inception_graph.pb")
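printTensors() dumps every operation in the graph, which is a long list. If you only want the 12 layers this guide uses, you can iterate over the list directly (this assumes, as the inception5h helper suggests, that layer_tensors is a plain Python list of tf.Tensor objects):

# Print the index, name and shape of each of the 12 layer tensors.
for i, tensor in enumerate(model.layer_tensors):
    print(i, tensor.name, tensor.shape)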

Helper-functions for image manipulation

This function loads an image and returns it as a numpy array of floating-point values.

def load_image(filename):
    try:
        original = Image.open(filename)
        print("The size of the image is:")
        print(original.format, original.size)
    except IOError:
        print("Unable to load image")
        raise  # re-raise so we don't return an undefined variable below

    return np.float32(original)

Save an image as a jpeg-file. The image is given as a numpy array with pixel-values between 0 and 255.

def save_image(image, filename):
    # Ensure the pixel-values are between 0 and 255.
    image = np.clip(image, 0.0, 255.0)

    # Convert to bytes.
    image = image.astype(np.uint8)

    # Write the image-file in jpeg-format.
    with open(filename, 'wb') as file:
        Image.fromarray(image).save(file, 'jpeg')

This function plots an image. Using matplotlib gives low-resolution images. Using PIL gives pretty pictures.

def plot_image(image):
    # Assume the pixel-values are scaled between 0 and 255.

    if False:
        # Convert the pixel-values to the range between 0.0 and 1.0
        image = np.clip(image/255.0, 0.0, 1.0)

        # Plot using matplotlib.
        plt.imshow(image, interpolation='lanczos')
        plt.show()
    else:
        # Ensure the pixel-values are between 0 and 255.
        image = np.clip(image, 0.0, 255.0)

        # Convert pixels to bytes.
        image = image.astype(np.uint8)

        # Convert to a PIL-image and display it.
        display(Image.fromarray(image))

Normalize an image so its values are between 0.0 and 1.0. This is useful for plotting the gradient.

def normalize_image(x):
    # Get the min and max values for all pixels in the input.
    x_min = x.min()
    x_max = x.max()

    # Normalize so all values are between 0.0 and 1.0
    x_norm = (x - x_min) / (x_max - x_min)

    return x_norm

This function plots the gradient after normalizing it:

def plot_gradient(gradient):
    # Normalize the gradient so it is between 0.0 and 1.0
    gradient_normalized = normalize_image(gradient)

    # Plot the normalized gradient.
    plt.imshow(gradient_normalized, interpolation='bilinear')
    plt.show()

This function resizes an image. It can take a size-argument where you give it the exact pixel-size you want the image to be e.g. (100, 200). Or it can take a factor-argument where you give it the rescaling-factor you want to use e.g. 0.5 for halving the size of the image in each dimension.

This is implemented using PIL which is a bit lengthy because we are working on numpy arrays where the pixels are floating-point values. This is not supported by PIL so the image must be converted to 8-bit bytes while ensuring the pixel-values are within the proper limits. Then the image is resized and converted back to floating-point values.

def resize_image(image, size=None, factor=None):
    # If a rescaling-factor is provided then use it.
    if factor is not None:
        # Scale the numpy array's shape for height and width.
        size = np.array(image.shape[0:2]) * factor

        # The size is floating-point because it was scaled.
        # PIL requires the size to be integers.
        size = size.astype(int)
    else:
        # Ensure the size has length 2.
        size = size[0:2]

    # The height and width is reversed in numpy vs. PIL.
    size = tuple(reversed(size))

    # Ensure the pixel-values are between 0 and 255.
    img = np.clip(image, 0.0, 255.0)

    # Convert the pixels to 8-bit bytes.
    img = img.astype(np.uint8)

    # Create PIL-object from numpy array.
    img = Image.fromarray(img)

    # Resize the image.
    img_resized = img.resize(size, Image.LANCZOS)

    # Convert 8-bit pixel values back to floating-point.
    img_resized = np.float32(img_resized)

    return img_resized

DeepDream Algorithm

Gradient

The following helper-functions calculate the gradient of an input image for use in the DeepDream algorithm. The Inception 5h model can accept images of any size, but very large images may use many gigabytes of RAM. In order to keep RAM usage low we split the input image into smaller tiles and calculate the gradient for each tile.

However, this may result in visible lines in the final images produced by the DeepDream algorithm. We therefore choose the tiles randomly so the locations of the tiles are always different. This makes the seams between the tiles invisible in the final DeepDream image.

This is a helper-function for determining an appropriate tile-size. The desired tile-size is e.g. 400x400 pixels, but the actual tile-size will depend on the image-dimensions.

def get_tile_size(num_pixels, tile_size=400):
    """
    num_pixels is the number of pixels in a dimension of the image.
    tile_size is the desired tile-size.
    """

    # How many times can we repeat a tile of the desired size.
    num_tiles = int(round(num_pixels / tile_size))

    # Ensure that there is at least 1 tile.
    num_tiles = max(1, num_tiles)

    # The actual tile-size.
    actual_tile_size = math.ceil(num_pixels / num_tiles)

    return actual_tile_size
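A quick worked example to make the rounding concrete, using the 1040-pixel height of the demo image loaded later: 1040 / 400 rounds to 3 tiles, and each tile then becomes ceil(1040 / 3) = 347 pixels.

# num_tiles = int(round(1040 / 400)) = 3
# actual_tile_size = math.ceil(1040 / 3) = 347
get_tile_size(num_pixels=1040, tile_size=400)   # returns 347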

This helper-function computes the gradient for an input image. The image is split into tiles and the gradient is calculated for each tile. The tiles are chosen randomly to avoid visible seams / lines in the final DeepDream image.

def tiled_gradient(gradient, image, tile_size=400):
    # Allocate an array for the gradient of the entire image.
    grad = np.zeros_like(image)

    # Number of pixels for the x- and y-axes.
    x_max, y_max, _ = image.shape

    # Tile-size for the x-axis.
    x_tile_size = get_tile_size(num_pixels=x_max, tile_size=tile_size)
    # 1/4 of the tile-size.
    x_tile_size4 = x_tile_size // 4

    # Tile-size for the y-axis.
    y_tile_size = get_tile_size(num_pixels=y_max, tile_size=tile_size)
    # 1/4 of the tile-size
    y_tile_size4 = y_tile_size // 4

    # Random start-position for the tiles on the x-axis.
    # The random value is between -3/4 and -1/4 of the tile-size.
    # This is so the border-tiles are at least 1/4 of the tile-size,
    # otherwise the tiles may be too small which creates noisy gradients.
    x_start = random.randint(-3*x_tile_size4, -x_tile_size4)

    while x_start < x_max:
        # End-position for the current tile.
        x_end = x_start + x_tile_size

        # Ensure the tile's start- and end-positions are valid.
        x_start_lim = max(x_start, 0)
        x_end_lim = min(x_end, x_max)

        # Random start-position for the tiles on the y-axis.
        # The random value is between -3/4 and -1/4 of the tile-size.
        y_start = random.randint(-3*y_tile_size4, -y_tile_size4)

        while y_start < y_max:
            # End-position for the current tile.
            y_end = y_start + y_tile_size

            # Ensure the tile's start- and end-positions are valid.
            y_start_lim = max(y_start, 0)
            y_end_lim = min(y_end, y_max)

            # Get the image-tile.
            img_tile = image[x_start_lim:x_end_lim,
                             y_start_lim:y_end_lim, :]

            # Create a feed-dict with the image-tile.
            feed_dict = model.create_feed_dict(image=img_tile)

            # Use TensorFlow to calculate the gradient-value.
            g = session.run(gradient, feed_dict=feed_dict)

            # Normalize the gradient for the tile. This is
            # necessary because the tiles may have very different
            # values. Normalizing gives a more coherent gradient.
            g /= (np.std(g) + 1e-8)

            # Store the tile's gradient at the appropriate location.
            grad[x_start_lim:x_end_lim,
                 y_start_lim:y_end_lim, :] = g

            # Advance the start-position for the y-axis.
            y_start = y_end

        # Advance the start-position for the x-axis.
        x_start = x_end

    return grad

Optimize Image

This function is the main optimization-loop for the DeepDream algorithm. It calculates the gradient of the given layer of the Inception model with regard to the input image. The gradient is then added to the input image so the mean value of the layer-tensor is increased. This process is repeated a number of times and amplifies whatever patterns the Inception model sees in the input image.

def optimize_image(layer_tensor, image,
                   num_iterations=10, step_size=3.0, tile_size=400,
                   show_gradient=False):
    """
    Use gradient ascent to optimize an image so it maximizes the
    mean value of the given layer_tensor.

    Parameters:
    layer_tensor: Reference to a tensor that will be maximized.
    image: Input image used as the starting point.
    num_iterations: Number of optimization iterations to perform.
    step_size: Scale for each step of the gradient ascent.
    tile_size: Size of the tiles when calculating the gradient.
    show_gradient: Plot the gradient in each iteration.
    """

    # Copy the image so we don't overwrite the original image.
    img = image.copy()

    print("Image before:")
    plot_image(img)

    print("Processing image: ", end="")

    # Use TensorFlow to get the mathematical function for the
    # gradient of the given layer-tensor with regard to the
    # input image. This may cause TensorFlow to add the same
    # math-expressions to the graph each time this function is called.
    # It may use a lot of RAM and could be moved outside the function.
    gradient = model.get_gradient(layer_tensor)

    for i in range(num_iterations):
        # Calculate the value of the gradient.
        # This tells us how to change the image so as to
        # maximize the mean of the given layer-tensor.
        grad = tiled_gradient(gradient=gradient, image=img)

        # Blur the gradient with different amounts and add
        # them together. The blur amount is also increased
        # during the optimization. This was found to give
        # nice, smooth images. You can try and change the formulas.
        # The blur-amount is called sigma (0=no blur, 1=low blur, etc.)
        # We could call gaussian_filter(grad, sigma=(sigma, sigma, 0.0))
        # which would not blur the colour-channel. This tends to
        # give psychedelic / pastel colours in the resulting images.
        # When the colour-channel is also blurred the colours of the
        # input image are mostly retained in the output image.
        sigma = (i * 4.0) / num_iterations + 0.5
        grad_smooth1 = gaussian_filter(grad, sigma=sigma)
        grad_smooth2 = gaussian_filter(grad, sigma=sigma*2)
        grad_smooth3 = gaussian_filter(grad, sigma=sigma*0.5)
        grad = (grad_smooth1 + grad_smooth2 + grad_smooth3)

        # Scale the step-size according to the gradient-values.
        # This may not be necessary because the tiled-gradient
        # is already normalized.
        step_size_scaled = step_size / (np.std(grad) + 1e-8)

        # Update the image by following the gradient.
        img += grad * step_size_scaled

        if show_gradient:
            # Print statistics for the gradient.
            msg = "Gradient min: {0:>9.6f}, max: {1:>9.6f}, stepsize: {2:>9.2f}"
            print(msg.format(grad.min(), grad.max(), step_size_scaled))

            # Plot the gradient.
            plot_gradient(grad)
        else:
            # Otherwise show a little progress-indicator.
            print(". ", end="")

    print()
    print("Image after:")
    plot_image(img)

    return img

Recursive Image Optimization

The Inception model was trained on fairly small images. The exact size is unclear but maybe 200–300 pixels in each dimension. If we use larger images such as 1920x1080 pixels then the optimize_image() function above will add many small patterns to the image.

This helper-function downscales the input image several times and runs each downscaled version through the optimize_image() function above. This results in larger patterns in the final image. It also speeds up the computation.

def recursive_optimize(layer_tensor, image,
                       num_repeats=4, rescale_factor=0.7, blend=0.2,
                       num_iterations=10, step_size=3.0,
                       tile_size=400):
    """
    Recursively blur and downscale the input image.
    Each downscaled image is run through the optimize_image()
    function to amplify the patterns that the Inception model sees.

    Parameters:
    image: Input image used as the starting point.
    rescale_factor: Downscaling factor for the image.
    num_repeats: Number of times to downscale the image.
    blend: Factor for blending the original and processed images.

    Parameters passed to optimize_image():
    layer_tensor: Reference to a tensor that will be maximized.
    num_iterations: Number of optimization iterations to perform.
    step_size: Scale for each step of the gradient ascent.
    tile_size: Size of the tiles when calculating the gradient.
    """

    # Do a recursive step?
    if num_repeats > 0:
        # Blur the input image to prevent artifacts when downscaling.
        # The blur amount is controlled by sigma. Note that the
        # colour-channel is not blurred as it would make the image gray.
        sigma = 0.5
        img_blur = gaussian_filter(image, sigma=(sigma, sigma, 0.0))

        # Downscale the image.
        img_downscaled = resize_image(image=img_blur,
                                      factor=rescale_factor)

        # Recursive call to this function.
        # Subtract one from num_repeats and use the downscaled image.
        img_result = recursive_optimize(layer_tensor=layer_tensor,
                                        image=img_downscaled,
                                        num_repeats=num_repeats-1,
                                        rescale_factor=rescale_factor,
                                        blend=blend,
                                        num_iterations=num_iterations,
                                        step_size=step_size,
                                        tile_size=tile_size)

        # Upscale the resulting image back to its original size.
        img_upscaled = resize_image(image=img_result, size=image.shape)

        # Blend the original and processed images.
        image = blend * image + (1.0 - blend) * img_upscaled

    print("Recursive level:", num_repeats)

    # Process the image using the DeepDream algorithm.
    img_result = optimize_image(layer_tensor=layer_tensor,
                                image=image,
                                num_iterations=num_iterations,
                                step_size=step_size,
                                tile_size=tile_size)

    return img_result

TensorFlow Session

We need a TensorFlow session to execute the graph. This is an interactive session so we can continue adding gradient functions to the computational graph.

session = tf.InteractiveSession(graph=model.graph)
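This is TensorFlow 1.x-style code. If you are on TensorFlow 2.x, where graphs and sessions are no longer the default, one possible workaround (untested against this notebook) is to run everything in 1.x compatibility mode:

# TensorFlow 2.x compatibility shim (replaces the plain `import tensorflow as tf`).
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

session = tf.InteractiveSession(graph=model.graph)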

It's time to run the algorithm.

# Load the image which you want to process.
image = load_image(filename='test_output/test_output_11.jpg')
plot_image(image)

# The size of the image is:
# JPEG (780, 1040)

Image 2: That's me a few years back.

First we need a reference to the tensor inside the Inception model that we will maximize in the DeepDream optimization algorithm. In this case we select the entire 3rd layer of the Inception model (layer index 2). It has 192 channels and we will try to maximize the average value across all these channels.

layer_tensor = model.layer_tensors[2]
layer_tensor

# <tf.Tensor 'conv2d2:0' shape=(?, ?, ?, 192) dtype=float32>

Now apply the Deep Dream algorithm recursively:

img_result = recursive_optimize(layer_tensor=layer_tensor, image=image,
                                num_iterations=10, step_size=3.0,
                                rescale_factor=0.7, num_repeats=4, blend=0.2)

Image 3: After applying Deep Dream to my image

Now we will maximize a higher layer in the Inception model. In this case it is layer number 7 (index 6). This layer recognizes more complex shapes in the input image and the DeepDream algorithm will therefore produce a more complex image. This layer appears to be recognizing dog-faces and fur which the DeepDream algorithm has therefore added to the image.

layer_tensor = model.layer_tensors[6]
img_result = recursive_optimize(layer_tensor=layer_tensor, image=image,
                                num_iterations=10, step_size=3.0,
                                rescale_factor=0.7, num_repeats=4, blend=0.2)

Image 4: After applying Deep Dream Algorithm

This is an example of maximizing only a subset of a layer’s feature-channels using the DeepDream algorithm. In this case it is the layer with index 10 and only its first 3 feature-channels that are maximized.

layer_tensor = model.layer_tensors[10][:,:,:,0:3]
img_result = recursive_optimize(layer_tensor=layer_tensor, image=image,
                                num_iterations=10, step_size=3.0,
                                rescale_factor=0.7, num_repeats=4, blend=0.2)

Image 5: After applying Deep Dream Algorithm

layer_tensor = model.layer_tensors[4]
img_result = recursive_optimize(layer_tensor=layer_tensor, image=image,
                                num_iterations=10, step_size=3.0,
                                rescale_factor=0.7, num_repeats=4, blend=0.2)

Image 6: After applying Deep Dream Algorithm

# To save the final output.
# save_image() writes the file and returns nothing, so no assignment is needed.
save_image(img_result, "test_output/test_output_12.jpg")

If this is not enough, I have uploaded a video on YouTube which will further extend your psychedelic experience.

Conclusion: Well, that's it. This article showed you how, using TensorFlow and a couple of concepts, you too can create Deep Dream experiences of your own.

Special Mention: This article would have been impossible without the direction given by Magnus Erik Hvass Pedersen through his famous TensorFlow tutorials; the GitHub repository can be found here.

Resources:

  1. For the GitHub repository, click here.
  2. To deepen your understanding of Deep Dream, do go through the Google Research blog post.

Thank you for your attention.

That you used your time to read my work means the world to me. I fully mean that.

If you liked this story, go crazy with the applaud (👏) button! It will help other people find my work.

Also, follow me on Medium, LinkedIn or Twitter if you want to! I would love that.


