How Artists Can Set Up Their Own Neural Network — Part 3 — Image Generation

Written by jackalope | Published 2018/12/07
Tech Story Tags: neural-networks | tutorial | artist | setup-neural-network | artist-neural-network


Alright, so we’ve installed Linux and the neural network. Now it’s time to actually run it!

First though I want to apologize for the delay in getting these last two parts of the tutorial series out. As I explained in my Skonk Works post, I’ve been learning so fast that it’s actually been kind of hard to catch time to digest and write any of it down.

For instance, this tutorial series began with teaching you how to install Ubuntu 16.04, but support for Ubuntu 16.04 is winding down and you really should install Ubuntu 18.04, which is what I did after wiping my desktop and turning it into a full-time personal cloud server! This is good because now I have a completely dedicated Linux machine to run neural network batch jobs on 24/7! The process of figuring out this new workflow has been chaotic, though, and as a result I only recently had enough time to make sure my work pipeline was stable and to go through the trouble of reinstalling DeepStyle. (I’ve also updated the original part 2 article with some tips on installing it on 18.04.)

The good news is, now I know a ton more about Linux! I’ve got some other irons in the fire that I need to tend to right now, but I’m hoping to do a sequel or update to the series using what I’ve learned to create an even easier and largely automated process for artists and other non-programmers to use.

For now though I want to finish this series out properly, so this week we’re going to talk about how to actually run DeepStyle and how that process can be turned into a script that generates a massive amount of imagery for you to use. At the end of the tutorial I’ll even include a link to a Bash shell script that you can just plug your own variables into!

To run neural style you need to open up your terminal.

[Screenshot: my Linux desktop looks way different now]

How to run a DeepStyle command

You then need to:

cd ~/neural-style

This will “change directory” to the folder “neural-style” where the neural network implementation DeepStyle has been installed.

To run the neural network you need to run a command. In the last tutorial I told you to use the following command to test to see if your install was working correctly:

th neural_style.lua -gpu 0 -print_iter 1

Let’s break this down piece by piece.

th

th is short for “Torch,” which is the neural network framework you installed. You are calling it as a program to do something.

neural_style.lua

neural_style.lua is the script created to run DeepStyle. It has the “.lua” file extension because it is written in the Lua language. Some computer programs have to be compiled (which you did in the last tutorial to get DeepStyle working), but some run at a higher level and you can just run them as text files. These “scripting languages” tend to be easier to work with.

So Torch is calling the neural_style.lua script. But you need to tell it what you specifically want it to do! This can be done with flags. Flags look like this:

-gpu 0

or this:

-print_iter 1

or this:

-this_aint_a_real_flag true

Basically, a flag is a dash followed by the name of the setting being used, then the value that setting should take. Pretty simple, right?
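If long one-line commands are hard for you to read, one trick is to store the flags in a Bash array so each name/value pair sits on its own line, then print the full command before running it. This is just a readability sketch using the two flags from the test command above:

```shell
# Each flag is a name (-gpu, -print_iter) followed by its value.
flags=(
  -gpu 0         # which GPU to use (0 means the first one)
  -print_iter 1  # print progress every iteration
)

# Preview the command that would be run (swap echo out to run it for real).
echo th neural_style.lua "${flags[@]}"
```

Printing the command first is a cheap way to catch a typo in a flag name before you tie up your GPU for an hour.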

My Recommended Command

Here’s a pretty standard command you can run:

th neural_style.lua -save_iter 400 -image_size 512 -style_image library/van-gogh.jpg -content_image library/picture-of-me.jpg -num_iterations 1200 -gpu 0 -output_image library/output/picture-of-me/van-gogh-picture-of-me.png

And now I’ll explain each of the flags.

-save_iter 400

This means that the neural network should save out a progress picture every 400 iterations. Neural networks that transform imagery in this way do so over many iterations. I recommend you read my very first neural network and art article (which is not a tutorial), which goes over some of the conceptual background of neural networks and links to some resources that might help things make sense. With the above setting I’m having it save a progress pic every 400 iterations, but you could save more often if you wanted to make an animation of the image morphing into the style, which can be pretty cool in my experience.

-image_size 512

The above is where I specify the output image size. I have a GTX 970 GPU in my desktop and 512 is about the biggest I can reliably output. You might be able to do more, but it will be hard even with a more powerful machine, because the cost doesn’t scale linearly with the image size but quadratically. In other words, double the size of the image and you quadruple the firepower needed, because you’re not just increasing the width but also the height.
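You can check that math right in the shell. A 1024-pixel square has four times as many pixels as a 512-pixel square, which is why one doubling of -image_size hits your GPU so hard:

```shell
# Pixel counts for a 512x512 image versus a 1024x1024 image.
small=$((512 * 512))    # 262,144 pixels
large=$((1024 * 1024))  # 1,048,576 pixels

# Doubling each side quadruples the total pixel count.
echo $((large / small))  # prints 4
```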

-style_image library/van-gogh.jpg

This flag tells DeepStyle which image to use as your style image reference. (Again, refer to the first article to get the terminology of style versus content image down.) In this case I have created a folder in the neural-style folder named library, and I have saved a van Gogh painting there as a JPEG.

-content_image library/picture-of-me.jpg

That’s setting the content image. I also put it in the library folder. I like to keep all my style, content, and output images in a separate folder from the neural network files so things don’t get super messy as I’m working. (Now that I have my desktop set up as a personal cloud server, I actually have it automatically output everything to a cloud folder, which syncs down to the laptop where I do the actual painting in Photoshop.)

-num_iterations 1200

This tells DeepStyle the total number of iterations I want it to run. Hypothetically, the more iterations you run, the more similar you can make the content image to the style image, but I find it’s basically diminishing returns past 1200.

-gpu 0

This tells DeepStyle to use the GPU. The 0 is the index of the first GPU; if you have more than one you can pick a different number, and -1 falls back to the (much slower) CPU.

-output_image library/output/picture-of-me/van-gogh-picture-of-me.png

The final piece! This tells DeepStyle where to output the image. In this case I’m pointing it at a folder inside library called “output,” then a folder named for the content image used, and then giving the file a name that contains both the name of the content image and the style image. I think it’s a good idea to do this because once you start batch processing this stuff you’re going to get A WHOLE LOTTA outputs and it’s going to be hard to keep them all organized.

So that’s the basic breakdown of the standard command. But the great thing about code is that it’s really good at doing repetitive stuff over and over again, so even though we could copy and paste this command again and again, changing the names of the images and outputs each time, we don’t have to: I made a script for it!

Get it Here

Automate it!

Here’s what that code looks like:

function neural_fusion() { th neural_style.lua -save_iter 400 -image_size 512 -style_image library/$1.jpg -content_image library/$content.jpg -num_iterations 1200 -gpu 0 -output_image library/output/$content/$content-$1.png; }
content=mantis
mkdir library/output/$content
neural_fusion zebrasmoothie
neural_fusion wolfhowl
neural_fusion wood

So here’s what that code means. First I make a function. A function is just a piece of code that stores a series of actions. It’s kind of like a magic spell I write down on a scroll so that I don’t have to remember all the complicated words every time I want to use it. This is the function:

function neural_fusion() { th neural_style.lua -save_iter 400 -image_size 512 -style_image library/$1.jpg -content_image library/$content.jpg -num_iterations 1200 -gpu 0 -output_image library/output/$content/$content-$1.png; }

You can see it contains the command with all the flags to run DeepStyle. It also has some funny things with dollar signs. Those are variables. You probably know variables from math class: X + Y = 250, and then you have to figure out what X is, right? X and Y are just stand-ins for other numbers, or in this case, for words.

content=mantis;

Here I’m setting the variable $content to equal mantis. As you can see, $content is used as the name of the content image. So I have an image in library called “mantis.jpg”.
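Here’s a toy version of the same idea, with the function and image names made up purely for illustration. $content is a variable we set once, and $1 is whatever word we pass to the function when we call it:

```shell
# A regular variable: set once, reused everywhere.
content=mantis

# $1 inside a function stands for the first word passed to it.
describe() { echo "style: $1, content: $content"; }

describe zebrasmoothie  # prints: style: zebrasmoothie, content: mantis
```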

mkdir library/output/$content;

Here, I’m making a new directory inside my output folder called “mantis” in order to put my mantis content image outputs. Keep things tidy!
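One small tip: if you add the -p flag, mkdir will also create any missing parent folders and won’t error out if the folder already exists, which is handy when you rerun the script for the same content image:

```shell
content=mantis

# -p creates library/ and library/output/ too if they don't exist yet,
# and exits quietly if library/output/mantis is already there.
mkdir -p "library/output/$content"
```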

neural_fusion zebrasmoothie
neural_fusion wolfhowl
neural_fusion wood

And here is where I’m actually running DeepStyle. Each of these calls the neural_fusion function I made and hands it a word to use as the style image name, which inside the function is represented as $1. I don’t have to declare a variable for this the way I did with content=mantis, because it’s going to vary between each run of the function.

And that’s basically it! You can just run a bunch of these and get a HUGE number of images.
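If your list of style images gets long, a for loop saves you from typing neural_fusion over and over. In this sketch I’ve put echo in front of the real th command so it only prints what it would run; delete the echo when you’re ready to commit your GPU for real:

```shell
content=mantis

# Dry-run version of neural_fusion: prints the command instead of running it.
neural_fusion() {
  echo th neural_style.lua -save_iter 400 -image_size 512 \
    -style_image "library/$1.jpg" -content_image "library/$content.jpg" \
    -num_iterations 1200 -gpu 0 \
    -output_image "library/output/$content/$content-$1.png"
}

# One DeepStyle run per style image in the list.
for style in zebrasmoothie wolfhowl wood; do
  neural_fusion "$style"
done
```

Previewing the commands this way also lets you eyeball the output paths before an overnight batch job writes hundreds of files to the wrong folder.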

Make them bigger!

You need to make those image resolutions bigger if you want to use them to paint, right? But… how could you make an image bigger? I mean you can make images bigger but then they get all pixelated…. right?

Well, that’s where neural network magic comes in again. We’re going to use a neural network called “Waifu2X”. Waifu is Japanese nerd slang for wife. But it’s a particular kind of wife. It’s… well, it’s an anime wife. It’s like… a wife you have who is animated. Basically it’s just the way a Japanese nerd says they really like a particular character… uh… look, I’m not going to explain it to you. You can look it up.

Point is! Some people like anime a lot. They like it so much they trained a neural network to MAKE BIGGER ANIME!

(also don’t look up “big anime” on google image search right after looking up waifu.)

That’s what Waifu2X is. It’s a neural network that was trained on thousands of frames of anime at 1080p and 4k resolutions, so you can give it a 1080p image and it will make it 4k. Or you can give it any size image and it will make it bigger. The smaller the starting picture, the less info it has to go off of, but it’s pretty impressive all things considered! I’m surprised Adobe hasn’t already tried packaging this into Photoshop. Probably because this neural network has the potential to completely destroy the stock photo industry, which they’ve been trying to get into, but… you know…

Anyway, I’m not going to go into as much detail on this one because it’s easy to install on Windows and it uses a GUI, so most of it is pretty self-explanatory.

Basically you just gotta download from here.

Unzip it. And then you just open up the folder and click on the .exe file and the GUI will open up:

Select your input path, select the output path, and set the magnification size: 2.00 means the output will be twice as big, 4.00 means four times as big. Pretty basic stuff. Denoising can be helpful, though it can also wash out a lot of the texture the style image imparts and lead to a sort of uniform “plasticky” look. You don’t really need to mess with the rest of the settings. To batch-process a big set of images, just select a folder as the input path. It’s all pretty simple.

NEXT WEEK IS THE FINAL PART!

(but not the final article on this subject. I’ve got lots more coming, so subscribe to my RSS, newsletter, or Facebook page. And as always, if you have any questions or comments, please be sure to reach out. Getting “fanmail” after my absence was one of the things that encouraged me to keep it up and come back to finish the series out. I didn’t want to let y’all down!)

Originally published at Jackalope.tech.


Written by jackalope | design technologist
Published by HackerNoon on 2018/12/07