A Better Way to Develop Node.js with Docker

Written by patrickleet | Published 2019/01/18
Tech Story Tags: docker | nodejs | docker-compose | software-development | nodejs-and-docker | latest-tech-stories | dockerfile | nodemodules

TLDR: In many tutorials, the first thing introduced is the Dockerfile. But a Dockerfile is a way to package your application, and for development, you really shouldn’t package your application. Development and production are not the same environment. If your Dockerfile contains an npm install, you’ve gone too far. This article shows a better way: docker-compose plus the official node image, with hot code reloading intact.

And Keep Your Hot Code Reloading

I’ve seen a lot of articles lately suggesting how to use Docker for development. I haven’t seen one yet that does it correctly.
Obviously, “correctly” is subjective, but I’d like to compare the typical wisdom with how I usually approach the problem.

The Conventional Wisdom

In many tutorials, the first thing introduced is the Dockerfile.
At the foundation of any Dockerized application, you will find a Dockerfile — https://blog.codeship.com/using-docker-compose-for-nodejs-development/
Apparently, it is the foundation.
The first several results on Google all suggest the first thing you need is a Dockerfile, as well.
After all, how can you have a Docker environment without creating a Dockerfile?
I’m here to tell you that while this is true for production, it’s the wrong approach for development. You do not need to create your own.
A Dockerfile is a way to package your application. You don’t need to package your application for development, and honestly, you really shouldn’t.
Development and production are not the same environment.
When you develop on your MacBook you install different tools than you use running in production. Just because “it runs the same way everywhere” doesn’t mean it should.
Your app runs differently in development.
Packaging it at a time when it is meant to be flexible and malleable is why many engineers have come to the conclusion that Docker isn’t for development.
You lose the flexibility of development by needing to build new containers when, for example, your dependencies change.
Sure, you could exec into the container and perform some commands, install some libs, but is it really less effort at this point?
Now, some of the above articles got this more right than others, but if you’re using a Dockerfile for development, you’ve probably already gone too far. There are situations where you will want one, but probably not in the manner you think.
Hint: If your Dockerfile contains an npm install, you’ve gone too far.
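To make that hint concrete, here is a sketch of the kind of development Dockerfile many tutorials suggest, and which this article argues against (a hypothetical example, not taken from any specific tutorial):

FROM node:11
WORKDIR /usr/src/service
COPY package*.json ./
# Baking the install into the image means rebuilding
# the image every time dependencies change.
RUN npm install
COPY . .
EXPOSE 1234
CMD ["npm", "run", "dev"]

Every dependency change now means an image rebuild, which is exactly the loss of flexibility described above.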

The docker-compose builder pattern

Let’s talk about what Docker is for a moment.
Docker is a way to package your code. This is the typical context for using Docker.
Docker is also a way to create an isolated environment which is capable of executing certain types of applications.
Docker allows you to package environments that are capable of running your code.
When you use Docker for production you are using the most specialized Docker containers you can make. They are customized and specifically built for your application, packaged in the way you built it. For this purpose, creating a Dockerfile makes sense.
When you set up your computer for development, that’s not what you do. You instead install the tools that you need for development. You just need to create an environment which your code can run in.
This means you can use a more generalized Dockerfile. Usually, these generalized Dockerfiles you need for development already exist.
For example, when developing a Node.js application, you need node installed on your machine. That’s it.
You don’t need Alpine Linux. You don’t need to package your node_modules into an immutable build. You don’t need little containers to exec into to make significant changes. You just need to be able to execute node and npm.
Therefore, in a container, that’s all you need as well, meaning the official node image on Docker Hub will do just fine.
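You don’t have to take this on faith; assuming Docker is installed, you can verify that the official image already ships with both tools:

docker run --rm node:11 node --version
docker run --rm node:11 npm --version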
Without further ado, my approach to development with Docker.
In my last article I showed how to use Parcel for development and production. Let’s keep that rolling, and build on top of that.
I think it’s a good example because Hot Module Reloading is essential for developing React apps efficiently.

Step One

First, we need a docker-compose file. In it, we need our development environment. Seeing as we are making a Node app, the official node image is probably a safe bet.
Let’s add a file docker-compose.yml:
version: '3'
services:
  dev:
    image: node:11
Next, we need our code to be in the environment, but we don’t want it to be baked into the image. If we are using this for development, when our files change, the files in the container also need to change.
To accomplish this we can use a volume. We will mount our current directory (.) to /usr/src/service in the container. We will also need to tell Docker where our “working directory” is, meaning: what directory did we put the code in?
version: '3'
services:
  dev:
    image: node:11
    volumes:
      - .:/usr/src/service
    working_dir: /usr/src/service
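If you want to sanity-check the mount at this point, you can run a one-off command against the dev service; you should see your local project files listed from inside the container (ls is just an illustrative command here):

docker-compose run --rm dev ls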
Now, every time we make a change on our local machine, the same file changes will be reflected in /usr/src/service.
Next, we need to execute the command npm run dev. This is easily accomplished with the command directive. We also want to access the app locally on port 1234.
Finally, hot module reloading with Parcel happens on a random port by default, which won’t work for us, as we need to map the HMR port as well.
Modify the dev script in package.json to include the option --hmr-port=1235:
"dev": "npm run generate-imported-components && parcel app/index.html --hmr-port 1235",
And with that in place, let’s update the docker-compose file to map the ports on our local machine to the same ports on our container.
version: '3'
services:
  dev:
    image: node:11
    volumes:
      - .:/usr/src/service
    working_dir: /usr/src/service
    command: npm run dev
    ports:
      - 1234:1234
      - 1235:1235
If you’ve done enough Node development, you’ll notice we have a problem. You can’t just run a node app without installing dependencies.
Also, you can’t just install your node modules locally on Mac or Windows and expect them to work in the Linux container.
In some cases, libraries compile natively during the install, and the resulting artifact only works on the operating system it was built on!
As a first attempt, you may be tempted to just chain npm install and npm run dev in a single command, and sure enough that would work, but it’s not quite what we want. It would require running an install every time we started development mode with the container.
Also, some services, beyond needing an install, might also need a build step. In our case this isn’t needed for developing the client, because Parcel or nodemon handle it, but not all apps were built in the past week with the latest tech.
For educational purposes, the way to chain commands is using bash or ash to execute the command. If you try
command: npm install && npm run dev
you will learn that it doesn’t work, because the command is not run through a shell, so the && is never interpreted. Instead, you could use:
command: bash -c "npm install && npm run dev"
This would in fact work, but it is not the optimal solution we are looking for.
Which brings us to Step Two.

Step Two

Let’s create another docker-compose file, this time named docker-compose.builder.yml.
We will need to use version: 2 this time to make use of a feature in docker-compose that isn’t available in the version 3 specification. Version 3 is more suited towards use in production than version 2, which has more development-friendly features.
UPDATE: V3 also supports this now in a slightly different syntax — would love a PR to get it updated :) Here’s the docs: https://docs.docker.com/compose/compose-file/#extension-fields
The first thing we want to define in docker-compose.builder.yml is a base image.
version: '2'
services:
  base:
    image: node:11
    volumes:
      - .:/usr/src/service
    working_dir: /usr/src/service
This should look pretty familiar. It’s the same base we use in our docker-compose.yml file.
Now, we can extend the base to execute a whole bunch of different commands. For example:
version: '2'
services:
  base:
    image: node:11
    volumes:
      - .:/usr/src/service/
    working_dir: /usr/src/service/
  install:
    extends:
      service: base
    command: npm i
  build:
    extends:
      service: base
    command: npm run build
  create-bundles:
    extends:
      service: base
    command: npm run create-bundles
Now, to install dependencies using a node:11 image which matches our development service in docker-compose.yml, we can run:
docker-compose -f docker-compose.builder.yml run --rm install
This installs the versions of any native binaries for the container’s Linux environment rather than for your host OS.
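The other builder services work the same way. For example, assuming your package.json actually defines a build script, you can run it in the same throwaway environment:

docker-compose -f docker-compose.builder.yml run --rm build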
Pro Tip: Admittedly, docker-compose -f docker-compose.builder.yml run --rm install doesn’t really “roll off the tongue”, does it? I usually put this in a Makefile so I can just run make install, etc.
After running the install, docker-compose up will bring up our development environment, which works exactly the same as it would on your local machine.
➜  docker-compose up
Creating stream-all-the-things_dev_1 ... done
Attaching to stream-all-the-things_dev_1
dev_1  |
dev_1  | > stream-all-the-things@1.0.0 dev /usr/src/service
dev_1  | > npm run generate-imported-components && parcel app/index.html
dev_1  |
dev_1  |
dev_1  | > stream-all-the-things@1.0.0 generate-imported-components /usr/src/service
dev_1  | > imported-components app app/imported.js
dev_1  |
dev_1  | scanning app for imports...
dev_1  | 1 imports found, saving to app/imported.js
dev_1  | Server running at http://localhost:1234
And when we make a change, hot code reloading works as expected!
All with no Dockerfile!

Bonus

I just wanted to quickly add an example Makefile that will make the commands easier to remember and use. Create a file called Makefile:
install:
  docker-compose -f docker-compose.builder.yml run --rm install
dev:
  docker-compose up
Makefiles use tabs. Makefiles will not work with spaces. 😢 👋 😬
Now you can run make install and make dev.
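One optional refinement, my own addition rather than part of the original Makefile: since install and dev name commands rather than files that Make produces, you can mark them as phony so Make never skips them because a file with the same name happens to exist:

.PHONY: install dev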

The End?

Not quite. It appears I’ve caused some confusion by sharing a local volume between two containers. Many were quick to point out that you can use a volume and suggested something like the following:
version: '3'
services:
  dev:
    image: node:11
    volumes:
      - .:/usr/src/service  
      - /usr/src/service/node_modules
    working_dir: /usr/src/service
    command: npm run dev
    ports:
      - 1234:1234
      - 1235:1235
This will allow node_modules within the container to live on its own, isolated completely from local.
While sound in theory, it will break the process we’ve just defined for sharing node_modules between the builder and the running container.
Not doing it, on the other hand, causes problems if you are moving between local and Docker development, as node_modules would need to be deleted between each switch.
A happy medium is to use an “external volume” instead of the local volume. First, let’s update our Makefile to take care of that as well, with a setup target that simply calls the docker volume create command.
setup:
  docker volume create nodemodules
(Again, tabs not spaces)
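If you ever want to confirm that the volume exists, or wipe it to start fresh, the docker volume CLI covers both (rm will fail while a container is still using the volume):

docker volume inspect nodemodules
docker volume rm nodemodules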
With the volume created, we can now reference it from the bottom of each of our two docker-compose files. Add the following to the docker-compose files:
docker-compose.yml
version: '3'
services:
  dev:
    image: node:11
    volumes:
      - nodemodules:/usr/src/service/node_modules
      - .:/usr/src/service
    environment:
      - NODE_ENV=development
    working_dir: /usr/src/service
    command: npm run dev
    ports:
      - 1234:1234
      - 1235:1235
volumes:
  nodemodules:
    external: true
docker-compose.builder.yml
# ...
  base:
    image: node:11
    volumes:
      - nodemodules:/usr/src/service/node_modules
      - .:/usr/src/service/
    working_dir: /usr/src/service/
volumes:
  nodemodules:
    external: true
This changes our startup process slightly as well, as on the first run we need to make sure the volume exists with make setup.
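Putting it all together, the Makefile might now look like this (a sketch; having install and dev depend on setup is my own convenience, possible because docker volume create is idempotent, and was not part of the original):

setup:
  docker volume create nodemodules

install: setup
  docker-compose -f docker-compose.builder.yml run --rm install

dev: setup
  docker-compose up

(As before: real tabs, not spaces.)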

Conclusion

You don’t always need to make a Dockerfile to use Docker! Oftentimes, for development, you can just use someone else’s!
I hope I’ve been able to show you an easy way to get up and running quickly with Docker, and docker-compose for development.
To learn about how to create a multi-stage build for production, in CI pipelines, or how to use docker-compose to run staging tests, check out my article: I have a confession to make… I commit to master.
In the next article, I show you how to enforce code quality using Linting, Formatting, and Unit Testing with Code Coverage, a critical step before we finish up with a production ready multi-stage Dockerfile to package our code.
Check out the other articles in this series! This was Part 2.
