When I started using Docker, I used to deploy the source code of my application statically into Docker images. This is a good and recommended approach for production. But while building images for the testing team, I realized the build process had to be repeated far too often, which is not very optimal: to pick up a new commit, a different branch or the latest master checkout, you need to rebuild the image.
In this article I will show how we can deploy our code dynamically into images and allow frequent and easy updates to the source code. Though in our case we were deploying PHP, NodeJS and Java code, for the sake of simplicity I will use a simple NodeJS application in this article.
You can get the details about the sample application here. The sample app is also available at tarunlalwani/docker-nodejs-sample-app, as we will be deploying the code using git.
I will show both the static and the dynamic version, and what changes we made to move from static to dynamic deployments.
1 - Static Code Deployments
We have a basic Dockerfile for this:
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
RUN git clone https://github.com/tarunlalwani/docker-nodejs-sample-app.git /usr/src/app
RUN npm install
EXPOSE 8080
CMD [ "npm", "start" ]
Next we build and run the image
$ docker build -t staticbuild .
$ docker run -d --name staticbuild -p 8080:8080 staticbuild
$ curl http://localhost:8080/
Hello world
$ docker stop staticbuild && docker rm staticbuild
As we can see, the image runs fine and gives the “Hello world” output.
Advantages of Static code
- Immutable Images: You know what code was deployed and it doesn’t get changed
- Easier Rollback: If one version goes wrong, it is quite easy to switch back to the previous version by just running the previous image
- Faster load time: Since everything we need is present in the image, it loads faster
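For example, if each build is tagged with the release it contains (the `v1` and `v2` tags below are hypothetical), rolling back is just a matter of running the older image:

```
$ docker stop staticbuild && docker rm staticbuild
$ docker run -d --name staticbuild -p 8080:8080 staticbuild:v1
```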
2 - Dynamic Deployments
To support dynamic deployments we need a few things:
- A deployment script: checks out the code from git
- A startup script: starts the node server
- Script ordering: the deployment script must finish before the startup script runs
Since we have simplified our requirements, there are two ways to approach this. The first way is using Bash scripts, and the second way is using Supervisord.
Using Bash Scripts
We create 2 separate scripts in a scripts folder.
./scripts/deploy_app.sh
#!/bin/bash
set -ex
# By default checkout the master branch, if none specified
BRANCH=${BRANCH:-master}
cd /usr/src/app
git clone https://github.com/tarunlalwani/docker-nodejs-sample-app .
git checkout "$BRANCH"
# Install app dependencies
npm install
./scripts/run_app.sh
#!/bin/bash
set -ex
cd /usr/src/app
exec npm start
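One caveat: if the container restarts (or the app folder is mounted from the host), a plain git clone will fail on the second run because the directory is no longer empty. Below is a minimal sketch of a more defensive variant, where `APP_DIR` and `REPO_URL` are illustrative parameters I've added (their defaults match the values used in this article):

```shell
#!/bin/bash
set -ex

# deploy: clone on the first run, update on later runs.
# APP_DIR, REPO_URL and BRANCH are illustrative parameters;
# override them to reuse the script for another project.
deploy() {
    local app_dir=${APP_DIR:-/usr/src/app}
    local repo=${REPO_URL:-https://github.com/tarunlalwani/docker-nodejs-sample-app}
    local branch=${BRANCH:-master}

    mkdir -p "$app_dir"
    cd "$app_dir"
    if [ ! -d .git ]; then
        git clone "$repo" .        # first deployment: full clone
    fi
    git fetch origin               # later deployments: just update
    git checkout "$branch"
    git reset --hard "origin/$branch"
    npm install
}
```

Calling `deploy` at the end of the file would make this a drop-in replacement for deploy_app.sh.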
We update our Dockerfile for the changes
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./scripts /scripts
EXPOSE 8080
CMD [ "bash", "-c" , "/scripts/deploy_app.sh && /scripts/run_app.sh"]
Note: I have used set -ex in the scripts. -e makes sure the script exits on any error, and -x shows which command is being executed.
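The effect of the two flags can be seen in a quick experiment:

```shell
# set -e: the script stops at the first failing command,
# so "after" is never printed.
bash -c 'set -e; echo before; false; echo after' || true

# set -x: each command is echoed (with a leading +) before it runs,
# which is what produces the "+ ..." trace lines in the docker logs.
bash -c 'set -x; echo hello' 2>&1
```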
Now let us run the code
$ docker build -t dynamicbuild .
$ docker run --name dynamicbuild -d -p 8080:8080 dynamicbuild && docker logs -f dynamicbuild
b257d0bfa7b46c6a019f50a01183920ccf18b1a876234174b56bfc0672c6753f
+ BRANCH=master
+ cd /usr/src/app
+ git clone https://github.com/tarunlalwani/docker-nodejs-sample-app .
Cloning into '.'...
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ npm install
npm info it worked if it ends with ok
npm info using npm@3.10.10
npm info using node@v6.10.3
...
...
npm info lifecycle docker_web_app@1.0.0~start: docker_web_app@1.0.0
> docker_web_app@1.0.0 start /usr/src/app
> node server.js
Running on http://localhost:8080
Now to deploy a different branch, we can either pass the BRANCH environment variable using the command line switch -e, or we can use the approach described in this article.
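For example (develop is a hypothetical branch name here):

```
$ docker run --name dynamicbuild-develop -d -p 8080:8080 \
    -e BRANCH=develop dynamicbuild
```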
Using Supervisord
Supervisord is a process manager and can be used to start multiple processes inside a docker container. There are two ways to install supervisor
- Using pip or pip3
- Using package manager for your distro
We won’t use the first option, as it creates a dependency on Python and makes the image fat. Since the node image is Debian based, we can use the apt package manager to install supervisor.
Supervisor uses ini-based config files for specifying the programs to be run. The config files need to be placed in /etc/supervisor/conf.d. So our first task is to create the config file for running deploy_app.sh and run_app.sh.
supervisor-app.conf
[program:deploy_app]
command=/scripts/deploy_app.sh
autostart=true
autorestart=false
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:run_app]
command=/scripts/run_app.sh
autostart=false
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
Note 1: We have used autostart=false for run_app because we don’t want it to run automatically. We want deploy_app to start run_app when deployment is complete. autorestart=true makes sure the server keeps running once it’s started.
Note 2: We have used stdout_logfile=/dev/stdout, as it redirects the output from the program to supervisor log, which in turn shows in the docker log. See this article for more details.
Now we need a small change in our deploy_app.sh to start the run_app supervisor program at the end of the script, so we add the below line of code to our script:
supervisorctl start run_app
So here is our updated Dockerfile
FROM node:boron
RUN apt update && apt install -y supervisor
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY supervisor-app.conf /etc/supervisor/conf.d/
COPY ./scripts /scripts
EXPOSE 8080
CMD [ "supervisord", "-n"]
Note: We have used the -n flag so that supervisord runs in the foreground instead of the background, otherwise our docker container would exit right at the start.
Now we rebuild the image and run it again
$ docker build -t dynamicbuild-supervisor .
$ docker run -p 8080:8080 -d --name dynamicbuild-supervisor dynamicbuild-supervisor && docker logs -f dynamicbuild-supervisor
/usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2017-05-13 06:37:01,984 CRIT Supervisor running as root (no user in config file)
2017-05-13 06:37:01,985 WARN Included extra file "/etc/supervisor/conf.d/supervisor-app.conf" during parsing
2017-05-13 06:37:01,992 INFO RPC interface 'supervisor' initialized
2017-05-13 06:37:01,993 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2017-05-13 06:37:01,993 INFO supervisord started with pid 1
2017-05-13 06:37:02,996 INFO spawned: 'deploy_app' with pid 7
+ BRANCH=master
+ cd /usr/src/app
+ git clone https://github.com/tarunlalwani/docker-nodejs-sample-app .
Cloning into '.'...
2017-05-13 06:37:04,006 INFO success: deploy_app entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
+ git checkout master
Already on 'master'
...
...
npm info lifecycle docker_web_app@1.0.0~start: docker_web_app@1.0.0
> docker_web_app@1.0.0 start /usr/src/app
> node server.js
Running on http://localhost:8080
2017-05-13 06:37:19,146 INFO success: run_app entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
run_app: started
2017-05-13 06:37:19,156 INFO exited: deploy_app (exit status 0; expected)
Note: My personal preference is to map the /etc/supervisor/conf.d/ and /scripts folders from the host into the docker container. This allows more customization of images and fewer rebuilds.
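A sketch of what that mapping looks like, assuming the config file and scripts live in the current directory on the host:

```
$ docker run -d -p 8080:8080 \
    -v "$PWD/scripts":/scripts \
    -v "$PWD/supervisor-app.conf":/etc/supervisor/conf.d/supervisor-app.conf \
    --name dynamicbuild-supervisor dynamicbuild-supervisor
```

With this, editing a script or the config on the host and restarting the container picks up the change without any image rebuild.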
Advantages of Dynamic code
- Better Customization: We can easily customize what gets deployed. We can even parameterize the repo URL to deploy other NodeJS projects
- Generic Implementation: We can add more parameters and generalize the base image to be used for different stacks
- Fewer Docker Image Rebuilds: In this case the Docker image changes far less often, so there is rarely a need to rebuild
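As a sketch of such a generic base image, the Dockerfile could expose the repository and branch as environment variables (REPO_URL is a hypothetical parameter that deploy_app.sh would then have to read):

```dockerfile
FROM node:boron
RUN apt update && apt install -y supervisor
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Hypothetical parameters consumed by the deploy script;
# override them with docker run -e to deploy another project.
ENV REPO_URL=https://github.com/tarunlalwani/docker-nodejs-sample-app \
    BRANCH=master
COPY supervisor-app.conf /etc/supervisor/conf.d/
COPY ./scripts /scripts
EXPOSE 8080
CMD [ "supervisord", "-n" ]
```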