Deploying multiple dockerized apps to a single DigitalOcean droplet using docker-compose contexts

Posted in DevOps

Introduction

In this post we'll go over setting up a DigitalOcean Droplet to run multiple dockerized apps which we'll deploy to production using docker-compose and docker contexts. We'll also set up NGINX reverse proxies for the apps we want to expose externally.

Using docker-compose with contexts lets us run builds and deployments on remote servers from our local dev machine. This feature is available in docker-compose starting with release 1.26.0-rc2.

Note: this feature is relatively easy to use, especially if you've used docker contexts before; however, it's better suited to deploying small/hobby apps to a single server. There will be some downtime while you're releasing new builds (generally a few seconds), so if you require zero-downtime deployments, rolling updates and multi-server orchestration you should look at Docker's swarm mode or Kubernetes.

Create a DigitalOcean Droplet

First sign up at DigitalOcean and create a new project. You can name the project anything you want; it's just an identifier. Click on 'Get Started with a Droplet' and go through the following steps:

Note: the link above is an affiliate link and will give you $100 free credit for the first 2 months.

  1. Select Ubuntu 18.04.x (LTS) x64.
  2. Leave the Droplet on Standard.
  3. Select the $5 per month option, you can always upgrade if required.
  4. Skip adding block storage.
  5. Select a data center.
  6. Select additional options, I selected monitoring.
  7. Leave authentication on SSH and add your key, see steps below if you don't have a key yet.
  8. Leave on 1 Droplet and rename it to something more memorable (optional).
  9. I enabled backups for my Droplet.
  10. Click on Create Droplet.

You should now see your droplet in the dashboard with its IP address; this is the address we'll use to SSH into the server.

Generate SSH keys

If you're running Ubuntu on your local machine first check if you have an existing SSH key:

ls -l ~/.ssh/id_*.pub

If you don't have any keys the output will say something along the lines of no matches found, otherwise you'll see the file path printed to the console.

To create a new key, type the following and follow the interactive instructions. Adding a password is optional.

ssh-keygen -t rsa -C "your_email@example.com"

You can now print the public key to your console with:

cat ~/.ssh/id_rsa.pub

# output:
ssh-rsa SOME_LONG_RANDOM_STRING_WITH_YOUR_USER_AT_THE_END

Add this key to your Droplet in the steps above so that you can SSH in as root.
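If you're scripting your setup, ssh-keygen can also run non-interactively. Here's a sketch — the output path and comment are placeholders, adjust them to taste:

```shell
# Generate a 4096-bit RSA key pair without prompts
# (placeholder path and comment; -N "" sets an empty passphrase)
ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f /tmp/demo_key -N "" -q

# Print the fingerprint of the new public key
ssh-keygen -l -f /tmp/demo_key.pub
```

If your droplet already exists, ssh-copy-id root@droplet_ip_address is a convenient way to install a public key on the server instead of pasting it by hand.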

Add a non-root 'superuser'

Next we'll create a non-root user that we'll use to log into the server in future. First SSH into the server as root using the Droplet IP address (which you can view in your DigitalOcean dashboard):

ssh root@droplet_ip_address

Add a new user. You can name the user whatever you like; in this case I'll name the user deployer:

adduser deployer

Grant administrative privileges to the new user by adding them to the sudo group:

usermod -aG sudo deployer

Allow the new user to SSH into the server by copying the .ssh directory into the new user's home directory. We'll use rsync to preserve the file permissions (don't use cp):

rsync --archive --chown=deployer:deployer ~/.ssh /home/deployer

Configure the firewall

Now we'll set up a firewall using ufw and enable SSH connections. If you run ufw app list you should see OpenSSH listed in the output which is what we'll enable with:

ufw allow OpenSSH

Next enable the firewall:

ufw enable

Check that OpenSSH is enabled:

ufw status

# output:
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere

Now that SSH connections are allowed we can safely log out as root and log in as our new user (who we've set up with SSH keys in the previous step):

exit

# output:
logout
Connection to droplet_ip_address closed.

ssh deployer@droplet_ip_address

Note: once you have your domain's DNS records set up to point at DigitalOcean's name servers you'll be able to SSH in with ssh deployer@yourdomain.com
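To save some typing, you can also add a host alias to ~/.ssh/config on your local machine. A sketch, assuming the deployer user from above (the alias name and key path are placeholders):

```
Host droplet
    HostName droplet_ip_address
    User deployer
    IdentityFile ~/.ssh/id_rsa
```

After that, ssh droplet is all you need.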

Install & configure NGINX

NGINX will allow us to route web browser requests to our dockerized apps. To install it run:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install nginx

Note: we're no longer logged in as root so we'll need to prefix most of our commands with sudo.

Now NGINX will show up in our ufw app list and we can enable http, https or both (full). You generally only want to enable the minimum you require; however, we'll be enabling https on our apps shortly, so let's enable Nginx Full:

sudo ufw app list
sudo ufw allow 'Nginx Full'
sudo ufw status

# output:
Status: active

To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx Full ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx Full (v6) ALLOW Anywhere (v6)

Tip: you can use sudo ufw status verbose to view ports and additional firewall rules.

Now let's check that NGINX is running correctly:

systemctl status nginx

# output:
...
Active: active (running) since...

And visiting http://droplet_ip_address should display the default NGINX "Welcome to nginx!" page.

Next edit the nginx.conf file with sudo vim /etc/nginx/nginx.conf (replace vim with your preferred command-line editor) and uncomment server_names_hash_bucket_size 64;:

http {
    ...
    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    ...
}

Save the file and check that there are no syntax errors:

sudo nginx -t

# output:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

And then restart NGINX:

sudo systemctl restart nginx

Install Certbot (SSL certificate provisioning)

Certbot is a free, open-source tool that automatically provisions and renews SSL certificates for us. It uses Let's Encrypt certificates to enable HTTPS. In later steps we'll configure NGINX to route all HTTP traffic to HTTPS.

To install Certbot:

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot python-certbot-nginx

Install Docker

Install docker on the server; you can follow the official guides here, or the steps below.

Note: you'll need to install docker on your local dev machine following the same steps (assuming you're running Ubuntu).

sudo apt-get update

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

Make sure docker is installed:

sudo docker -v

# output:
Docker version 19.03.8, build afacb8b7f0

And run the hello-world docker image as another test:

sudo docker run hello-world

# output:
Hello from Docker!
This message shows that your installation appears to be working correctly.
...

Run Docker without sudo (optional)

If you'd like to run docker without sudo then follow these steps. First create the docker group (you may get a message it exists already):

sudo groupadd docker

Then add your user to the group:

sudo usermod -aG docker $USER

Activate changes to the group with (or log out and back in to the session):

newgrp docker

Check that you can run docker without sudo:

docker run hello-world

Warnings and errors

If you get a loading config file warning:

WARNING: Error loading config file: /home/user/.docker/config.json -
stat /home/user/.docker/config.json: permission denied

Change the ownership and permissions of the ~/.docker directory:

sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
sudo chmod g+rwx "$HOME/.docker" -R

If you get a permission denied error:

docker: Got permission denied while trying to connect to the Docker
daemon socket at unix:///var/run/docker.sock...

Then restart docker:

sudo systemctl restart docker

If the problem persists, change the permissions of the socket (note that this makes the Docker socket accessible to all users, so treat it as a last resort):

sudo chmod 666 /var/run/docker.sock

Install docker-compose

To use docker-compose to deploy to remote servers with the --context argument we need to install release 1.26.0-rc2 or later. If the latest stable version here is below 1.26.0 then follow the instructions below; otherwise you can substitute the release number in the URL with the latest stable version. Official installation docs can be found here.

Note: you'll need to install docker-compose on your local dev machine following the same steps (assuming you're running Ubuntu).

sudo curl -L "https://github.com/docker/compose/releases/download/1.26.0-rc2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Then check docker-compose is installed correctly:

docker-compose -v

# output:
docker-compose version 1.26.0-rc2, build 07cab513

Enable CLI autocomplete (optional)

First make sure you have bash-completion installed by typing complete -p. If a long list of commands gets printed to your console then it's installed; if not, you can install bash-completion with:

sudo apt install bash-completion

You may need to exit the server (type exit) and SSH back in for the changes to take effect. Check again with complete -p.

Next install autocomplete (substitute version in URL as required):

sudo curl -L https://raw.githubusercontent.com/docker/compose/1.26.0-rc2/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
source ~/.bashrc

Check that autocomplete is working by typing docker (with a trailing space) and then pressing tab twice; you should see a list of docker commands if it's installed correctly. You can also type the beginning of a docker command and press tab to autocomplete, e.g. docker ver -> tab -> docker version.

Configure your db and app docker images

Now our server is set up and we can start deploying our dockerized apps!

I'm going to deploy four different types of apps to my server. If you're using a different framework the steps will be similar, with the main change being the Dockerfile used to build your images plus any framework configuration required for production.

Check out the following configurations for some context:

  1. Dockerizing a Phoenix app with a PostgreSQL database
  2. Dockerizing a Ruby on Rails app with a PostgreSQL database
  3. Dockerizing a WordPress site with a MySQL database
  4. Dockerizing a Matomo web analytics app
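Whatever the framework, the compose file follows the same basic shape. Here's a minimal sketch (the service names, images and port are placeholders — match the host port to whatever you'll point NGINX at later):

```yaml
version: "3.7"

services:
  db:
    image: postgres:11            # placeholder database image
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: example  # use a real secret in production

  web:
    build: .                      # built from your app's Dockerfile
    ports:
      - "8080:8080"               # host port NGINX will proxy_pass to
    depends_on:
      - db

volumes:
  db_data:
```

The important detail for this setup is the ports mapping: each app on the server needs its own unique host port.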

Deploy your dockerized apps to the server

First make sure docker context is available by running docker context; this should list the available context commands. To view existing contexts run docker context ls; at this point only the default context will be available:

docker context ls

# output:
default * .... unix:///var/run/docker.sock .... swarm

Add a new context (the remote server we set up) and then check again that it was added correctly:

docker context create remote --docker "host=ssh://deployer@yourdomain.com"
docker context ls

# output:
default * .... unix:///var/run/docker.sock .... swarm
remote .... ssh://deployer@yourdomain.com

Deploy your containers to the remote server. Make sure you're in your project's root directory (where the docker-compose.yml file is saved) when running the following command:

docker-compose --context remote up -d

# output:
...
Creating your_app_db_1 ... done
Creating your_app_web_1 ... done

And then list the processes running on the remote server with:

docker --context remote ps

# output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c92c1d124djd your_app_web "/entrypoint.sh" 1 minute ago Up 1 minute 0.0.0.0:8080->8080/tcp your_app_web_1
eb13sd32df26 db_image_name "docker-entrypoint.s…" 1 minute ago Up 1 minute 5432/tcp your_app_db_1

If you run into errors, remove the remote context with docker context rm remote and add it again, but this time set its default orchestrator to swarm:

docker context create remote --docker "host=ssh://deployer@yourdomain.com" --default-stack-orchestrator swarm

You can also set the remote server as the default context with:

docker context use remote

(Switch back to your local daemon later with docker context use default.)

Note: at this point the app shouldn't be accessible, as we've only allowed ports 22 (SSH), 80 (HTTP) and 443 (HTTPS) with ufw. However, due to an issue with UFW, Docker and Ubuntu the port is exposed, and visiting http://droplet_ip_address:port will let you access your app. This is an issue we fix at the end of this blog post.

Add your custom domain to DigitalOcean

Go to your DigitalOcean dashboard, click on the three-dot menu on your droplet and select Add a domain.

Fill in your domain name without the www, e.g. yourdomain.com, and click on Add Domain. Four DNS records will be created for you automatically: one A record and three NS (name server) records.

To add a www version so people can reach your site through www.yourdomain.com we'll add a CNAME record. Click on CNAME and fill in Hostname with www. (including the dot), and for the Alias next to it yourdomain.com. (again with a dot) then click on Create Record.

Configure your DNS records

Now that our domain name is set up with DigitalOcean we need to configure the DNS records at our domain registrar. For this example I'm using Namecheap, but the steps will be similar for other registrars.

In your domain list click on Manage and scroll to the Nameservers section. Click on Add Nameserver three times and fill in the following then save the changes with the green tick icon:

ns1.digitalocean.com
ns2.digitalocean.com
ns3.digitalocean.com

Configure NGINX

Next we need to tell NGINX to listen to port 80 (http requests) for our custom domain name and map it back to the port we configured in our docker-compose.yml file and our app's production config.

SSH into your server and create a new file in /etc/nginx/sites-available/ (you can name it yourdomain.com) with the following contents:

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://localhost:8080;
    }
}

Next we need to create a symlink to the file in the /sites-enabled directory:

cd /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/yourdomain.com ./

Check that the symlink was created successfully:

ls -l

# output:
... yourdomain.com -> /etc/nginx/sites-available/yourdomain.com

Next check that the NGINX syntax is OK:

sudo nginx -t

# output:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Finally restart the service:

sudo systemctl restart nginx

If you visit http://yourdomain.com you should now see your app's welcome page.

Provision SSL certificates

Since we haven't provisioned SSL certificates yet visiting https://yourdomain.com will fail. To fix that we're going to generate Let's Encrypt certificates using Certbot:

sudo certbot --nginx

Enter your email address if it's your first time using Certbot on this server, then select both domains and option 2 (redirect http to https).

Certbot will then modify the file in /etc/nginx/sites-available/yourdomain.com as follows:

server {
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://localhost:8080;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.yourdomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = yourdomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name yourdomain.com www.yourdomain.com;
    return 404; # managed by Certbot
}

Your site will now be accessible through https and http will redirect to it. The default configuration doesn't redirect https://www.yourdomain.com to https://yourdomain.com and we'll fix that in the next step.

Modify NGINX config file to redirect www to non-www

Modify /etc/nginx/sites-available/yourdomain.com as follows:

upstream your_app_name {
    server 127.0.0.1:8080;
}

server {
    server_name yourdomain.com www.yourdomain.com;
    listen 80;
    return 301 https://yourdomain.com$request_uri;
}

server {
    server_name www.yourdomain.com;

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    return 301 https://yourdomain.com$request_uri;
}

server {
    server_name yourdomain.com;

    location / {
        proxy_pass http://your_app_name;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

We've split the file into three server blocks as follows:

  1. When someone arrives at the domain via http (either www or non-www) NGINX will redirect them to https://yourdomain.com.
  2. When someone visits the https www version of the domain NGINX will redirect them as above.
  3. When arriving at https://yourdomain.com NGINX will serve the app via a proxy_pass.

Note that the upstream block at the top isn't required in this case; however, I like the syntax as it lets me display the host and port at the top of the file and alias them (to your_app_name). You can delete this block if you change the proxy_pass line to:

location / {
    proxy_pass http://127.0.0.1:8080;
    ...
}

Fix exposed docker port on Ubuntu and UFW

There's an issue with docker bypassing ufw rules, as described here. To fix this issue add the following to the bottom of /etc/ufw/after.rules:

# BEGIN UFW AND DOCKER
*filter
:ufw-user-forward - [0:0]
:DOCKER-USER - [0:0]
-A DOCKER-USER -j RETURN -s 10.0.0.0/8
-A DOCKER-USER -j RETURN -s 172.16.0.0/12
-A DOCKER-USER -j RETURN -s 192.168.0.0/16

-A DOCKER-USER -p udp -m udp --sport 53 --dport 1024:65535 -j RETURN

-A DOCKER-USER -j ufw-user-forward

-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 192.168.0.0/16
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 10.0.0.0/8
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.16.0.0/12
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 192.168.0.0/16
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 10.0.0.0/8
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 172.16.0.0/12

-A DOCKER-USER -j RETURN
COMMIT
# END UFW AND DOCKER

Then restart the service:

sudo systemctl restart ufw

In some cases you may need to restart the server as well with sudo reboot.

Deploying updates

Now that everything is set up, deploying updates is a one-liner. From your project's root directory run the following command:

docker-compose --context remote up -d --build

# output:
...
your_app_db_1 is up-to-date
Recreating your_app_web_1 ... done

The --build flag rebuilds your images, which makes sure new changes are deployed.
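If you deploy often, it can be handy to wrap these commands in a small helper — for example a Makefile in your project root (the targets are hypothetical; this assumes the remote context created earlier):

```make
# Hypothetical helper targets; assumes the 'remote' docker context exists

deploy:
	docker-compose --context remote up -d --build

status:
	docker --context remote ps
```

Then make deploy rebuilds and releases, and make status shows what's running on the server.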

Conclusion

Now you have a server configured and can easily push up any number of dockerized (or non-dockerized) apps you need. Just remember to assign a different port to each app and configure it in NGINX.
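For example, a second app published on host port 8081 (the port and subdomain here are hypothetical) would get its own file in /etc/nginx/sites-available/ along the same lines as before:

```nginx
server {
    listen 80;
    server_name otherapp.yourdomain.com;

    location / {
        proxy_pass http://localhost:8081;
    }
}
```

Symlink it into /etc/nginx/sites-enabled/, check the syntax with sudo nginx -t, reload NGINX, and run sudo certbot --nginx again to provision a certificate for the new name.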

Good luck with your deployments!

 
