Deploy Phoenix to Ubuntu VPS - Finishing Deployment

This is the third part of a series on deploying a Phoenix Application to your own VPS.

In the last section we configured our app and production environment, and used Edeliver with Distillery to build and deploy the sample application.

At this stage, our app should be running in production but we’re not able to access it as we haven’t configured Nginx to route requests to it.


Deployment Goals

Let’s re-cap what we set out to do with this series:

  • Build and Deploy on a Single Server
  • Local Postgres Database
  • Nginx as a reverse proxy
  • SSL Certificate with live domain
  • Systemd for auto-restarts
  • Managing production deployment:
  • Deploy / Rollback / Restart application releases
  • Run production database migrations
  • Check production application logs

So far we've checked off only a few of these, so let's polish the deployment.

Nginx Configuration

If you've been following along, you'll know the goal of this series is to keep things as simple as possible to get a feel for running a Phoenix application in production. With a decent understanding of how the pieces fit together, we can improve the process as needed.

Nginx is currently installed on our system (provided you completed Part 2) so let’s add a configuration for the domain we pointed to our server.

Log in to your DO server as deploy.

Then remove the default configuration, as it can override your site's config when default_server is set.

$ sudo rm /etc/nginx/sites-enabled/default
$ sudo rm /etc/nginx/sites-available/default

Create a new file:

$ sudo vim /etc/nginx/sites-available/deploy-example.phxroad.com

NOTE: It’s important to name the config file the same as your domain so Certbot knows where to install the SSL configs.

Add the following content to deploy-example.phxroad.com

upstream deploy-example {
  server 127.0.0.1:4005;
}

server {
  listen 80 default_server;
  listen [::]:80 default_server;
  if ($host = deploy-example.phxroad.com) {
    return 301 https://$host$request_uri;
  }
  server_name deploy-example.phxroad.com;
}

server {
  listen [::]:443 ssl http2 default_server;
  listen 443 ssl http2 default_server;
  server_name deploy-example.phxroad.com;

  location / {
    allow all;
    proxy_pass http://deploy-example;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Cluster-Client-Ip $remote_addr;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
  }
}

If we skim through this configuration, we'll notice a few things:

The first server block is listening on port 80 (non-ssl) and redirecting all traffic to port 443.

At the top, we're using nginx's upstream module to proxy requests from the server blocks to port 4005, which is the port our application is configured to listen on. default_server tells nginx to use this server block when a request doesn't match any other server_name.

The proxy settings are numerous but you can do some more reading about them if you choose.

The last two settings:

proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';

are important for allowing WebSockets to work properly.
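Once everything is live, we can sanity-check that nginx actually forwards the upgrade headers by asking for a WebSocket handshake with curl. This is a rough sketch: the domain and socket path below are the example values from this series (a real Phoenix socket usually also expects a vsn query parameter), so substitute your own.

```shell
# Ask nginx for a WebSocket upgrade and print the response status line.
ws_handshake_status() {
  local url="$1"
  curl -s -i -N --http1.1 --max-time 5 \
    -H "Connection: Upgrade" \
    -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
    "$url" | head -n 1
}

# A correctly proxied socket answers with a 101 Switching Protocols status.
ws_handshake_status "https://deploy-example.phxroad.com/socket/websocket" || true
```

If you get a 400/403 instead of a 101, the headers are reaching the app but the socket path or params are off; if you get nothing at all, the proxy itself isn't forwarding the request.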

Symlink and Verify Nginx Config

Symlink the config into the sites-enabled directory.

$ sudo ln -s /etc/nginx/sites-available/deploy-example.phxroad.com /etc/nginx/sites-enabled/

Now we can verify that the configuration is valid:

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

NOTE: If nginx -t complains that no ssl_certificate is defined, that's because the 443 block enables SSL before Certbot has installed a certificate; the next step takes care of that. Any other issues will need to be resolved before you can move on.

Install Certbot

With the Nginx config mostly in place, let's use Certbot to generate an SSL certificate and automatically insert it into our Nginx config:

$ sudo apt -y install software-properties-common
$ sudo add-apt-repository universe
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt update
$ sudo apt -y install certbot python-certbot-nginx

Use Certbot to generate a certificate and install it into the Nginx config. (Your domain must be pointing to the DO server at this point.)

$ sudo certbot --nginx -d deploy-example.phxroad.com

Fill out the email if you like…

Then select option 2 to redirect all traffic to SSL.

Certbot will have added the certificate lines to the Nginx config.

Now we can restart nginx and test that our application is loading:

$ sudo service nginx restart

Load the URL in Browser

If we visit the URL we configured in the browser, we should see our Phoenix app running in production behind SSL.

Debugging Issues

If your app doesn’t load for some reason there are a few things we can verify.

1. Check App Config

First, double-check config/prod.exs to ensure the values line up with your app:

import Config

config :deploy_example, DeployExample.Repo,
  load_from_system_env: true,
  pool_size: 10

config :deploy_example, DeployExampleWeb.Endpoint,
  load_from_system_env: true,
  cache_static_manifest: "priv/static/cache_manifest.json",
  server: true,
  code_reloader: false

config :logger, level: :info

deploy_example is the name of the app, DeployExample is the lib module, and DeployExampleWeb is the main web module.

2. Check Environment Variables

Double-check that the environment variables on the server are set to what you expect for the domain, database URL, and PORT.

3. Check Application Logs

There may be something useful in the log files under:

/home/deploy/deploy_example/deploy_example/var/log/erlang.log.xxx

4. Verify App is Running on Port 4005

You can also run this command to verify that your app is running on the correct port:

$ sudo lsof -i -P -n | grep LISTEN
beam.smp  17242  deploy  25u  IPv6 257337  0t0  TCP *:4005 (LISTEN)
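If lsof isn't available, bash itself can check whether anything is accepting connections on the app port via its /dev/tcp pseudo-device. A small sketch (4005 matches the port we configured earlier):

```shell
# Returns success if something accepts TCP connections on the given local port.
port_listening() {
  # /dev/tcp/HOST/PORT is a bash builtin pseudo-device; the subshell
  # keeps file descriptor 3 from leaking into the calling shell.
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_listening 4005; then
  echo "app is listening on 4005"
else
  echo "nothing on 4005 - check that the release actually started"
fi
```

Note this only works in bash (not plain sh), since /dev/tcp is a bash feature.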

Hopefully you're able to debug any issues with these strategies. If not, feel free to hit me up on Twitter or on ElixirForum.com.

Set up Systemd

systemd is a suite of basic building blocks for a Linux system. It provides a system and service manager that runs as PID 1 and starts the rest of the system. Using systemd will enable our application to start automatically on reboot. Edeliver has an example outlined in its docs, so let's start with that.

We’re going to use a strategy described in the Distillery docs.

Specifically we’re going to “Run app in foreground using a simple systemd configuration”.

Properties of this approach:

  • Your app will be automatically restarted if it crashes
  • Logging is handled by systemd, which makes for better integration with log aggregation tools
  • It is a less cumbersome setup (does not require any pidfile and associated detection)

Let’s create the service file:

$ sudo vim /lib/systemd/system/deploy-example.service

And add the config. We’re re-setting the environment variables here for simplicity, but feel free to customize the configuration.

[Unit]
Description=Phoenix Deploy Example
After=network.target

[Service]
Type=simple
User=deploy
Group=deploy
ExecStart=/home/deploy/deploy_example/deploy_example/bin/deploy_example foreground
Restart=on-failure
RestartSec=5
Environment=PORT=4005
Environment=APP_HOST=deploy-example.phxroad.com
Environment=DATABASE_URL=ecto://<user>:<pass>@localhost:5432/myapp_prod
Environment=SECRET_KEY_BASE=<secret-key-base>
Environment=LANG=en_US.UTF-8
SyslogIdentifier=deploy-example
RemainAfterExit=no

[Install]
WantedBy=multi-user.target
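Note that the unit above inlines the database URL and secret key directly. If you'd rather keep secrets out of the unit file, systemd's EnvironmentFile= directive can load them from a separate file instead (the path below is illustrative):

```ini
# /etc/deploy-example.env (hypothetical path), one KEY=value per line
PORT=4005
APP_HOST=deploy-example.phxroad.com
DATABASE_URL=ecto://<user>:<pass>@localhost:5432/myapp_prod
SECRET_KEY_BASE=<secret-key-base>
LANG=en_US.UTF-8
```

Then replace the Environment= lines in the [Service] section with EnvironmentFile=/etc/deploy-example.env, and chmod 600 the file so only its owner can read it.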

Reload systemd so it picks up the new unit file, then enable the service:

$ sudo systemctl daemon-reload
$ sudo systemctl enable deploy-example.service

View the logs

$ journalctl -u deploy-example.service --since "4 minutes ago"

Reboot the Server

We can test the service out by rebooting the server:

$ sudo reboot

Within a few minutes your application should be live again.
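Rather than refreshing the browser, a small script can poll until the site responds. A sketch, using the example domain from this series:

```shell
# Poll a URL until it answers, or give up after a number of tries.
wait_for_http() {
  local url="$1" tries="${2:-30}"
  local i
  for i in $(seq "$tries"); do
    if curl -s -o /dev/null --max-time 5 "$url"; then
      echo "up after $i attempt(s)"
      return 0
    fi
    sleep 5
  done
  echo "still down after $tries attempts"
  return 1
}

wait_for_http "https://deploy-example.phxroad.com" 2 || true
```

If it stays down well past a few minutes, check systemctl status deploy-example.service and the journal for startup errors.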

Quick Note about Migrations

Edeliver has built-in commands to manually migrate your DB after a release:

$ mix edeliver migrate production

But you can also run migrations as part of the deploy process, or in any number of other ways. I'll leave that decision up to you.

You can also learn a lot about the edeliver commands by using help:

$ mix edeliver -h

Which contains all the detail you’ll need to manage your deployment.

Wrapping Up

If this is your first exposure to deploying Elixir apps, you’re probably feeling a little overwhelmed.

As you get more and more familiar with how these tools work together and start fixing your own issues in production, it’ll start to become second nature.

As a reminder, this series was designed to demonstrate a simple Phoenix deployment scenario.

This has only scratched the surface of a complex subject, but it can serve as a foundation for customizing your own deployment strategy.

Goals Re-cap

Here’s what we started out attempting to accomplish:

  • Build and Deploy on a Single Server
  • Local Postgres Database
  • Nginx as a reverse proxy
  • SSL Certificate with live domain
  • Systemd for auto-restarts
  • Managing production deployment:
    • Deploy / Rollback / Restart application releases
    • Run production database migrations
    • Check production application logs

And it seems like we can check all these boxes.

Thanks a lot for following along and I hope you found this series useful!


Troy Martin
Senior Engineer working with Elixir and JS.