Optimizing for Free Hosting — Elixir Deployments

Kubernetes (kidding) — Photo credit Jp Valery on Unsplash

Another way to deploy Elixir apps

UPDATE: This article now uses an all-Elixir deployment strategy, instead of the Nginx reverse proxy strategy it had originally.

I’m going to walk through a deployment strategy that can take you start-to-finish with a full-featured Phoenix web app running on a free-forever Google Cloud Platform Compute Engine instance.

This setup is completely free, and when you have more traffic the cost of scaling on raw GCP instances is a lot cheaper than on platform-as-a-service offerings (Heroku, Gigalixir, Render, Fly, etc). And, you’re in full control of the system so you don’t have to work around any fixed limitations. Scaling can be as simple as upgrading the specs of the instance when it can’t keep up anymore.

What we’re optimizing for

  • Absolutely 100% free to start, cheap to scale
  • No Docker, Ansible, Distillery, or other build/deploy tools
  • Pure Elixir — no Nginx or external dependencies
  • Single server setup
  • Fault-tolerant at server level, e.g. if the server crashes or restarts the app will come back up automatically
  • Secure (enough, this can be an infinite rabbit hole)
  • Fast no-downtime deployments and rollbacks, without hot code reloading
  • Full control and visibility, leveraging raw Linux built-in tools

Free without the compromises (except your time of course)

I really like the level of control and flexibility that this approach provides. You’ll end up feeling a lot more comfortable with the raw basics of how your server runs and how to hook in and introspect when there are problems. It takes some time to set up, and there might be some new Linux skills that you’ll need to pick up along the way. This is a long guide, but stick with me and I think you’ll be really happy with the result! (or bookmark it and come back the next time you need to spin up a side project)

Overview

  1. Log into Google Cloud Platform and provision an F1-Micro Compute Engine instance
  2. Make the IP address of the instance static
  3. Add your SSH key to metadata in Compute Engine settings so you can connect to the instance
  4. Set up an ssh alias on your dev machine so it’s easy to access the server
  5. Enable swap memory on the instance
  6. Install Erlang, Elixir, Node, and Postgres
  7. Set a secure password for Postgres
  8. Configure Postgres to allow remote connections
  9. Connect to your remote Git repo where your project lives
  10. Get your app secrets onto the server
  11. Configure SSL with SiteEncrypt
  12. Forward ports 80 and 443 to the ones the app will listen on
  13. Build a deployment shell script that runs some basic CI checks and supports incremental release versions and no-downtime deployments
  14. Create systemd services to ensure app starts up if server crashes or resets
  15. Commit and push a change. Then deploy using a single command on your dev machine!
  16. Attach to journald to watch logs
  17. Bonus 1: Secure your data against the possibility of eventual hardware failure by creating another Google account with a free instance and setting up automated backups using cron
  18. Bonus 2: Rollback script

Update June 2, 2021

Ruslan Kenzhebekov emailed me and pointed me to a comment thread he’d had with Sasa Juric (the author of the Site Encrypt library) where Sasa mentioned that he used iptables switching to achieve zero-downtime deployments without nginx or a reverse proxy. That prompted me to learn (a lot!) and update this article as a result. You can still find the old version here if you prefer it.

The Details!

1 — Get a domain

If you want this whole thing to be super free, you can get a domain from https://freenom.com and get a full-fledged domain with DNS and the works for free. The downside is that you’re limited to one of five super random top level domains (.ml, .gq etc), but it’s a great way to get the name you want before committing the cash to buy the .com version. Once you have a domain, go to the DNS settings page — we’ll be adding some records later on.

2 — Provision an F1-Micro Compute Engine instance

You can access the Google Cloud Platform console using any Google account you own (typically a Gmail address). For the sake of separation (and security), I like to create a new gmail address with the name of my project in it so that the account is fully isolated (e.g. damon.my.app@gmail.com).

Name: whatever you want
Region: double check https://cloud.google.com/free to make sure the region you want is listed in the Compute Engine section
Series: N1
Machine Type: f1-micro
Boot Disk:
  Operating System: Ubuntu
  Version: Ubuntu 20.04 LTS
  Boot Disk Type: Standard persistent disk, 30GB
Firewall: allow both HTTP and HTTPS traffic

3 — Make the IP address of the instance static

Click the triple lines icon in the top left to bring up the menu, then go to VPC Network -> External IP addresses. There should be a single row, belonging to the instance you just created. Change the Type from “ephemeral” to “static” and give it a name of your choosing. Copy the IP address (should look something like 34.69.61.31), since you’ll need it in the next steps.

4 — Add your SSH key

Click the triple lines icon again in the top left and go to Compute Engine -> Metadata (under the Settings header). Click on the tab SSH Keys. Then click the “Add SSH Keys” button.
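If you don't already have a key pair on your dev machine, you can generate one and print the public half to paste into that form. The file name below matches the IdentityFile we'll use in the next step; GCP reads the trailing comment on the public key as the login username.

```shell
# Create ~/.ssh if needed, then generate a 4096-bit RSA key pair
# (skipped if one already exists)
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
# Print the public key; paste the whole line into the "SSH Keys" form
cat ~/.ssh/id_rsa.pub
```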

5 — Set up SSH alias

To connect to the server, we could run ssh <IP Address from step 3>, but it’s going to get tiresome to type in the IP all the time (especially if you have lots of these servers). So instead, we’ll set up an alias so that we can type in ssh my_app and have it connect automatically.

Add the following to ~/.ssh/config on your dev machine (create the file if it doesn't exist). Note that UseKeychain is macOS-only; drop that line on Linux.

Host my_app
  Hostname <IP address from step 3>
  User <your local user name>

Host *
  AddKeysToAgent yes
  UseKeychain yes
  IdentityFile ~/.ssh/id_rsa

6 — Enable swap memory on the instance

This server is TINY — it only has 0.6GB of RAM. When you compile a Phoenix application on it for the first time, there's a good chance it will freeze up on you by running out of resources. But since we get a 30GB spinning hard drive (HDD) for free along with it, we can safely enable swap memory, which allows the server to push past the limit (albeit with slower memory) once it exceeds available RAM. To create 1GB of swap space, run the following on the server:

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
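Once those commands have run, you can confirm the swap is active — both of these read from /proc, so they don't need root:

```shell
# Show active swap devices -- /swapfile should be listed at 1G
swapon --show
# Overview of current memory and swap usage
free -h
```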

7 — Install Erlang, Elixir, Node, and Postgres

In case these instructions get stale, follow the official Elixir install guide for Ubuntu here. Install Erlang and Elixir using the following:

wget https://packages.erlang-solutions.com/erlang-solutions_2.0_all.deb
sudo dpkg -i erlang-solutions_2.0_all.deb
sudo apt update
sudo apt install esl-erlang
sudo apt install elixir
sudo apt install nodejs npm
sudo apt install postgresql postgresql-contrib

8 — Set a secure password for Postgres

Now that Postgres is installed, we’ll want to change the password to something secure. Later, we’ll be opening the database port on the server to the web (alert: security tradeoff!), so the password has to be rock solid. Generate a random password with a tool like https://passwordsgenerator.net and paste it into a safe place. Log into Postgres with the default postgres user:

sudo -u postgres psql

Then at the psql prompt:

\password postgres
(paste the password you generated, press enter, then confirm it)
\q

9 — Configure Postgres to allow remote connections

It’s often helpful to be able to access your database with a GUI on your development machine. I use Postico, but TablePlus and pgAdmin work great too. But before you can use one of these to connect to your database, you’ll need to adjust the firewall in GCP and alter some Postgres settings to open the database port (5432) to the outside world.

Name: whatever you want, something like "database"
Targets:
All instances in the network
Source IP Ranges: 0.0.0.0/0
tcp: Check box, and enter 5432 in field
# Find postgresql.conf and set listen_addresses = '*'
sudo find / -name "postgresql.conf"
sudo vim <the file path you found>
# Find pg_hba.conf and add the two lines below at the end
sudo find / -name "pg_hba.conf"
sudo vim <the file path you found>
host    all             all             0.0.0.0/0               md5
host    all             all             ::/0                    md5

10 — Connect to your remote Git repo where your project lives

Finally, we’re ready to put some code on this server! I’ve created an empty Phoenix repository on GitHub that you’re welcome to fork and follow along. All of the examples from here on out will use the project name my_app, so make sure to replace it anywhere it shows up with your own project name if you’re not forking and following along. To see the finished version of the code from this article (set up with a website called optimized.ml) you can look at this small pull request to the empty app.

git clone https://github.com/<YOUR_GITHUB_USERNAME>/my_app

Then add these lines to the project's .gitignore:

# Ignore secrets files
/config/*.secret.exs
# Ignore local cert files
/tmp/

11 — Get your prod secrets onto the server

The next step is to move the prod.secret.exs file that you just created up to the server, using scp (which stands for “secure copy”). It works just like the ssh my_app command that we use to connect to the server, but you specify a local path and a remote path and it will transfer files up to the server (or down from the server). Close the SSH session to the server by typing exit, and enter the following in your local shell:

scp config/prod.secret.exs my_app:~/my_app/config/

Then SSH back into the server and, from the my_app directory, run:

MIX_ENV=prod mix deps.get
MIX_ENV=prod mix ecto.create

12 — Configure SSL with SiteEncrypt

You absolutely need to secure your website, using SSL. And, the Elixir SiteEncrypt library which uses the Let’s Encrypt project makes it free and straightforward to set up. We’re going to get a Let’s Encrypt cert using only Elixir.

{:site_encrypt, "~> 0.4"}
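With the dep in place, the endpoint needs a certification config. The article's exact setup isn't reproduced here; a sketch modeled on the SiteEncrypt README might look like the following — the domain, email, and folder values are placeholders you'd swap for your own, and you should check the library docs for the options in your installed version:

```elixir
defmodule MyAppWeb.Endpoint do
  use Phoenix.Endpoint, otp_app: :my_app
  # Must come after `use Phoenix.Endpoint`
  use SiteEncrypt.Phoenix

  @impl SiteEncrypt
  def certification do
    SiteEncrypt.configure(
      # :native uses the pure-Elixir ACME client -- no certbot needed
      client: :native,
      domains: ["optimized.ml"],
      emails: ["you@example.com"],
      # Where certs are cached between restarts
      db_folder: "tmp/site_encrypt_db",
      directory_url: "https://acme-v02.api.letsencrypt.org/directory"
    )
  end
end
```

During local development SiteEncrypt can also run a throwaway local ACME server so you never hit Let's Encrypt rate limits while testing.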

13 — Forward ports

Before we can test that these changes worked, we need to make sure that our app is accessible to the internet. We’ll be running the app on port 4000, but since regular web requests come in on port 80 we’ll need to forward them over to 4000.

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4000
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4040
sudo apt install iptables-persistent # answer "Yes" when prompted to save the current rules
# get the new deps
MIX_ENV=prod mix deps.get
# get the assets ready
npm install --prefix ./assets
npm run deploy --prefix ./assets
MIX_ENV=prod mix phx.digest
# run the app on port 4000 and 4040
MIX_ENV=prod mix phx.server

14 — Build a deployment shell script

Now that you have an app running on your custom domain, with a database and SSL, you could decide that it’s good enough and find a quick hack to keep it running, like running Phoenix in the background or manually deploying releases.

vim deploy.sh
#!/bin/bash
set -e
# Update to latest version of code
cd /home/<YOUR_USERNAME>/my_app
git fetch
git reset --hard origin/main
mix deps.get --only prod
# Optional CI steps (mix test requires your test config to work on the server)
mix test
# mix credo --strict (commented out, just another example step)
# Set mix_env so subsequent mix steps don't need to specify it
export MIX_ENV=prod
# Build phase
npm install --prefix ./assets
npm run deploy --prefix ./assets
mix phx.digest
# Identify the currently running release
current_release=$(ls ../releases | sort -nr | head -n 1)
now_in_unix_seconds=$(date +'%s')
if [[ $current_release == '' ]]; then current_release=$now_in_unix_seconds; fi
# Create release
mix release --path ../releases/${now_in_unix_seconds}
# Get the HTTP_PORT variable from the currently running release
source ../releases/${current_release}/releases/0.1.0/env.sh
if [[ $HTTP_PORT == '4000' ]]
then
  http=4001
  https=4041
  old_port=4000
else
  http=4000
  https=4040
  old_port=4001
fi
# Put env vars with the ports to forward to, and set non-conflicting node name
echo "export HTTP_PORT=${http}" >> ../releases/${now_in_unix_seconds}/releases/0.1.0/env.sh
echo "export HTTPS_PORT=${https}" >> ../releases/${now_in_unix_seconds}/releases/0.1.0/env.sh
echo "export RELEASE_NAME=${http}" >> ../releases/${now_in_unix_seconds}/releases/0.1.0/env.sh
# Set the release to the new version
rm ../env_vars || true
touch ../env_vars
echo "RELEASE=${now_in_unix_seconds}" >> ../env_vars
# Run migrations
mix ecto.migrate
# Boot the new version of the app
sudo systemctl start my_app@${http}
# Wait for the new version to boot
until curl --output /dev/null --silent --head --fail localhost:${http}; do
echo 'Waiting for app to boot...'
sleep 1
done
# Switch forwarding of ports 443 and 80 to the ones the new app is listening on
sudo iptables -t nat -R PREROUTING 1 -p tcp --dport 80 -j REDIRECT --to-port ${http}
sudo iptables -t nat -R PREROUTING 2 -p tcp --dport 443 -j REDIRECT --to-port ${https}
# Stop the old version
sudo systemctl stop my_app@${old_port}
# Just in case the old version was started by systemd after a server
# reboot, also stop the server_reboot version
sudo systemctl stop my_app@server_reboot
echo 'Deployed!'
chmod u+x deploy.sh
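The no-downtime swap in deploy.sh hinges on alternating between two port pairs (4000/4040 and 4001/4041). Pulled out into a hypothetical helper function, the core logic is just:

```shell
# toggle_ports: given the HTTP port of the currently running release,
# print the HTTP and HTTPS ports the next release should bind to.
# Mirrors the if/else block in deploy.sh.
toggle_ports() {
  if [ "$1" = "4000" ]; then
    echo "4001 4041"
  else
    echo "4000 4040"
  fi
}

toggle_ports 4000   # -> 4001 4041
toggle_ports 4001   # -> 4000 4040
```

Each deploy boots the new release on the idle pair, waits until it responds, and only then repoints iptables — so there is never a moment with nothing listening behind ports 80 and 443.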

15 — Create systemd services

Before we can actually run our deploy script, we need to create a systemd service that starts up the app (which basically just starts the release like <release_name>/bin/my_app start). The deploy.sh script we just created expects this service to be available — it calls it when it says sudo systemctl start my_app@${http}.

sudo vim /etc/systemd/system/my_app@.service
sudo systemctl enable my_app@server_reboot
sudo systemctl daemon-reload
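The unit file itself isn't reproduced above. A minimal template-unit sketch is below — the paths, user, and options are assumptions inferred from how deploy.sh uses the service (it reads the RELEASE timestamp that deploy.sh writes into ../env_vars, and the @ makes it a template so two instances, one per port, can run side by side during a deploy):

```ini
# /etc/systemd/system/my_app@.service -- a sketch, not the exact unit
[Unit]
Description=My App

[Service]
Type=simple
User=<YOUR_USERNAME>
# deploy.sh writes RELEASE=<timestamp> into this file
EnvironmentFile=/home/<YOUR_USERNAME>/env_vars
WorkingDirectory=/home/<YOUR_USERNAME>
ExecStart=/home/<YOUR_USERNAME>/releases/${RELEASE}/bin/my_app start
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The instance name after the @ (a port number, or server_reboot) is just a label here; it lets systemctl start/stop each running copy independently.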

16 — Deploy the app

We’ve come a long way and done a lot of work! We’re finally ready for that sweet sweet payout: running the deploy script and watching the app come to life.

ssh my_app ./deploy.sh

To really see the magic, commit and push a change locally and then run the command again, and you’ll see it gracefully swap over to the new version without any downtime 🎉🎉🎉.

If you’d like to see the Phoenix live dashboard, just add :prod to the list with [:dev, :test] at the bottom of your router.ex and deploy the app again.

17 — Attach to journald to watch logs, and configure to trim

Logs aren’t the most exciting part of building an app, but you’ll be grateful for them when you’re troubleshooting tough bugs. Because the deploy script uses a systemd service to actually run the command to start the release, standard output is routed to journald which stores the logs.

ssh my_app journalctl -f
ssh my_app journalctl -n 500
ssh my_app journalctl -n 50 -u my_app@4001
Example output:

Nov 03 21:43:33 instance-1 systemd[1]: Stopped My App.

By default the journal can slowly eat your disk, so cap its size by adding this line to /etc/systemd/journald.conf and restarting the service:

SystemMaxUse=4G
sudo systemctl restart systemd-journald.service

You made it! 🙌

Congratulations, you’re now armed with a fully loaded Phoenix app running on free hardware without the compromises. You can deploy new versions of your code with a single command, achieve zero-downtime deploys without hot code reloading, and tap into the database and logs from your development machine with ease.

Bonus 1: Secure your data against the possibility of eventual hardware failure by creating another Google account with a free instance and setting up automated backups using cron

I’m not going to go into exhaustive detail here, and this is not a sophisticated approach, but it can be a starting point.

sudo apt install postgresql postgresql-contrib

Create a ~/.pgpass file on the backup instance (and chmod 600 it) so pg_dump can authenticate without prompting:

#hostname:port:database:username:password
<server IP address>:5432:my_app:postgres:<postgres password>

Then test a manual dump:

pg_dump -U postgres -h <server IP address> -p 5432 my_app > ~/my_app/my_app_hourly.bak
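To run that dump automatically, add an entry to the backup machine's crontab (crontab -e). A sketch of an hourly schedule, reusing the command above:

```
0 * * * * pg_dump -U postgres -h <server IP address> -p 5432 my_app > ~/my_app/my_app_hourly.bak
```

This overwrites a single file each hour; if you want point-in-time history, write to a filename that includes a date instead.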

Bonus 2: Rollback script

Here’s a rollback script, you can put it in your home directory on the server next to deploy.sh and run it the same way: ssh my_app ./rollback.sh.
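The script body isn't reproduced here, but its core job is picking the release that was live before the current one. Given the naming scheme deploy.sh uses (Unix-timestamp directories under ~/releases), that selection step can be sketched as follows — RELEASES_DIR is a stand-in for the releases directory on the server:

```shell
# List release directories (named by Unix timestamp) newest-first and
# take the second entry -- that's the release to roll back to.
RELEASES_DIR="${RELEASES_DIR:-$HOME/releases}"
previous_release=$(ls "$RELEASES_DIR" 2>/dev/null | sort -nr | sed -n '2p')
echo "Rolling back to: ${previous_release:-nothing-found}"
```

From there a rollback mirrors deploy.sh in reverse: write the previous RELEASE into ../env_vars, start the old release's systemd instance, repoint the two iptables PREROUTING rules at its ports, then stop the newer release.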

chmod u+x rollback.sh
