
Atomic Deployments from Scratch

Years ago, a mentor of mine introduced me to a Ruby-based server automation tool called Capistrano, and I immediately fell in love. Ready to deploy a new release? Run git push && cap production deploy, then you’re done. Even better, Capistrano introduced me to what’s colloquially known as “atomic deployments” — checking out a full copy of the codebase and using symlinks to point to the new release for a zero-downtime deployment — which has since been my gold standard for deployment methods.

I continued to use Capistrano for a few years, until I started working on projects (and teams) large enough to justify a proper continuous delivery (CD) tool. Suddenly, building the application locally and pushing up with Capistrano became more complicated; at the same time, services like DeployBot began offering atomic deployments right out of the box, so it was easy to get up and running.

What about services that don’t offer atomic deployments by default? I recently deployed a Laravel application via Codeship, where atomic deployments to a VPS become more complicated; here’s how I approached it.

What is an atomic deployment?

Before we dig into how, it’s helpful to understand why we might want to use atomic deployments for our applications. Generally speaking, atomic deployments offer two benefits over a traditional “pull down anything that’s changed”-type deployment:

  1. Atomic deployments offer zero (or near-zero) downtime between releases; the “current” symlink isn’t updated until the new release is ready to go live.
  2. Atomic deployments make it easier to roll back to an earlier release, as the “current” symlink can quickly be updated to point to one of the previous releases.

The structure of an atomic deployment

The specific structure varies between platforms and implementations, but generally speaking, an atomic deployment has three components:

  1. A number of releases, each containing a complete checkout of the built application.
  2. Some number of shared resources, which are linked to the releases using symlinks.
  3. A “current” symlink, which acts as [part of] the web root.

For a practical example, let’s take a look at a typical Capistrano-style setup:
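Here’s roughly what that looks like on disk (the older timestamp is made up for the example):

```
/path/to/app
├── current -> /path/to/app/releases/1525148402
├── releases
│   ├── 1525148401
│   └── 1525148402
└── shared
    ├── .env
    └── storage
```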

In that directory listing, we have two copies of the codebase within the releases/ directory, each living within a directory named after the Unix timestamp of when the release was deployed.

The shared/ directory, meanwhile, contains things that should remain constant between releases — in Laravel’s case, this is typically the storage/ directory and the .env file.

Finally, the current symlink points at the latest release (releases/1525148402). Within our nginx configuration, we would set our application web root to /path/to/app/current/public, so the configuration is always using the public/ directory of the current release.

Under this model, when a new release is deployed, the codebase will be checked out into a new, timestamped directory within releases/. Next, storage/ and .env would be symlinked within the new release, and we may run any necessary database migrations. Finally, once the new release is ready, we’ll update the current symlink target and restart the web server in order for the release to go live.

If it turns out the new release is broken, we can pretty easily update the current symlink again to point to a previous, known working release. Pretty cool, huh?
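In shell terms, a rollback is little more than re-pointing the symlink and reloading the web server (the paths and the older timestamp here are illustrative):

```sh
# Point "current" back at a known-good release, then reload nginx.
# -sfn replaces the existing symlink rather than creating a link inside it.
ln -sfn /path/to/app/releases/1525148401 /path/to/app/current
sudo systemctl reload nginx
```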

Scripting an atomic deployment

Now that we’ve covered some of the benefits of atomic deployments (as well as how they’re physically structured), let’s talk about CD providers; unfortunately, not every tool offers atomic deployments out of the box, which means tools like Codeship and Jenkins (to name a few) may leave you to do a bit of manual scripting. Fear not, friend, for I’ve done the hard work for you!

Generally speaking, continuous integration (CI) and continuous delivery (CD) providers will break builds into two distinct steps:

  1. Build the application, ensuring that all necessary tests pass (CI)
  2. Deploy the new release (CD)

Working through the CI phase is a whole topic in itself, but let’s imagine you’ve set up a CI pipeline with the provider of your choice, and now you’re filling out the “when the build has succeeded, what should we do with it?” prompt.

In general, the process is going to look something like this:

  1. Ensure the application is built in a way that’s production ready; if you were previously including development dependencies for testing, you’ll want to remove those and install only what’s needed on production.
  2. Create a tarball of the release and transfer it to the production server(s).
  3. Connect to the production server(s), extract the tarball into your releases/ directory, create any necessary symlinks, and run any additional steps.
  4. Update the current symlink and restart the web server.

For the aforementioned Laravel application on Codeship, my atomic deployment script looks something like this (don’t worry, we’ll break it down):
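The sketch below reconstructs that script from the walkthrough in the sections that follow; treat the step numbering, Composer flags, and Artisan commands as illustrative rather than exact:

```sh
#!/bin/sh
#
# Deploy the application to the given environment.
#
# Usage: sh bin/deploy.sh <environment>
#
# Expects the CD platform to provide the following environment variables:
#   TARGET_SERVER - hostname or IP of the server we're deploying to
#   TARGET_USER   - the user to connect as (e.g. "deploy")
#   TARGET_DIR    - the application root on the server (e.g. "/var/www/myapp")

set -e

if [ -z "$1" ]; then
    echo "Usage: sh bin/deploy.sh <environment>" >&2
    exit 1
fi

ENVIRONMENT="$1"          # e.g. "production"; the CD platform supplies the matching TARGET_* values
TIMESTAMP="$(date +%s)"   # doubles as the new release's directory name

# 1. Swap development dependencies for a production-only vendor/ directory.
rm -rf vendor
composer install --no-dev --no-interaction

# 2. Remove files we don't need on production.
rm -rf tests phpunit.xml.dist

# 3. Package the release as a timestamped tarball.
tar -czf "${TIMESTAMP}.tgz" --exclude="./.git" --exclude="./${TIMESTAMP}.tgz" .

# 4. Copy the tarball to the target server...
scp "${TIMESTAMP}.tgz" "${TARGET_USER}@${TARGET_SERVER}:${TARGET_DIR}/releases/"

# 5. ...and remove the local copy.
rm "${TIMESTAMP}.tgz"

# Steps 6-11 run on the target server.
ssh "${TARGET_USER}@${TARGET_SERVER}" /bin/sh <<EOF
set -e
cd "${TARGET_DIR}/releases"

# 6. Extract the release into its own timestamped directory.
mkdir "${TIMESTAMP}"
tar -xzf "${TIMESTAMP}.tgz" -C "${TIMESTAMP}"
rm "${TIMESTAMP}.tgz"

# 7. Link shared resources into the new release.
rm -rf "${TIMESTAMP}/storage"
ln -s "${TARGET_DIR}/shared/storage" "${TIMESTAMP}/storage"
ln -s "${TARGET_DIR}/shared/.env" "${TIMESTAMP}/.env"

# 8. Run any pending database migrations.
cd "${TARGET_DIR}/releases/${TIMESTAMP}"
php artisan migrate --force

# 9. Refresh the configuration and view caches.
php artisan config:cache
php artisan view:cache

# 10. Point the "current" symlink at the new release and reload the web server.
ln -sfn "${TARGET_DIR}/releases/${TIMESTAMP}" "${TARGET_DIR}/current"
sudo systemctl reload nginx

# 11. Keep only the five most recent releases (GNU head syntax).
cd "${TARGET_DIR}/releases"
ls -1d */ | sort | head -n -5 | xargs -r rm -rf
EOF
```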

For this application, I chose to keep the deployment script within the application codebase itself (in bin/deploy.sh), rather than putting it all in Codeship. It’s a bit of personal preference, but I’d rather the script be versioned with the rest of the app rather than thrown into a <textarea> in Codeship. Once my CI pipeline passes, I can call sh bin/deploy.sh <environment> to deploy!

Preparing for atomic deployments

Before we can perform atomic deployments, there are a few things we’ll need to do on our target server(s):

  1. Create the directory structure
  2. Create the deployment user w/ SSH key
  3. Grant the deployment user the ability to reload the web server

The way you go about this will depend on your server environment, but I typically like to run the app under a deploy (or similar) user who only has access to the app directory (in this example, /var/www/myapp):
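On an Ubuntu-style server, that setup might look something like this (the user name and paths are just the examples used throughout this post):

```sh
# Create a "deploy" user whose home directory is the application root.
sudo useradd --create-home --home-dir /var/www/myapp --shell /bin/bash deploy

# Create the directory structure the deployments expect.
sudo mkdir -p /var/www/myapp/releases /var/www/myapp/shared/storage
sudo touch /var/www/myapp/shared/.env
sudo chown -R deploy:deploy /var/www/myapp
```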

Once the user has been created, we need to give them the public SSH key that corresponds to the private SSH key used by our CD platform. If the platform doesn’t provide a public key for you, you may need to generate a new SSH key and store the private key in an environment variable in the CD environment.

With the public key in hand, add it to the deploy user’s ~/.ssh/authorized_keys file. This will allow the CD platform to SSH into the server, which we’ll need later.
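For example (the key below is a placeholder, not a real key):

```sh
# As the deploy user on the server:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-ed25519 AAAA...placeholder... codeship-deploy" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```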

The last step is to add limited privileges to the deploy user — if we’re updating the document root (via symlink), our web server needs to be reloaded.
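A minimal sudoers entry, added via visudo (the systemctl path may differ between distributions), could look like this:

```
# /etc/sudoers.d/deploy -- allow the deploy user to reload nginx, and nothing else.
deploy ALL=(ALL) NOPASSWD: /bin/systemctl reload nginx
```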

This lets the deploy user run sudo systemctl reload nginx without granting access to any other commands — this corresponds to step 10 in our deployment script, and should be the last thing we do to make our new version live.

Environment variables

You’ll notice that my script starts with a comment outlining a few necessary environment variables ($TARGET_SERVER, $TARGET_DIR, and $TARGET_USER); these let me change where (and as whom) the app gets deployed without hard-coding those values into my deployment script. In the case of the setup work we did in the last section, $TARGET_DIR and $TARGET_USER will be /var/www/myapp and deploy, respectively, while $TARGET_SERVER will vary based on the environment we’re deploying to.

I’m also creating the $TIMESTAMP variable, which captures the Unix timestamp of when I first started running this script. This variable will end up being the directory name of my new release.
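Using the example values from the previous section, the relevant variables might look like this (the hostname is hypothetical, and in practice these live in the CD platform’s environment settings rather than in the script):

```sh
# Example values only; a real deployment would define these per environment.
TARGET_USER="deploy"
TARGET_DIR="/var/www/myapp"
TARGET_SERVER="myapp-production.example.com"   # hypothetical hostname

# Captured once at the start of the script; doubles as the release directory name.
TIMESTAMP="$(date +%s)"
```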

Building the archive

Next, my script removes the vendor/ directory (which currently contains development dependencies like PHPUnit), then runs composer install --no-dev to pull in only what’s needed for production.

The script also removes files and directories that won’t be necessary on production, like tests/, phpunit.xml.dist, etc. This step is optional, but it can help reduce the size of the tarball and thus speed up your deployments.

Next, I create an archive of the application, using the $TIMESTAMP variable to determine the archive name (e.g. 1525148402.tgz). This tarball will then be copied to the $TARGET_SERVER via scp, and our local copy of the tarball removed.
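In the sketch above, this build-and-ship phase boils down to a handful of commands (flags are illustrative):

```sh
# Re-install dependencies without dev packages, drop test scaffolding.
rm -rf vendor
composer install --no-dev --no-interaction
rm -rf tests phpunit.xml.dist

# Package, ship, and tidy up locally.
tar -czf "${TIMESTAMP}.tgz" --exclude="./.git" --exclude="./${TIMESTAMP}.tgz" .
scp "${TIMESTAMP}.tgz" "${TARGET_USER}@${TARGET_SERVER}:${TARGET_DIR}/releases/"
rm "${TIMESTAMP}.tgz"
```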

Preparing the release

Now that the release has been copied to the production server(s), we need to do a few things to get it ready to go live:

  1. Extract the tarball to the releases/ directory
  2. Symlink any shared resources between the new release and the shared/ directory; in this case, we’re symlinking the storage/ directory and .env file.
  3. Run a few Artisan commands to get the release ready: perform any pending database migrations, refresh the configuration and view caches, etc.

Once the release is ready to go, the last thing we need to do is update the current symlink to point to the new release, then restart the web server. Assuming everything went smoothly, our new release should be up and running with little-to-no downtime!
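On the server, those steps look roughly like this excerpt from the sketch above (the exact Artisan commands will vary by application):

```sh
# Extract the new release into its own timestamped directory.
cd "${TARGET_DIR}/releases"
mkdir "${TIMESTAMP}" && tar -xzf "${TIMESTAMP}.tgz" -C "${TIMESTAMP}"

# Share storage/ and .env across releases.
rm -rf "${TIMESTAMP}/storage"
ln -s "${TARGET_DIR}/shared/storage" "${TIMESTAMP}/storage"
ln -s "${TARGET_DIR}/shared/.env" "${TIMESTAMP}/.env"

# Get the release ready, then flip the switch.
cd "${TARGET_DIR}/releases/${TIMESTAMP}"
php artisan migrate --force
php artisan config:cache
php artisan view:cache

ln -sfn "${TARGET_DIR}/releases/${TIMESTAMP}" "${TARGET_DIR}/current"
sudo systemctl reload nginx
```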

Cleaning up after a release

As we (confidently) deploy over and over, with the ability to ship code as soon as it’s ready with zero downtime, we’ll quickly build up a library of releases on our server. While it’s great to have a couple of releases we can roll back to if need be, we probably don’t need or want the entire release history clogging up our production server(s).

That’s where the last step of the deployment script comes in: after successfully updating the current symlink, our script will automatically sort the [timestamped] directories in the releases/ directory and keep the five latest (current + 4 previous), removing the rest. Eagle-eyed readers may recall me writing about this over on the Engineering @ Growella blog.
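The cleanup itself can be a one-liner; this version assumes GNU coreutils on the server:

```sh
# Sort the timestamped release directories, keep the newest five, delete the rest.
cd "${TARGET_DIR}/releases"
ls -1d */ | sort | head -n -5 | xargs -r rm -rf
```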

Wrapping Up

Hopefully this has given you a high-level look at how to start using atomic deployments for your applications. Every app is a little different and there are lots of great tools to handle this for you, but it’s entirely possible to deploy atomically using free tools like Travis CI, GitLab’s CI/CD pipelines, and more!


2 Comments

  1. Fairuz WAN ISMAIL

    How about doing a rollback of database migrations?

    The app can end up in a messed-up state if we only roll back the application.

    • Felipe Alvarado

      With Laravel migrations, you can roll back the database as you roll back the environment.
