Chapter 11: Deploying APIs

Design and Build Great Web APIs — by Mike Amundsen

Aditi Lonhari
7 min read · Jul 1, 2022

Now that we’ve completed our API testing and added OAuth support for security, it’s time to package, build, and deploy our API project onto a public server where others can use it.

The Basics of Deployment Pipelines

One of the most challenging elements of designing and building APIs is the act of deploying the completed work into production. In many organizations, this “release into production” step is so fraught with danger that they have a special set of rituals established to deal with it.

Releasing can be tough. There are lots of reasons for this, most of them historical, but it all boils down to some basic issues:

Too much time passes between releases.
Writing code and building software is a pretty detailed and exacting practice. Small changes can have big impacts on the system you’re working on. And as time passes, every software system accumulates more data, which affects performance and can result in new, unexpected behaviors. When the code for new features and bug fixes “sits around” for a while, the tests used to validate them become less reliable, and that means the release package itself is less likely to succeed.

Most release packages are too big.
Like time, size affects release quality. The more changes in a single release, the less likely that release is to succeed. And if the release into production fails for some reason, the more changes in the release, the harder it can be to discover which of those changes (or which combination of those changes) are the cause of the failure.

There are too many variables to control for each release.
The number of changes in the release is just one variable in the process. Making sure all the tests were run on the same hardware as production, using the same version of the operating system, and using the same edition of the software framework, libraries, and utilities can be difficult to achieve. Teams often spend hours or days just re-creating a parallel production environment to run tests again before committing the changes to production, and still success is not guaranteed.

The Role of DevOps

How can companies improve their chances of success when releasing code into production? They tackle the three issues mentioned earlier with a single approach that combines the skills of both software development and IT operations. The name for this approach is DevOps (for development and operations).

The aim of DevOps is to reduce the time it takes to build and release changes into the system. DevOps does this by encouraging teams to create smaller release packages, release them more often, and automate as much of the process as possible to reduce the chances of variability.

Three practices cover the role of DevOps in deploying your app:

  • Continuous integration
  • Continuous delivery
  • Continuous deployment

They form a kind of ladder of DevOps maturity or advancement. Each rung of that ladder has a set of goals as well as a set of costs or challenges.

Continuous Integration

The first step on the ladder to more successful and reliable deployments is continuous integration. At this stage, everyone on the team adopts the practice of checking the code into the repository often. This means merging any changes into the master branch (or main trunk) of the repo. Those check-ins should kick off automated tests too.

By checking in your changes often and running automated tests for every check-in, you end up validating the project quite a few times and getting immediate success/fail feedback on each small set of changes. This means you catch problems early, when they’re easier to fix. Using scripted/automated tests means your tests are more consistent and the results are more reliable.
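As a concrete illustration, the commands a check-in might trigger could be as simple as the sketch below, assuming a Node-based project like the one in this book whose tests are wired to npm test (the repository URL and project name are placeholders, not a specific CI product):

    # Hypothetical job run automatically for every check-in to the main trunk.
    git clone https://github.com/example/onboarding-api.git   # fetch the freshly updated repository
    cd onboarding-api
    npm ci        # install the exact dependency versions recorded in the lockfile
    npm test      # run the scripted tests; a non-zero exit code fails the check-in

Because exactly the same commands run for every check-in, the feedback is consistent and any failure points at a small, recent set of changes.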

At this first step, companies still deploy to a final staging server and then on to production using manual processes and checklists. Relying on manual deployment like this has drawbacks, but because building and testing the code package is now automated, the remaining manual work is greatly diminished, and that improves the odds of success for the release.

Continuous integration handles the coding, check-in, and testing steps.

Continuous Delivery

The second step on the ladder is continuous delivery. At this point, the process of releasing into final staging is automated through scripting. That means deployment is reduced to making some selections (or configurations) and pressing a button. Now, along with scripted testing from continuous integration you also have scripted deployment to the staging level.

By scripting the deployment process, you greatly reduce the variability from one release to the next. The deployment script makes sure the proper baseline hardware and operating systems are used for the build, along with all the correct libraries, frameworks, and other software. The script also validates any configuration data, confirms integration and connection to other services, and runs a quick set of tests to make sure everything was deployed as expected. Even better, good deployment scripting tools can detect when a deployment did not go as planned, back out all changes, and return the system to the last known working version.
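Stripped down to its essentials, the shape of such a script might look like the minimal sketch below. The app name, staging remote, and health-check URL are placeholders, and a real script would do far more validation than this.

    #!/usr/bin/env bash
    # Hypothetical continuous delivery script: rebuild, test, push to staging,
    # smoke-test the result, and roll back on failure.
    set -e                                 # stop immediately if any step fails

    npm ci && npm test                     # rebuild and re-test from a clean state
    git push staging main                  # deploy to the staging environment (a pre-configured git remote)

    # Quick post-deployment check; if the service is not answering, restore the last working release.
    if ! curl -fsS "https://staging.example.com/ping" > /dev/null; then
      echo "Smoke test failed; rolling back to the last working release"
      heroku releases:rollback --app onboarding-api-staging
      exit 1
    fi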

That’s quite a bit of work to automate, but it’s not at all impossible. Lots of products are available today to help DevOps teams get this done.

At the continuous delivery stage, the deployment is scripted but not automatic. Deploying into final production is initiated by someone somewhere pressing the Start button or launching a script.

Continuous delivery handles the coding, check-in, testing, and staging steps.

Continuous Deployment

The third rung on our DevOps ladder is called continuous deployment. At this stage, we’re not just scripting testing and staging deployment. We’re also making deployment into production automatic. That means making the entire process of testing and deploying your app completely driven by scripts and other tooling without the need for a human to “press a button.” Typically this is done by setting up your source code check-in process to handle the entire test-and-deploy process.

The test-and-deploy process usually looks like this:

  • A developer checks code into the repository.
  • That check-in kicks off a series of local tests.
  • If the tests pass, the code is built into a release package.
  • If the build succeeds, the build is deployed to a staging server.
  • If the staging server deployment succeeds, another set of integration tests is run.
  • If the integration tests succeed, the build is deployed on a production server.
  • If the production deployment succeeds, the job is done.

Of course, at each step there is a possibility of the process failing (failed local tests, failed build, failed staging deployment, and so on). And, in some of the failures, the update needs to be “backed out” or reversed. This is especially true for the production deployment. When that fails, the automated system needs to restore the “old” build into production in order to maintain stability.
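Condensed into a single run, this is essentially the staging script sketched earlier extended all the way through production, with the rollback handled automatically. As before, every name here is a placeholder (including the test:integration script and the health-check URL), and in practice each stage usually runs on a dedicated build server rather than in one shell script.

    #!/usr/bin/env bash
    # Hypothetical end-to-end continuous deployment run, triggered by a check-in.
    set -e                                  # abort the pipeline if any step fails

    npm ci && npm test                      # steps 1 and 2: the check-in has landed; run the local tests
    npm pack                                # step 3: build a release package
    git push staging main                   # step 4: deploy the build to the staging server
    npm run test:integration                # step 5: run the integration tests against staging
    git push production main                # step 6: deploy the same code to production

    # step 7: verify production; if it is not healthy, restore the previous release.
    if ! curl -fsS "https://api.example.com/ping" > /dev/null; then
      heroku releases:rollback --app onboarding-api
      exit 1
    fi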

That’s quite a bit of work, and a number of tools and services are available to handle it all. Your company may have the tools and processes in place to support continuous deployment. You’ll need to check with your team to learn more about where they are on this ladder of DevOps maturity.

Deploying with Heroku

To automate on-demand deployment we need to manage lots of seemingly tedious details, such as making sure the app is deployed using the right hardware platform, the correct version of the host operating system, the proper web framework and library dependencies, and so forth. It can be a very daunting task to get that all correct, especially for developers who don’t already know this level of detail on their own systems, let alone the systems running in production.

Luckily, thousands of developers and systems operators before us have boiled the process of continuous delivery down to a stable set of tasks to perform for each deployment. Even better for us, we can take advantage of this accumulated knowledge by using tools and platforms purpose-built for solving this problem. The one we’ll be using is the Heroku cloud platform.

The Heroku Platform

Heroku is a cloud-based Platform-as-a-Service (PaaS). It’s designed to host Internet-based applications and has lots of tools and services to make that kind of work safe, easy, and reliable. Started in 2007 by a handful of enterprising Ruby language developers, Heroku has transformed into a company that supports most of the major web programming languages and frameworks.

One of the key elements of Heroku’s technology is its use of what it calls dynos. These dynos are small operating-system units based on Linux. They’re meant to mimic a full-blown instance of Linux, but they do it in a very cost-effective and resource-efficient way. Each app you deploy on Heroku runs on one or more dynos. Your app is contained within these dynos, which are easy to build, start, stop, and even duplicate in real time.

Heroku’s dynos are sometimes just called containers. Other popular container-based platforms include Docker, Apache Mesos, and CoreOS. They all work on the same idea: that deployment can be safer and easier if you base your releases on lightweight Linux containers.
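Once an app is running, the Heroku command-line tool (introduced in the next section) can list and manipulate its dynos directly. The app name below is a placeholder:

    heroku ps --app onboarding-api               # list the dynos the app is currently running on
    heroku ps:scale web=2 --app onboarding-api   # "duplicate" the app by running a second web dyno
    heroku ps:restart --app onboarding-api       # stop and restart the app's dynos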

Git-Based Deployment on Heroku

There are a couple of different ways to deploy to the Heroku platform. One easy and reliable way is to use your Git repository as your deployment package along with Heroku’s command-line tool. This approach takes advantage of a number of built-in Heroku deployment tools and does everything from the command line, which makes it easy to include in any other scripts you use to customize your own DevOps deployment process.
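Assuming the Heroku CLI is already installed and your project is a Git repository, a first deployment can be as short as the session below. The app name is a placeholder, and the branch you push may be master or main depending on how your repository is set up.

    heroku login                      # authenticate the Heroku command-line tool
    heroku create onboarding-api      # create the app and add a "heroku" git remote to the repo
    git push heroku main              # push the repo; Heroku builds and deploys it onto a dyno
    heroku open                       # open the newly deployed app in a browser
    heroku logs --tail                # watch the build and runtime logs

After that, every subsequent release is just another git push to the heroku remote, which is what makes this approach easy to fold into the delivery scripts described earlier.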

To learn more about Heroku and its implementation details, visit the website at https://www.heroku.com/.
