Continuous Deployment For Rust Applications
This article is a sample from Zero To Production In Rust, a hands-on introduction to backend development in Rust.
You can get a copy of the book at zero2prod.com.
Chapter #5 - Going Live
- We Must Talk About Deployments
- Choosing Our Tools
- A Dockerfile For Our Application
- Deploy To DigitalOcean Apps Platform
- Summary
We have a working prototype of our newsletter API - it is now time to take it live.
We will learn how to package our Rust application as a Docker container to deploy it on DigitalOcean's App Platform.
At the end of the chapter we will have a Continuous Deployment (CD) pipeline: every commit to the main
branch will automatically trigger the deployment of the latest version of the application to our users.
Discuss the article on HackerNews or r/rust.
1. We Must Talk About Deployments
Everybody loves to talk about how important it is to deploy software to production as often as possible (and I put myself in that bunch!).
"Get customer feedback early!"
"Ship often and iterate on the product!"
But nobody shows you how.
Pick a random book on web development or an introduction to framework XYZ.
Most will not dedicate more than a paragraph to the topic of deployments.
A few will have a chapter about it - usually towards the end of the book, the part you never get to actually read.
A handful actually give it the space it deserves, as early as they reasonably can.
Why?
Because deployments are (still) a messy business.
There are many vendors, most are not straightforward to use and what is considered state-of-the-art or best practice tends to change really quickly [1].
That is why most authors steer away from the topic: it takes many pages and it is painful to write something down only to realise, one or two years later, that it is already out of date.
Nonetheless deployments are a prominent concern in the daily life of a software engineer - e.g. it is difficult to talk about database schema migrations, domain validation and API evolution without taking into account your deployment process.
We simply cannot ignore the topic in a book called Zero To Production.
2. Choosing Our Tools
The purpose of this chapter is to get you to experience, first hand, what it means to actually deploy on every commit to your main
branch.
That is why we are talking about deployment as early as chapter five: to give you the chance to practise this muscle for the rest of the book, as you would if this were a real commercial project.
We are particularly interested, in fact, in how the engineering practice of continuous deployment influences our design choices and development habits.
At the same time, building the perfect continuous deployment pipeline is not the focus of the book - it deserves a book on its own, probably a whole company.
We have to be pragmatic and strike a balance between intrinsic usefulness (i.e. learn a tool that is valued in the industry) and developer experience.
And even if we spent the time to hack together the "best" setup, you are still likely to end up choosing different tools and different vendors due to the specific constraints of your organisation.
What matters is the underlying philosophy and getting you to try continuous deployment as a practice.
2.1. Virtualisation: Docker
Our local development environment and our production environment serve two very different purposes.
Browsers, IDEs, our music playlists - they can co-exist on our local machine. It is a multi-purpose workstation.
Production environments, instead, have a much narrower focus: running our software to make it available to our users. Anything that is not strictly related to that goal is either a waste of resources, at best, or a security liability, at worst.
This discrepancy has historically made deployments fairly troublesome, leading to the now meme-fied complaint "It works on my machine!".
It is not enough to copy the source code to our production servers.
Our software is likely to make assumptions on the capabilities exposed by the underlying operating system (e.g. a native Windows application will not run on Linux), on the availability of other software on the same machine (e.g. a certain version of the Python interpreter) or on its configuration (e.g. do I have root permissions?).
Even if we started with two identical environments we would, over time, run into trouble as versions drift and subtle inconsistencies come up to haunt our nights and weekends.
The easiest way to ensure that our software runs correctly is to tightly control the environment it is executed in.
This is the fundamental idea behind virtualisation technology: what if, instead of shipping code to production, you could ship a self-contained environment that included your application?!
It would work great for both sides: fewer Friday-night surprises for you, the developer; a consistent abstraction to build on top of for those in charge of the production infrastructure.
Bonus points if the environment itself can be specified as code to ensure reproducibility.
The nice thing about virtualisation is that it exists and it has been mainstream for almost a decade now.
As for most things in technology, you have a few options to choose from depending on your needs: virtual machines, containers (e.g. Docker) and a few others (e.g. Firecracker).
We will go with the mainstream and ubiquitous option - Docker containers.
2.2. Hosting: DigitalOcean
AWS, Google Cloud, Azure, Digital Ocean, Clever Cloud, Heroku, Qovery...
The list of vendors you can pick from to host your software goes on and on.
People have made a successful business out of recommending the best cloud tailored to your specific needs and use cases - not my job (yet) or the purpose of this book.
We are looking for something that is easy to use (great developer experience, minimal unnecessary complexity) and fairly established.
In November 2020, the intersection of those two requirements seems to be Digital Ocean, in particular their newly launched App Platform proposition.
Disclaimer: Digital Ocean is not paying me to promote their services here.
3. A Dockerfile For Our Application
DigitalOcean's App Platform has native support for deploying containerised applications.
This is going to be our first task: we have to write a Dockerfile to build and execute our application as a Docker container.
3.1. Dockerfiles
A Dockerfile is a recipe for your application environment.
It is organised in layers: you start from a base image (usually an OS enriched with a programming language toolchain) and execute a series of commands (`COPY`, `RUN`, etc.), one after the other, to build the environment you need.
Let's have a look at the simplest possible Dockerfile for a Rust project:
# We use the latest Rust stable release as base image
FROM rust:1.59.0
# Let's switch our working directory to `app` (equivalent to `cd app`)
# The `app` folder will be created for us by Docker in case it does not
# exist already.
WORKDIR /app
# Install the required system dependencies for our linking configuration
RUN apt update && apt install lld clang -y
# Copy all files from our working environment to our Docker image
COPY . .
# Let's build our binary!
# We'll use the release profile to make it faaaast
RUN cargo build --release
# When `docker run` is executed, launch the binary!
ENTRYPOINT ["./target/release/zero2prod"]
Save it in a file named `Dockerfile` in the root directory of our git repository:
zero2prod/
.github/
migrations/
scripts/
src/
tests/
.gitignore
Cargo.lock
Cargo.toml
configuration.yaml
Dockerfile
The process of executing those commands to get an image is called building.
Using the Docker CLI:
# Build a docker image tagged as "zero2prod" according to the recipe
# specified in `Dockerfile`
docker build --tag zero2prod --file Dockerfile .
What does the `.` at the end of the command stand for?
3.2. Build Context
`docker build` generates an image starting from a recipe (the Dockerfile) and a build context.
You can picture the Docker image you are building as its own fully isolated environment.
The only point of contact between the image and your local machine are commands like `COPY` or `ADD` [2]: the build context determines what files on your host machine are visible inside the Docker container to `COPY` and its friends.
Using `.` we are telling Docker to use the current directory as the build context for this image; `COPY . app` will therefore copy all files from the current directory (including our source code!) into the `app` directory of our Docker image.
Using `.` as build context implies, for example, that Docker will not allow `COPY` to pull files from the parent directory, or from arbitrary paths on your machine, into the image.
You could use a different path or even a URL (!) as build context depending on your needs.
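For instance, Docker accepts a git repository URL as build context - a hypothetical invocation, with the repository coordinates as placeholders:
# Hypothetical: use a remote git repository as build context.
# Docker clones it and uses the repository root as the context.
docker build --tag zero2prod https://github.com/<YOUR USERNAME>/<YOUR REPOSITORY NAME>.git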
3.3. Sqlx Offline Mode
If you were eager enough, you might have already launched the build command... just to realise it doesn't work!
docker build --tag zero2prod --file Dockerfile .
# [...]
Step 4/5 : RUN cargo build --release
# [...]
error: error communicating with the server:
Cannot assign requested address (os error 99)
--> src/routes/subscriptions.rs:35:5
|
35 | / sqlx::query!(
36 | | r#"
37 | | INSERT INTO subscriptions (id, email, name, subscribed_at)
38 | | VALUES ($1, $2, $3, $4)
... |
43 | | Utc::now()
44 | | )
| |_____^
|
= note: this error originates in a macro
What is going on?
`sqlx` calls into our database at compile-time to ensure that all queries can be successfully executed considering the schemas of our tables.
When running `cargo build` inside our Docker image, though, `sqlx` fails to establish a connection with the database that the `DATABASE_URL` environment variable in the `.env` file points to.
How do we fix it?
We could allow our image to talk to a database running on our local machine at build time using the `--network` flag. This is the strategy we follow in our CI pipeline, given that we need the database anyway to run our integration tests.
Unfortunately it is somewhat troublesome to pull off for Docker builds due to how Docker networking is implemented on different operating systems (e.g. macOS) and it would significantly compromise how reproducible our builds are.
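On Linux, for reference, that strategy would look something like this - a sketch, assuming a Postgres instance listening on localhost and a `.env` file pointing at it:
# A sketch, not the approach we will take:
# share the host network at build time so that sqlx can reach
# the Postgres instance running on localhost (Linux-only behaviour).
docker build --network=host --tag zero2prod --file Dockerfile .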
A better option is to use the newly-introduced offline mode for `sqlx`.
Let's add the `offline` feature to `sqlx` in our `Cargo.toml`:
#! Cargo.toml
# [...]
# Using table-like toml syntax to avoid a super-long line!
[dependencies.sqlx]
version = "0.6"
default-features = false
features = [
"runtime-actix-rustls",
"macros",
"postgres",
"uuid",
"chrono",
"migrate",
"offline"
]
The next step relies on `sqlx`'s CLI. The command we are looking for is `sqlx prepare`. Let's look at its help message:
sqlx prepare --help
sqlx-prepare
Generate query metadata to support offline compile-time verification.
Saves metadata for all invocations of `query!` and related macros to
`sqlx-data.json` in the current directory, overwriting if needed.
During project compilation, the absence of the `DATABASE_URL` environment
variable or the presence of `SQLX_OFFLINE` will constrain the compile-time
verification to only read from the cached query metadata.
USAGE:
sqlx prepare [FLAGS] [-- <args>...]
ARGS:
<args>...
Arguments to be passed to `cargo rustc ...`
FLAGS:
--check
Run in 'check' mode. Exits with 0 if the query metadata is up-to-date.
Exits with 1 if the query metadata needs updating
In other words, `prepare` performs the same work that is usually done when `cargo build` is invoked, but it saves the outcome of those queries to a metadata file (`sqlx-data.json`) which can later be detected by `sqlx` itself and used to skip the queries altogether and perform an offline build.
Let's invoke it!
# It must be invoked as a cargo subcommand
# All options after `--` are passed to cargo itself
# We need to point it at our library since it contains
# all our SQL queries.
cargo sqlx prepare -- --lib
query data written to `sqlx-data.json` in the current directory;
please check this into version control
We will indeed commit the file to version control, as the command output suggests.
Let's set the `SQLX_OFFLINE` environment variable to `true` in our Dockerfile to force `sqlx` to look at the saved metadata instead of trying to query a live database:
FROM rust:1.59.0
WORKDIR /app
RUN apt update && apt install lld clang -y
COPY . .
ENV SQLX_OFFLINE true
RUN cargo build --release
ENTRYPOINT ["./target/release/zero2prod"]
Let's try again to build our Docker container:
docker build --tag zero2prod --file Dockerfile .
There should be no errors this time!
We have a problem though: how do we ensure that `sqlx-data.json` does not go out of sync (e.g. when the schema of our database changes or when we add new queries)?
We can use the `--check` flag in our CI pipeline to ensure that it stays up-to-date - check the updated pipeline definition in the book's GitHub repository as a reference.
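As a sketch, assuming a GitHub Actions pipeline with `sqlx-cli` installed and a Postgres instance reachable from the runner, the step could look like this (names and connection string are illustrative):
#! A hypothetical step in a GitHub Actions workflow
- name: Check that sqlx-data.json is up-to-date
  env:
    DATABASE_URL: postgres://postgres:password@localhost:5432/newsletter
  run: cargo sqlx prepare --check -- --lib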
3.4. Running An Image
When building our image we attached a tag to it, `zero2prod`:
docker build --tag zero2prod --file Dockerfile .
We can use the tag to refer to the image in other commands. In particular, to run it:
docker run zero2prod
`docker run` will trigger the execution of the command we specified in our `ENTRYPOINT` statement:
ENTRYPOINT ["./target/release/zero2prod"]
In our case, it will execute our binary therefore launching our API.
Let's launch our image then!
You should immediately see an error:
thread 'main' panicked at
'Failed to connect to Postgres:
Io(Os {
code: 99,
kind: AddrNotAvailable,
message: "Cannot assign requested address"
})'
This is coming from this line in our `main` function:
//! src/main.rs
//! [...]
#[tokio::main]
async fn main() -> std::io::Result<()> {
// [...]
let connection_pool = PgPool::connect(
&configuration.database.connection_string().expose_secret()
)
.await
.expect("Failed to connect to Postgres.");
// [...]
}
We can relax our requirements by using `connect_lazy` - it will only try to establish a connection when the pool is used for the first time.
//! src/main.rs
//! [...]
#[tokio::main]
async fn main() -> std::io::Result<()> {
// [...]
// No longer async, given that we don't actually try to connect!
let connection_pool = PgPool::connect_lazy(
&configuration.database.connection_string().expose_secret()
)
.expect("Failed to create Postgres connection pool.");
// [...]
}
We can now re-build the Docker image and run it again: you should immediately see a couple of log lines! Let's open another terminal and try to make a request to our health check endpoint:
curl http://127.0.0.1:8000/health_check
curl: (7) Failed to connect to 127.0.0.1 port 8000: Connection refused
Not great.
3.5. Networking
By default, Docker images do not expose their ports to the underlying host machine. We need to do it explicitly using the `-p` flag.
Let's kill our running image to launch it again using:
docker run -p 8000:8000 zero2prod
Trying to hit the health check endpoint will trigger the same error message.
We need to dig into our `main.rs` file to understand why:
//! src/main.rs
use zero2prod::configuration::get_configuration;
use zero2prod::startup::run;
use zero2prod::telemetry::{get_subscriber, init_subscriber};
use sqlx::postgres::PgPool;
use std::net::TcpListener;
#[tokio::main]
async fn main() -> std::io::Result<()> {
let subscriber = get_subscriber("zero2prod".into(), "info".into(), std::io::stdout);
init_subscriber(subscriber);
let configuration = get_configuration().expect("Failed to read configuration.");
let connection_pool = PgPool::connect_lazy(
&configuration.database.connection_string().expose_secret()
)
.expect("Failed to create Postgres connection pool.");
let address = format!("127.0.0.1:{}", configuration.application_port);
let listener = TcpListener::bind(address)?;
run(listener, connection_pool)?.await?;
Ok(())
}
We are using `127.0.0.1` as our host in `address` - we are instructing our application to only accept connections coming from the same machine.
However, we are firing a GET request to `/health_check` from the host machine, which is not seen as local by our Docker image, therefore triggering the `Connection refused` error we have just seen.
We need to use `0.0.0.0` as host to instruct our application to accept connections from any network interface, not just the local one.
We should be careful though: using `0.0.0.0` significantly increases the "audience" of our application, with some security implications.
The best way forward is to make the host portion of our `address` configurable - we will keep using `127.0.0.1` for our local development and set it to `0.0.0.0` in our Docker images.
3.6. Hierarchical Configuration
Our `Settings` struct currently looks like this:
//! src/configuration.rs
// [...]
#[derive(serde::Deserialize)]
pub struct Settings {
pub database: DatabaseSettings,
pub application_port: u16,
}
#[derive(serde::Deserialize)]
pub struct DatabaseSettings {
pub username: String,
pub password: Secret<String>,
pub port: u16,
pub host: String,
pub database_name: String,
}
// [...]
Let's introduce another struct, `ApplicationSettings`, to group together all configuration values related to our application address:
#[derive(serde::Deserialize)]
pub struct Settings {
pub database: DatabaseSettings,
pub application: ApplicationSettings,
}
#[derive(serde::Deserialize)]
pub struct ApplicationSettings {
pub port: u16,
pub host: String,
}
// [...]
We need to update our `configuration.yaml` file to match the new structure:
#! configuration.yaml
application:
port: 8000
host: 127.0.0.1
database:
# [...]
as well as our `main.rs`, where we will leverage the new configurable `host` field:
//! src/main.rs
// [...]
#[tokio::main]
async fn main() -> std::io::Result<()> {
// [...]
let address = format!(
"{}:{}",
configuration.application.host, configuration.application.port
);
// [...]
}
The host is now read from configuration, but how do we use a different value for different environments?
We need to make our configuration hierarchical.
Let's have a look at `get_configuration`, the function in charge of loading our `Settings` struct:
//! src/configuration.rs
// [...]
pub fn get_configuration() -> Result<Settings, config::ConfigError> {
// Initialise our configuration reader
let settings = config::Config::builder()
// Add configuration values from a file named `configuration.yaml`.
.add_source(config::File::new("configuration.yaml", config::FileFormat::Yaml))
.build()?;
// Try to convert the configuration values it read into
// our Settings type
settings.try_deserialize::<Settings>()
}
We are reading from a file named `configuration.yaml` to populate `Settings`'s fields. There is no further room for tuning the values specified there.
Let's take a more refined approach. We will have:
- A base configuration file, for values that are shared across our local and production environment (e.g. database name);
- A collection of environment-specific configuration files, specifying values for fields that require customisation on a per-environment basis (e.g. host);
- An environment variable, `APP_ENVIRONMENT`, to determine the running environment (e.g. `production` or `local`).
All configuration files will live in the same top-level directory, `configuration`.
The good news is that `config`, the crate we are using, supports all of the above out of the box!
Let's put it together:
//! src/configuration.rs
// [...]
pub fn get_configuration() -> Result<Settings, config::ConfigError> {
let base_path = std::env::current_dir().expect("Failed to determine the current directory");
let configuration_directory = base_path.join("configuration");
// Detect the running environment.
// Default to `local` if unspecified.
let environment: Environment = std::env::var("APP_ENVIRONMENT")
.unwrap_or_else(|_| "local".into())
.try_into()
.expect("Failed to parse APP_ENVIRONMENT.");
let environment_filename = format!("{}.yaml", environment.as_str());
let settings = config::Config::builder()
.add_source(config::File::from(configuration_directory.join("base.yaml")))
.add_source(config::File::from(configuration_directory.join(&environment_filename)))
.build()?;
settings.try_deserialize::<Settings>()
}
/// The possible runtime environment for our application.
pub enum Environment {
Local,
Production,
}
impl Environment {
pub fn as_str(&self) -> &'static str {
match self {
Environment::Local => "local",
Environment::Production => "production",
}
}
}
impl TryFrom<String> for Environment {
type Error = String;
fn try_from(s: String) -> Result<Self, Self::Error> {
match s.to_lowercase().as_str() {
"local" => Ok(Self::Local),
"production" => Ok(Self::Production),
other => Err(format!(
"{} is not a supported environment. Use either `local` or `production`.",
other
)),
}
}
}
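If you want to double-check the parsing logic, a small (hypothetical) unit test could live alongside it:
//! src/configuration.rs
// [...]
#[cfg(test)]
mod environment_tests {
    use super::Environment;

    #[test]
    fn parsing_is_case_insensitive_and_rejects_unknown_values() {
        // `to_lowercase` inside `try_from` makes the match case-insensitive
        assert!(matches!(
            Environment::try_from("PRODUCTION".to_string()),
            Ok(Environment::Production)
        ));
        // Anything other than `local`/`production` is rejected
        assert!(Environment::try_from("staging".to_string()).is_err());
    }
}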
Let's refactor our configuration file to match the new structure.
We have to get rid of `configuration.yaml` and create a new `configuration` directory with `base.yaml`, `local.yaml` and `production.yaml` inside.
#! configuration/base.yaml
application:
port: 8000
database:
host: "localhost"
port: 5432
username: "postgres"
password: "password"
database_name: "newsletter"
#! configuration/local.yaml
application:
host: 127.0.0.1
#! configuration/production.yaml
application:
host: 0.0.0.0
We can now instruct the binary in our Docker image to use the production configuration by setting the `APP_ENVIRONMENT` environment variable with an `ENV` instruction:
FROM rust:1.59.0
WORKDIR /app
RUN apt update && apt install lld clang -y
COPY . .
ENV SQLX_OFFLINE true
RUN cargo build --release
ENV APP_ENVIRONMENT production
ENTRYPOINT ["./target/release/zero2prod"]
Let's rebuild our image and launch it again:
docker build --tag zero2prod --file Dockerfile .
docker run -p 8000:8000 zero2prod
One of the first log lines should be something like
{
"name":"zero2prod",
"msg":"Starting \"actix-web-service-0.0.0.0:8000\" service on 0.0.0.0:8000",
...
}
If it is, good news - our configuration works as expected!
Let's try again to hit the health check endpoint:
curl -v http://127.0.0.1:8000/health_check
> GET /health_check HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.61.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-length: 0
< date: Sun, 01 Nov 2020 17:32:19 GMT
It works, awesome!
3.7. Database Connectivity
What about `POST /subscriptions`?
curl --request POST --data 'name=le%20guin&email=ursula_le_guin%40gmail.com' 127.0.0.1:8000/subscriptions --verbose
A long wait, then a 500!
Let's look at the application logs (useful, aren't they?)
{
"msg": "[SAVING NEW SUBSCRIBER DETAILS IN THE DATABASE - EVENT] \
Failed to execute query: PoolTimedOut",
...
}
This should not come as a surprise - we swapped `connect` with `connect_lazy` to avoid dealing with the database straight away.
It took us half a minute to see a 500 coming back - that is because 30 seconds is the default timeout to acquire a connection from the pool in `sqlx`.
Let's fail a little faster by using a shorter timeout:
//! src/main.rs
use sqlx::postgres::PgPoolOptions;
// [...]
#[tokio::main]
async fn main() -> std::io::Result<()> {
// [...]
let connection_pool = PgPoolOptions::new()
.acquire_timeout(std::time::Duration::from_secs(2))
.connect_lazy(
&configuration.database.connection_string().expose_secret()
)
.expect("Failed to create Postgres connection pool.");
// [...]
}
There are various ways to get a working local setup using Docker containers:
- run the application container with `--network=host`, as we are currently doing for the Postgres container;
- use `docker-compose`;
- create a user-defined network, as sketched below.
A working local setup does not get us any closer to having a working database connection when deployed on Digital Ocean. We will therefore let it be for now.
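For the curious, a minimal sketch of the user-defined network option - container names and credentials here are illustrative:
# Create a network and attach both containers to it
docker network create zero2prod-net
docker run --network zero2prod-net --name newsletter-db \
    -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=password \
    -e POSTGRES_DB=newsletter -d postgres:12
# Containers on the same user-defined network can reach each other by
# container name - the application would have to use `newsletter-db`
# (instead of `localhost`) as its database host.
docker run --network zero2prod-net -p 8000:8000 zero2prod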
3.8. Optimising Our Docker Image
As far as our Docker image is concerned, it seems to work as expected - time to deploy it!
Well, not yet.
There are two optimisations we can make to our Dockerfile to make our life easier going forward:
- smaller image size for faster usage;
- Docker layer caching for faster builds.
3.8.1. Docker Image Size
We will not be running `docker build` on the machines hosting our application. They will be using `docker pull` to download our Docker image without going through the process of building it from scratch.
This is extremely convenient: it can take quite a long time to build our image (and it certainly does in Rust!) and we only need to pay that cost once.
This is extremely convenient: it can take quite a long time to build our image (and it certainly does in Rust!) and we only need to pay that cost once.
To actually use the image we only need to pay for its download cost which is directly related to its size.
How big is our image?
We can find out using
docker images zero2prod
REPOSITORY TAG SIZE
zero2prod latest 2.31GB
Is that big or small?
Well, our final image cannot be any smaller than the image we used as base - `rust:1.59.0`. How big is that?
docker images rust:1.59.0
REPOSITORY TAG SIZE
rust 1.59.0 1.29GB
Ok, our final image is almost twice as heavy as our base image.
We can do much better than that!
Our first line of attack is reducing the size of the Docker build context by excluding files that are not needed to build our image.
Docker looks for a specific file in our project to determine what should be ignored - `.dockerignore`. Let's create one in the root directory with the following content:
.env
target/
tests/
Dockerfile
scripts/
migrations/
All files that match the patterns specified in `.dockerignore` are not sent by Docker as part of the build context to the image, which means they will not be in scope for `COPY` instructions.
This will massively speed up our builds (and reduce the size of the final image) if we get to ignore heavy directories (e.g. the `target` folder for Rust projects).
The next optimisation, instead, leverages one of Rust's unique strengths.
Rust's binaries are statically linked [3] - we do not need to keep the source code or intermediate compilation artifacts around to run the binary, it is entirely self-contained.
This plays nicely with multi-stage builds, a useful Docker feature. We can split our build in two stages:
- a `builder` stage, to generate a compiled binary;
- a `runtime` stage, to run the binary.
The modified Dockerfile looks like this:
# Builder stage
FROM rust:1.59.0 AS builder
WORKDIR /app
RUN apt update && apt install lld clang -y
COPY . .
ENV SQLX_OFFLINE true
RUN cargo build --release
# Runtime stage
FROM rust:1.59.0 AS runtime
WORKDIR /app
# Copy the compiled binary from the builder environment
# to our runtime environment
COPY --from=builder /app/target/release/zero2prod zero2prod
# We need the configuration file at runtime!
COPY configuration configuration
ENV APP_ENVIRONMENT production
ENTRYPOINT ["./zero2prod"]
`runtime` is our final image.
The `builder` stage does not contribute to its size - it is an intermediate step, discarded at the end of the build. The only piece of the `builder` stage that is found in the final artifact is what we explicitly copy over - the compiled binary!
What is the image size using the above Dockerfile?
docker images zero2prod
REPOSITORY TAG SIZE
zero2prod latest 1.3GB
Just 20 MBs bigger than the size of our base image, much better!
We can go one step further: instead of using `rust:1.59.0` for our `runtime` stage we can switch to `rust:1.59.0-slim`, a smaller image based on the same underlying OS.
# [...]
# Runtime stage
FROM rust:1.59.0-slim as runtime
# [...]
docker images zero2prod
REPOSITORY TAG SIZE
zero2prod latest 681MB
That is 4x smaller than what we had at the beginning - not bad at all!
We can go even smaller by shaving off the weight of the whole Rust toolchain and machinery (i.e. `rustc`, `cargo`, etc.) - none of it is needed to run our binary.
We can use the bare operating system as base image (`debian:bullseye-slim`) for our `runtime` stage:
# [...]
# Runtime stage
FROM debian:bullseye-slim AS runtime
WORKDIR /app
# Install OpenSSL - it is dynamically linked by some of our dependencies
# Install ca-certificates - it is needed to verify TLS certificates
# when establishing HTTPS connections
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends openssl ca-certificates \
# Clean up
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/zero2prod zero2prod
COPY configuration configuration
ENV APP_ENVIRONMENT production
ENTRYPOINT ["./zero2prod"]
docker images zero2prod
REPOSITORY TAG SIZE
zero2prod latest 88.1MB
Less than 100 MBs - ~25x smaller than our initial attempt [4].
We could go even smaller by using `rust:1.59.0-alpine`, but we would have to cross-compile to the `linux-musl` target - out of scope for now. Check out `rust-musl-builder` if you are interested in generating tiny Docker images.
Another option to reduce the size of our binary further is stripping symbols from it - you can find more information about it here.
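Since Rust 1.59, stripping can also be requested directly from Cargo - a minimal sketch of the relevant `Cargo.toml` addition:
#! Cargo.toml
# [...]
# Strip symbols from the release binary at build time
[profile.release]
strip = true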
3.8.2. Caching For Rust Docker Builds
Rust shines at runtime, consistently delivering great performance, but it comes at a cost: compilation times. They have been among the top answers in the Rust annual survey when it comes to the biggest challenges for the Rust project.
Optimised builds (`--release`), in particular, can be gruesome - up to 15/20 minutes on medium projects with several dependencies. Quite common on web development projects like ours, which pull in many foundational crates from the async ecosystem (`tokio`, `actix-web`, `sqlx`, etc.).
Unfortunately, `--release` is what we use in our `Dockerfile` to get top performance in our production environment. How can we mitigate the pain?
We can leverage another Docker feature: layer caching.
Each `RUN`, `COPY` and `ADD` instruction in a Dockerfile creates a layer: a diff between the previous state (the layer above) and the current state after having executed the specified command.
Layers are cached: if the starting point of an operation has not changed (e.g. the base image) and the command itself has not changed (e.g. the checksum of the files copied by `COPY`), Docker does not perform any computation and directly retrieves a copy of the result from the local cache.
Docker layer caching is fast and can be leveraged to massively speed up Docker builds.
The trick is optimising the order of operations in your Dockerfile: anything that refers to files that are changing often (e.g. source code) should appear as late as possible, therefore maximising the likelihood of the previous step being unchanged and allowing Docker to retrieve the result straight from the cache.
The expensive step is usually compilation.
Most programming languages follow the same playbook: you `COPY` a lock-file of some kind first, build your dependencies, `COPY` over the rest of your source code and then build your project.
This guarantees that most of the work is cached as long as your dependency tree does not change between one build and the next.
In a Python project, for example, you might have something along these lines:
FROM python:3
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY src/ /app
WORKDIR /app
ENTRYPOINT ["python", "app"]
`cargo`, unfortunately, does not provide a mechanism to build your project dependencies starting from its `Cargo.lock` file (e.g. `cargo build --only-deps`).
Once again, we can rely on a community project to expand `cargo`'s default capability: `cargo-chef` [5].
Let's modify our Dockerfile as suggested in `cargo-chef`'s `README`:
FROM lukemathwalker/cargo-chef:latest-rust-1.59.0 as chef
WORKDIR /app
RUN apt update && apt install lld clang -y
FROM chef as planner
COPY . .
# Compute a lock-like file for our project
RUN cargo chef prepare --recipe-path recipe.json
FROM chef as builder
COPY --from=planner /app/recipe.json recipe.json
# Build our project dependencies, not our application!
RUN cargo chef cook --release --recipe-path recipe.json
# Up to this point, if our dependency tree stays the same,
# all layers should be cached.
COPY . .
ENV SQLX_OFFLINE true
# Build our project
RUN cargo build --release --bin zero2prod
FROM debian:bullseye-slim AS runtime
WORKDIR /app
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends openssl ca-certificates \
# Clean up
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/zero2prod zero2prod
COPY configuration configuration
ENV APP_ENVIRONMENT production
ENTRYPOINT ["./zero2prod"]
We are using three stages: the first computes the recipe file, the second caches our dependencies and then builds our binary, the third is our runtime environment.
As long as our dependencies do not change, the `recipe.json` file will stay the same, therefore the outcome of `cargo chef cook --release --recipe-path recipe.json` will be cached, massively speeding up our builds.
We are taking advantage of how Docker layer caching interacts with multi-stage builds: the `COPY . .` statement in the `planner` stage will invalidate the cache for the `planner` container, but it will not invalidate the cache for the `builder` container as long as the checksum of the `recipe.json` returned by `cargo chef prepare` does not change.
You can think of each stage as its own Docker image with its own caching - they only interact with each other when using the `COPY --from` statement.
This will save us a massive amount of time in the next section.
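If you want to see the caching at work locally, a quick (unscientific) experiment:
# First run: cold cache, dependencies compiled from scratch
time docker build --tag zero2prod --file Dockerfile .
# Touch a source file, then rebuild: the dependency layers produced
# by `cargo chef cook` should be retrieved straight from the cache
time docker build --tag zero2prod --file Dockerfile .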
4. Deploy To DigitalOcean Apps Platform
We have built a (damn good) containerised version of our application. Let's deploy it now!
4.1. Setup
You have to sign up on Digital Ocean's website.
Once you have an account, install `doctl`, Digital Ocean's CLI - you can find instructions here.
Hosting on Digital Ocean's App Platform is not free - keeping our app and its associated database up and running costs roughly 20 USD/month.
I suggest you destroy the app at the end of each session - it should keep your spend way below 1 USD. I spent 0.20 USD while playing around with it to write this chapter!
4.2. App Specification
Digital Ocean's App Platform uses a declarative configuration file to let us specify what our application deployment should look like - they call it App Spec.
Looking at the reference documentation, as well as some of their examples, we can piece together a first draft of what our App Spec looks like.
Let's put this manifest, `spec.yaml`, at the root of our project directory.
#! spec.yaml
name: zero2prod
# Check https://www.digitalocean.com/docs/app-platform/#regional-availability
# for a list of all the available options.
# You can get region slugs from
# https://www.digitalocean.com/docs/platform/availability-matrix/
# They must be specified in lowercase.
# `fra` stands for Frankfurt (Germany - EU)
region: fra
services:
- name: zero2prod
# Relative to the repository root
dockerfile_path: Dockerfile
source_dir: .
github:
# Depending on when you created the repository,
# the default branch on GitHub might have been named `master`
branch: main
# Deploy a new version on every commit to `main`!
# Continuous Deployment, here we come!
deploy_on_push: true
# !!! Fill in with your details
# e.g. LukeMathWalker/zero-to-production
repo: <YOUR USERNAME>/<YOUR REPOSITORY NAME>
# Active probe used by DigitalOcean to ensure our application is healthy
health_check:
# The path to our health check endpoint!
# It turned out to be useful in the end!
http_path: /health_check
# The port the application will be listening on for incoming requests
# It should match what we specified in our configuration/production.yaml file!
http_port: 8000
# For production workloads we'd go for at least two!
# But let's try to keep the bill under control for now...
instance_count: 1
instance_size_slug: basic-xxs
# All incoming requests should be routed to our app
routes:
- path: /
Take your time to go through all the specified values and understand what they are used for.
We can use their CLI, `doctl`, to create the application for the first time:
doctl apps create --spec spec.yaml
Error: Unable to initialize DigitalOcean API client: access token is required.
(hint: run 'doctl auth init')
Well, we have to authenticate first.
Let's follow their suggestion:
doctl auth init
Please authenticate doctl for use with your DigitalOcean account.
You can generate a token in the control panel at
https://cloud.digitalocean.com/account/api/tokens
Once you have provided your token we can try again:
doctl apps create --spec spec.yaml
Error: POST
https://api.digitalocean.com/v2/apps: 400 GitHub user not
authenticated
OK, follow their instructions to link your GitHub account.
Third time's a charm, let's try again!
doctl apps create --spec spec.yaml
Notice: App created
ID Spec Name Default Ingress Active Deployment ID In Progress Deployment ID
e80... zero2prod
It worked!
You can check your app status with
doctl apps list
or by looking at DigitalOcean's dashboard.
Although the app has been successfully created it is not running yet!
Check the `Deployment` tab on their dashboard - it is probably building the Docker image.
Looking at a few recent issues on their bug tracker it might take a while - more than a few people have reported slow builds. Digital Ocean's support engineers suggested leveraging Docker layer caching to mitigate the issue - we have already covered all the bases there!
If you experience an out-of-memory error when building your Docker image on DigitalOcean, check out this GitHub issue.
Wait for these lines to show up in their dashboard build logs:
zero2prod | 00:00:20 => Uploaded the built image to the container registry
zero2prod | 00:00:20 => Build complete
Deployed successfully!
You should be able to see the health check logs coming in every ten seconds or so when Digital Ocean's platform pings our application to ensure it is running.
With `doctl apps list` you can retrieve the public-facing URI of your application, something along the lines of https://zero2prod-aaaaa.ondigitalocean.app.
Try firing off a health check request now - it should come back with a `200 OK`!
Notice that DigitalOcean set up HTTPS for us by provisioning a certificate and redirecting HTTPS traffic to the port we specified in the application specification. One less thing to worry about.
The `POST /subscriptions` endpoint is still failing, in the very same way it did locally: we do not have a live database backing our application in our production environment.
Let's provision one.
Add this segment to your `spec.yaml` file:
databases:
# PG = Postgres
- engine: PG
# Database name
name: newsletter
# Again, let's keep the bill lean
num_nodes: 1
size: db-s-dev-database
# Postgres version - using the latest here
version: "12"
Then update your app specification:
# You can retrieve your app id using `doctl apps list`
doctl apps update YOUR-APP-ID --spec=spec.yaml
It will take some time for DigitalOcean to provision a Postgres instance.
In the meantime we need to figure out how to point our application at the database in production.
4.3. How To Inject Secrets Using Environment Variables
The connection string will contain values that we do not want to commit to version control - e.g. the username and the password of our database root user.
Our best option is to use environment variables as a way to inject secrets at runtime into the application environment. DigitalOcean's apps, for example, can refer to the `DATABASE_URL` environment variable (or a few others for a more granular view) to get the database connection string at runtime.
We need to upgrade our `get_configuration` function (again) to fulfill our new requirements:
//! src/configuration.rs
// [...]
pub fn get_configuration() -> Result<Settings, config::ConfigError> {
let base_path = std::env::current_dir().expect("Failed to determine the current directory");
let configuration_directory = base_path.join("configuration");
// Detect the running environment.
// Default to `local` if unspecified.
let environment: Environment = std::env::var("APP_ENVIRONMENT")
.unwrap_or_else(|_| "local".into())
.try_into()
.expect("Failed to parse APP_ENVIRONMENT.");
let environment_filename = format!("{}.yaml", environment.as_str());
let settings = config::Config::builder()
.add_source(config::File::from(configuration_directory.join("base.yaml")))
.add_source(config::File::from(configuration_directory.join(&environment_filename)))
// Add in settings from environment variables (with a prefix of APP and '__' as separator)
// E.g. `APP_APPLICATION__PORT=5001` would set `Settings.application.port`
.add_source(config::Environment::with_prefix("APP").prefix_separator("_").separator("__"))
.build()?;
settings.try_deserialize::<Settings>()
}
This allows us to customize any value in our `Settings` struct using environment variables, overriding what is specified in our configuration files.
Why is that convenient?
It makes it possible to inject values that are too dynamic (i.e. not known a priori) or too sensitive to be stored in version control.
It also makes it fast to change the behaviour of our application: we do not have to go through a full re-build if we want to tune one of those values (e.g. the database port). For languages like Rust, where a fresh build can take ten minutes or more, this can make the difference between a short outage and a substantial service degradation with customer-visible impact.
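For example, assuming the prefix and separators configured above, we could override the application port for a single run without touching any configuration file:
# `APP` prefix + `__` separator map the variable onto `Settings.application.port`
APP_APPLICATION__PORT=5001 cargo run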
Before we move on, let's take care of an annoying detail: environment variables are strings for the `config` crate and it will fail to pick up integers if using the standard deserialization routine from `serde`.
Luckily enough, we can specify a custom deserialization function.
Let's add a new dependency, `serde-aux` (`serde` auxiliary):
#! Cargo.toml
# [...]
[dependencies]
serde-aux = "3"
# [...]
and let's modify both `ApplicationSettings` and `DatabaseSettings`:
//! src/configuration.rs
// [...]
use serde_aux::field_attributes::deserialize_number_from_string;
// [...]
#[derive(serde::Deserialize)]
pub struct ApplicationSettings {
#[serde(deserialize_with = "deserialize_number_from_string")]
pub port: u16,
// [...]
}
#[derive(serde::Deserialize)]
pub struct DatabaseSettings {
#[serde(deserialize_with = "deserialize_number_from_string")]
pub port: u16,
// [...]
}
// [...]
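To convince ourselves it works, a tiny (hypothetical) test - it assumes `serde_json` is available as a dev dependency:
//! src/configuration.rs
// [...]
#[cfg(test)]
mod settings_tests {
    use super::ApplicationSettings;

    #[test]
    fn port_can_be_deserialized_from_a_string() {
        // Environment variables always come in as strings:
        // `deserialize_number_from_string` turns "8000" into 8000u16
        let settings: ApplicationSettings =
            serde_json::from_str(r#"{"port": "8000", "host": "127.0.0.1"}"#).unwrap();
        assert_eq!(settings.port, 8000);
    }
}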
4.4. Connecting To Digital Ocean's Postgres Instance
Let's have a look at the connection string of our database using DigitalOcean's dashboard (Components -> Database):
postgresql://newsletter:<PASSWORD>@<HOST>:<PORT>/newsletter?sslmode=require
Our current `DatabaseSettings` does not handle SSL mode - it was not relevant for local development, but it is more than desirable to have transport-level encryption for our client/database communication in production.
Before trying to add new functionality, let's make room for it by refactoring `DatabaseSettings`.
The current version looks like this:
//! src/configuration.rs
// [...]
#[derive(serde::Deserialize)]
pub struct DatabaseSettings {
pub username: String,
pub password: Secret<String>,
#[serde(deserialize_with = "deserialize_number_from_string")]
pub port: u16,
pub host: String,
pub database_name: String,
}
impl DatabaseSettings {
pub fn connection_string(&self) -> Secret<String> {
// [...]
}
pub fn connection_string_without_db(&self) -> Secret<String> {
// [...]
}
}
We will change its two methods to return a `PgConnectOptions` instead of a connection string: it will make it easier to manage all these moving parts.
//! src/configuration.rs
use sqlx::postgres::PgConnectOptions;
// [...]
impl DatabaseSettings {
// Renamed from `connection_string_without_db`
pub fn without_db(&self) -> PgConnectOptions {
PgConnectOptions::new()
.host(&self.host)
.username(&self.username)
.password(&self.password.expose_secret())
.port(self.port)
}
// Renamed from `connection_string`
pub fn with_db(&self) -> PgConnectOptions {
self.without_db().database(&self.database_name)
}
}
We'll also have to update `src/main.rs` and `tests/health_check.rs`:
//! src/main.rs
// [...]
#[tokio::main]
async fn main() -> std::io::Result<()> {
// [...]
let connection_pool = PgPoolOptions::new()
.acquire_timeout(std::time::Duration::from_secs(2))
// `connect_lazy_with` instead of `connect_lazy`
.connect_lazy_with(configuration.database.with_db());
// [...]
}
//! tests/health_check.rs
// [...]
pub async fn configure_database(config: &DatabaseSettings) -> PgPool {
// Create database
let mut connection = PgConnection::connect_with(&config.without_db())
.await
.expect("Failed to connect to Postgres");
connection
.execute(format!(r#"CREATE DATABASE "{}";"#, config.database_name).as_str())
.await
.expect("Failed to create database.");
// Migrate database
let connection_pool = PgPool::connect_with(config.with_db())
.await
.expect("Failed to connect to Postgres.");
sqlx::migrate!("./migrations")
.run(&connection_pool)
.await
.expect("Failed to migrate the database");
connection_pool
}
Use `cargo test` to make sure everything is still working as expected.
Let's now add the `require_ssl` property we need to `DatabaseSettings`:
//! src/configuration.rs
use sqlx::postgres::PgSslMode;
// [...]
#[derive(serde::Deserialize)]
pub struct DatabaseSettings {
// [...]
// Determine if we demand the connection to be encrypted or not
pub require_ssl: bool,
}
impl DatabaseSettings {
pub fn without_db(&self) -> PgConnectOptions {
let ssl_mode = if self.require_ssl {
PgSslMode::Require
} else {
// Try an encrypted connection, fallback to unencrypted if it fails
PgSslMode::Prefer
};
PgConnectOptions::new()
.host(&self.host)
.username(&self.username)
.password(&self.password.expose_secret())
.port(self.port)
.ssl_mode(ssl_mode)
}
// [...]
}
We want `require_ssl` to be `false` when we run the application locally (and for our test suite), but `true` in our production environment.
Let's amend our configuration files accordingly:
#! configuration/local.yaml
application:
host: 127.0.0.1
database:
# New entry!
require_ssl: false
#! configuration/production.yaml
application:
host: 0.0.0.0
database:
# New entry!
require_ssl: true
We can take the opportunity - now that we are using `PgConnectOptions` - to tune `sqlx`'s instrumentation: lower its logs from `INFO` to `TRACE` level.
This will eliminate the noise we noticed in the previous chapter.
//! src/configuration.rs
use sqlx::ConnectOptions;
// [...]
impl DatabaseSettings {
// [...]
pub fn with_db(&self) -> PgConnectOptions {
let mut options = self.without_db().database(&self.database_name);
options.log_statements(tracing::log::LevelFilter::Trace);
options
}
}
4.5. Environment Variables In The App Spec
One last step: we need to amend our `spec.yaml` manifest to inject the environment variables we need.
#! spec.yaml
name: zero2prod
region: fra
services:
- name: zero2prod
# [...]
envs:
- key: APP_DATABASE__USERNAME
scope: RUN_TIME
value: ${newsletter.USERNAME}
- key: APP_DATABASE__PASSWORD
scope: RUN_TIME
value: ${newsletter.PASSWORD}
- key: APP_DATABASE__HOST
scope: RUN_TIME
value: ${newsletter.HOSTNAME}
- key: APP_DATABASE__PORT
scope: RUN_TIME
value: ${newsletter.PORT}
- key: APP_DATABASE__DATABASE_NAME
scope: RUN_TIME
value: ${newsletter.DATABASE}
databases:
- name: newsletter
# [...]
The scope is set to `RUN_TIME` to distinguish between environment variables needed during our Docker build process and those needed when the Docker image is launched.
We are populating the values of the environment variables by interpolating what is exposed by Digital Ocean's platform (e.g. `${newsletter.PORT}`) - refer to their documentation for more details.
4.6. One Last Push
Let's apply the new spec
# You can retrieve your app id using `doctl apps list`
doctl apps update YOUR-APP-ID --spec=spec.yaml
and push our change up to GitHub to trigger a new deployment.
We now need to migrate the database [6]:
DATABASE_URL=YOUR-DIGITAL-OCEAN-DB-CONNECTION-STRING sqlx migrate run
We are ready to go!
Let's fire off a `POST` request to `/subscriptions`:
curl --request POST \
--data 'name=le%20guin&email=ursula_le_guin%40gmail.com' \
https://zero2prod-adqrw.ondigitalocean.app/subscriptions \
--verbose
The server should respond with a `200 OK`.
Congrats, you have just deployed your first Rust application!
And Ursula Le Guin just subscribed to your email newsletter (allegedly)!
If you have come this far, I'd love to get a screenshot of your Digital Ocean's dashboard showing off that running application!
Email it over at [email protected] or share it on Twitter, tagging the Zero To Production In Rust account, @zero2prod.
5. Next On Zero To Production
It was quite a journey (five chapters!) but we finally got our application live, running on a remote server.
In the next chapter we will take a step back and look again at our core functionality: we will try to work out how to get our newsletter API to send emails!
As always, all the code we wrote in this chapter can be found on GitHub - toss a star to your witcher, o' valley of plenty!
See you next time!
This article is a sample from Zero To Production In Rust, a hands-on introduction to backend development in Rust.
You can get a copy of the book at zero2prod.com.
Footnotes
1. Kubernetes is six years old, Docker itself is just seven years old!
2. Unless you are using `--network=host`, `--ssh` or other similar options. You also have volumes as an alternative mechanism to share files at runtime.
3. `rustc` statically links all Rust code but dynamically links `libc` from the underlying system if you are using the Rust standard library. You can get a fully statically linked binary by targeting `linux-musl`, see here.
4. Credits to Ian Purton and flat_of_angles for pointing out that there was further room for improvement.
5. Full disclosure - I am the author of `cargo-chef`.
6. You will have to temporarily disable Trusted Sources to run the migrations from your local machine.
Book - Table Of Contents
The Table of Contents is provisional and might change over time. The draft below is the most accurate picture at this point in time.
- Getting Started
- Installing The Rust Toolchain
- Project Setup
- IDEs
- Continuous Integration
- Our Driving Example
- What Should Our Newsletter Do?
- Working In Iterations
- Sign Up A New Subscriber
- Telemetry
- Unknown Unknowns
- Observability
- Logging
- Instrumenting POST /subscriptions
- Structured Logging
- Go Live
- We Must Talk About Deployments
- Choosing Our Tools
- A Dockerfile For Our Application
- Deploy To DigitalOcean Apps Platform
- Rejecting Invalid Subscribers #1
- Requirements
- First Implementation
- Validation Is A Leaky Cauldron
- Type-Driven Development
- Ownership Meets Invariants
- Panics
- Error As Values - `Result`
- Reject Invalid Subscribers #2
- Error Handling
- What Is The Purpose Of Errors?
- Error Reporting For Operators
- Errors For Control Flow
- Avoid "Ball Of Mud" Error Enums
- Who Should Log Errors?
- Naive Newsletter Delivery
- User Stories Are Not Set In Stone
- Do Not Spam Unconfirmed Subscribers
- All Confirmed Subscribers Receive New Issues
- Implementation Strategy
- Body Schema
- Fetch Confirmed Subscribers List
- Send Newsletter Emails
- Validation Of Stored Data
- Limitations Of The Naive Approach
- Securing Our API
- Fault-tolerant Newsletter Delivery