Skeleton And Principles For A Maintainable Test Suite
This article is a sample from Zero To Production In Rust, a hands-on introduction to backend development in Rust.
You can get a copy of the book at zero2prod.com.
TL;DR
We have used a test-driven approach to write all new pieces of functionality throughout the book.
While this strategy has served us well, we have not invested a lot of time into refactoring our test code. As a result, our tests folder is a bit of a mess at this point.
Before moving forward, we will restructure our integration test suite to support us as our application grows in complexity and the number of tests increases.
Chapter 7 - Part 1
- Why Do We Write Tests?
- Why Don't We Write Tests?
- Test Code Is Still Code
- Our Test Suite
- Test Discovery
- One Test File, One Crate
- Sharing Test Helpers
- Sharing Startup Logic
- Build An API Client
- Summary
Why Do We Write Tests?
Is writing tests a good use of developers' time?
A good test suite is, first and foremost, a risk-mitigation measure.
Automated tests reduce the risk associated with changes to an existing codebase - most regressions and bugs are caught in the continuous integration pipeline and never reach users. The team is therefore empowered to iterate faster and release more often.
Tests act as documentation as well.
The test suite is often the best starting point when deep-diving in an unknown code base - it shows you how the code is supposed to behave and what scenarios are considered relevant enough to have dedicated tests for.
"Write a test suite!" should definitely be on your to-do list if you want to make your project more welcoming to new contributors.
There are other positive side-effects often associated with good tests - modularity, decoupling. These are harder to quantify, as we have yet to agree as an industry on what "good code" looks like.
Why Don't We Write Tests?
Although there are compelling reasons to invest time and effort in writing a good test suite, reality is somewhat messier.
First, the development community did not always believe in the value of testing.
We can find examples of test-driven development throughout the history of the discipline, but it is only with the "Extreme Programming" (XP) book that the practice entered the mainstream debate - in 1999!
Paradigm shifts do not happen overnight - it took years for the test-driven approach to gain traction as a "best practice" within the industry.
If test-driven development has won the minds and hearts of developers, the battle with management is often still ongoing.
Good tests build technical leverage, but writing tests takes time. When a deadline is pressing, testing is often the first to be sacrificed.
As a consequence, most of the material you find around is either an introduction to testing or a guide on how to pitch its value to stakeholders.
There is very little about testing at scale - what happens if you stick to the book and keep writing tests as the codebase grows to tens of thousands of lines, with hundreds of test cases?
Test Code Is Still Code
All test suites start in the same way: an empty file, a world of possibilities.
You go in, you add the first test. Easy, done.
Then the second. Boom.
The third. You just had to copy a few lines from the first, all good.
The fourth...
After a while, test coverage starts to go down: new code is less thoroughly tested than the code you wrote at the very beginning of the project. Have you started to doubt the value of tests?
Absolutely not, tests are great!
Yet, you are writing fewer tests as the project moves forward.
It's because of friction - it got progressively more cumbersome to write new tests as the codebase evolved.
Test code is still code.
It has to be modular, well-structured, sufficiently documented. It requires maintenance.
If we do not actively invest in the health of our test suite, it will rot over time.
Coverage goes down and soon enough we will find critical paths in our application code that are never exercised by automated tests.
You need to regularly step back to take a look at your test suite as a whole.
Time to look at ours, isn't it?
Our Test Suite
All our integration tests live within a single file, tests/health_check.rs:
//! tests/health_check.rs
// [...]
// Ensure that the `tracing` stack is only initialised once using `once_cell`
static TRACING: Lazy<()> = Lazy::new(|| {
// [...]
});
pub struct TestApp {
pub address: String,
pub db_pool: PgPool,
}
async fn spawn_app() -> TestApp {
// [...]
}
pub async fn configure_database(config: &DatabaseSettings) -> PgPool {
// [...]
}
#[tokio::test]
async fn health_check_works() {
// [...]
}
#[tokio::test]
async fn subscribe_returns_a_200_for_valid_form_data() {
// [...]
}
#[tokio::test]
async fn subscribe_returns_a_400_when_data_is_missing() {
// [...]
}
#[tokio::test]
async fn subscribe_returns_a_400_when_fields_are_present_but_invalid() {
// [...]
}
Test Discovery
There is only one test dealing with our health check endpoint - health_check_works.
The other three tests are probing our POST /subscriptions endpoint, while the rest of the code deals with shared setup steps (spawn_app, TestApp, configure_database, TRACING).
Why have we shoved everything in tests/health_check.rs?
Because it was convenient!
The setup functions were already there - it was easier to add another test case within the same file than figuring out how to share that code properly across multiple test modules.
Our main goal in this refactoring is discoverability:
- given an application endpoint, it should be easy to find the corresponding integration tests within the tests folder;
- when writing a test, it should be easy to find the relevant test helper functions.
We will focus on folder structure, but that is definitely not the only tool available when it comes to test discovery.
Test coverage tools can often tell you which tests triggered the execution of a certain line of application code.
You can rely on techniques such as coverage marks to create an obvious link between test and application code.
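To make the idea concrete, here is a minimal sketch of a coverage mark using the cov_mark crate - it is not one of our project's dependencies and the function below is made up for illustration purposes. The application code declares a mark on the branch we care about; the test fails if that branch is never exercised.
// A sketch of a coverage mark, assuming a `cov_mark` dependency in Cargo.toml.
pub fn parse_subscriber_name(name: &str) -> Result<String, String> {
    if name.trim().is_empty() {
        // Record that the "empty name" branch was executed.
        cov_mark::hit!(empty_name_rejected);
        return Err("Name cannot be empty.".into());
    }
    Ok(name.to_string())
}

#[cfg(test)]
mod tests {
    use super::parse_subscriber_name;

    #[test]
    fn empty_names_are_rejected() {
        // Fails the test if `empty_name_rejected` is never hit before the
        // guard returned by `check!` is dropped at the end of the test.
        cov_mark::check!(empty_name_rejected);
        assert!(parse_subscriber_name("   ").is_err());
    }
}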
As always, a multi-pronged approach is likely to give you the best results as the complexity of your test suite increases.
One Test File, One Crate
Before we start moving things around, let's nail down a few facts about integration testing in Rust.
The tests folder is somewhat special - cargo knows to look into it searching for integration tests.
Each file within the tests folder gets compiled as its own crate.
We can check this out by running cargo build --tests and then looking under target/debug/deps:
# Build test code, without running tests
cargo build --tests
# Find all files with a name starting with `health_check`
ls target/debug/deps | grep health_check
health_check-fc23645bf877da35
health_check-fc23645bf877da35.d
The trailing hashes will likely be different on your machine, but there should be two entries starting with health_check-*.
What happens if you try to run it?
./target/debug/deps/health_check-fc23645bf877da35
running 4 tests
test health_check_works ... ok
test subscribe_returns_a_400_when_fields_are_present_but_invalid ... ok
test subscribe_returns_a_400_when_data_is_missing ... ok
test subscribe_returns_a_200_for_valid_form_data ... ok
test result: ok. 4 passed; finished in 0.44s
That's right, it runs our integration tests!
If we had five *.rs files under tests, we'd find five executables in target/debug/deps.
Sharing Test Helpers
If each integration test file is its own executable, how do we share test helper functions?
The first option is to define a stand-alone module - e.g. tests/helpers/mod.rs [1].
You can add common functions in mod.rs (or define other sub-modules in there) and then refer to helpers in your test file (e.g. tests/health_check.rs) with:
//! tests/health_check.rs
// [...]
mod helpers;
// [...]
helpers is bundled in the health_check test executable as a sub-module and we get access to the functions it exposes in our test cases.
This approach works fairly well to start out, but it leads to annoying function is never used warnings down the line.
The issue is that helpers is bundled as a sub-module; it is not invoked as a third-party crate: cargo compiles each test executable in isolation and warns us if, for a specific test file, one or more public functions in helpers have never been invoked. This is bound to happen as your test suite grows - not all test files will use all your helper methods.
The second option takes full advantage of the fact that each file under tests is its own executable - we can create sub-modules scoped to a single test executable!
Let's create an api folder under tests, with a single main.rs file inside:
tests/
  api/
    main.rs
  health_check.rs
First, we gain clarity: we are structuring api in the very same way we would structure a binary crate. Less magic - it builds on the same knowledge of the module system you built while working on application code.
If you run cargo test you should be able to spot
Running target/debug/deps/api-0a1bfb817843fdcf
running 0 tests
test result: ok. 0 passed; finished in 0.00s
in the output - cargo compiled api as a test executable, looking for test cases.
There is no need to define a main function in main.rs - the Rust test framework adds one for us behind the scenes [2].
We can now add sub-modules in main.rs:
//! tests/api/main.rs
mod helpers;
mod health_check;
mod subscriptions;
Add three empty files - tests/api/helpers.rs, tests/api/health_check.rs and tests/api/subscriptions.rs.
Time to delete tests/health_check.rs and re-distribute its content:
//! tests/api/helpers.rs
use once_cell::sync::Lazy;
use sqlx::{Connection, Executor, PgConnection, PgPool};
use std::net::TcpListener;
use uuid::Uuid;
use zero2prod::configuration::{get_configuration, DatabaseSettings};
use zero2prod::email_client::EmailClient;
use zero2prod::startup::run;
use zero2prod::telemetry::{get_subscriber, init_subscriber};
// Ensure that the `tracing` stack is only initialised once using `once_cell`
static TRACING: Lazy<()> = Lazy::new(|| {
// [...]
});
pub struct TestApp {
// [...]
}
// Public!
pub async fn spawn_app() -> TestApp {
// [...]
}
// Not public anymore!
async fn configure_database(config: &DatabaseSettings) -> PgPool {
// [...]
}
//! tests/api/health_check.rs
use crate::helpers::spawn_app;
#[tokio::test]
async fn health_check_works() {
// [...]
}
//! tests/api/subscriptions.rs
use crate::helpers::spawn_app;
#[tokio::test]
async fn subscribe_returns_a_200_for_valid_form_data() {
// [...]
}
#[tokio::test]
async fn subscribe_returns_a_400_when_data_is_missing() {
// [...]
}
#[tokio::test]
async fn subscribe_returns_a_400_when_fields_are_present_but_invalid() {
// [...]
}
cargo test should succeed, with no warnings.
Congrats, you have broken down your test suite into smaller and more manageable modules!
There are a few positive side-effects to the new structure:
- it is recursive. If tests/api/subscriptions.rs grows too unwieldy, we can turn it into a module, with tests/api/subscriptions/helpers.rs holding subscription-specific test helpers and one or more test files focused on a specific flow or concern (see the sketch after this list);
- the implementation details of our helper functions are encapsulated. It turns out that our tests only need to know about spawn_app and TestApp - no need to expose configure_database or TRACING, we can keep that complexity hidden away in the helpers module;
- we have a single test binary. If you have a large test suite with a flat file structure, you'll soon be building tens of executables every time you run cargo test. While each executable is compiled in parallel, the linking phase is instead entirely sequential! Bundling all your test cases in a single executable reduces the time spent compiling your test suite in CI [3].
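To make the first point concrete, here is a sketch of what a hypothetical layout could look like if the subscriptions tests outgrew a single file - the names of the leaf files are made up:
tests/
  api/
    main.rs
    helpers.rs
    health_check.rs
    subscriptions/
      mod.rs
      helpers.rs
      happy_path.rs
      validation.rs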
If you are running Linux, you might see errors like
thread 'actix-rt:worker' panicked at
'Can not create Runtime: Os { code: 24, kind: Other, message: "Too many open files" }',
when you run cargo test after the refactoring.
This is due to a limit enforced by the operating system on the maximum number of open file descriptors (including sockets) for each process - given that we are now running all tests as part of a single binary, we might be exceeding it. The limit is usually set to 1024, but you can raise it with ulimit -n X (e.g. ulimit -n 10000) to resolve the issue.
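For example, in a bash shell (the change only applies to the current session):
# Check the current limit on open file descriptors
ulimit -n
# Raise it before running the test suite
ulimit -n 10000
cargo test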
Sharing Startup Logic
Now that we have reworked the layout of our test suite, it's time to zoom in on the test logic itself.
We will start with spawn_app:
//! tests/api/helpers.rs
// [...]
pub struct TestApp {
pub address: String,
pub db_pool: PgPool,
}
pub async fn spawn_app() -> TestApp {
Lazy::force(&TRACING);
let listener = TcpListener::bind("127.0.0.1:0").expect("Failed to bind random port");
let port = listener.local_addr().unwrap().port();
let address = format!("http://127.0.0.1:{}", port);
let mut configuration = get_configuration().expect("Failed to read configuration.");
configuration.database.database_name = Uuid::new_v4().to_string();
let connection_pool = configure_database(&configuration.database).await;
let sender_email = configuration
.email_client
.sender()
.expect("Invalid sender email address.");
let email_client = EmailClient::new(
configuration.email_client.base_url,
sender_email,
configuration.email_client.authorization_token,
);
let server = run(listener, connection_pool.clone(), email_client)
.expect("Failed to bind address");
let _ = tokio::spawn(server);
TestApp {
address,
db_pool: connection_pool,
}
}
// [...]
Most of the code we have here is extremely similar to what we find in our main entrypoint:
//! src/main.rs
use sqlx::postgres::PgPoolOptions;
use std::net::TcpListener;
use zero2prod::configuration::get_configuration;
use zero2prod::email_client::EmailClient;
use zero2prod::startup::run;
use zero2prod::telemetry::{get_subscriber, init_subscriber};
#[tokio::main]
async fn main() -> std::io::Result<()> {
let subscriber = get_subscriber("zero2prod".into(), "info".into(), std::io::stdout);
init_subscriber(subscriber);
let configuration = get_configuration().expect("Failed to read configuration.");
let connection_pool = PgPoolOptions::new()
.acquire_timeout(std::time::Duration::from_secs(2))
.connect_lazy_with(configuration.database.with_db());
let sender_email = configuration
.email_client
.sender()
.expect("Invalid sender email address.");
let email_client = EmailClient::new(
configuration.email_client.base_url,
sender_email,
configuration.email_client.authorization_token,
);
let address = format!(
"{}:{}",
configuration.application.host, configuration.application.port
);
let listener = TcpListener::bind(address)?;
run(listener, connection_pool, email_client)?.await?;
Ok(())
}
Every time we add a dependency or modify the server constructor, we have at least two places to modify - we have recently gone through the motions with EmailClient. It's mildly annoying.
More importantly though, the startup logic in our application code is never tested.
As the codebase evolves, the two code paths might start to diverge subtly, leading to different behaviour in our tests compared to our production environment.
We will first extract the logic out of main and then figure out what hooks we need to leverage the same code paths in our test code.
Extracting Our Startup Code
From a structural perspective, our startup logic is a function taking Settings as input and returning an instance of our application as output.
It follows that our main function should look like this:
//! src/main.rs
use zero2prod::configuration::get_configuration;
use zero2prod::startup::build;
use zero2prod::telemetry::{get_subscriber, init_subscriber};
#[tokio::main]
async fn main() -> std::io::Result<()> {
let subscriber = get_subscriber("zero2prod".into(), "info".into(), std::io::stdout);
init_subscriber(subscriber);
let configuration = get_configuration().expect("Failed to read configuration.");
let server = build(configuration).await?;
server.await?;
Ok(())
}
We first perform some binary-specific logic (i.e. telemetry initialisation), then we build a set of configuration values from the supported sources (files + environment variables) and use it to spin up an application. Linear.
Let's define that build function then:
//! src/startup.rs
// [...]
// New imports!
use crate::configuration::Settings;
use sqlx::postgres::PgPoolOptions;
pub async fn build(configuration: Settings) -> Result<Server, std::io::Error> {
let connection_pool = PgPoolOptions::new()
.acquire_timeout(std::time::Duration::from_secs(2))
.connect_lazy_with(configuration.database.with_db());
let sender_email = configuration
.email_client
.sender()
.expect("Invalid sender email address.");
let email_client = EmailClient::new(
configuration.email_client.base_url,
sender_email,
configuration.email_client.authorization_token,
);
let address = format!(
"{}:{}",
configuration.application.host, configuration.application.port
);
let listener = TcpListener::bind(address)?;
run(listener, connection_pool, email_client)
}
pub fn run(
listener: TcpListener,
db_pool: PgPool,
email_client: EmailClient,
) -> Result<Server, std::io::Error> {
// [...]
}
Nothing too surprising - we have just moved around the code that was previously living in main.
Let's make it test-friendly now!
Testing Hooks In Our Startup Logic
Let's look at our spawn_app function again:
//! tests/api/helpers.rs
// [...]
use zero2prod::startup::build;
// [...]
pub async fn spawn_app() -> TestApp {
// The first time `initialize` is invoked the code in `TRACING` is executed.
// All other invocations will instead skip execution.
Lazy::force(&TRACING);
let listener = TcpListener::bind("127.0.0.1:0").expect("Failed to bind random port");
// We retrieve the port assigned to us by the OS
let port = listener.local_addr().unwrap().port();
let address = format!("http://127.0.0.1:{}", port);
let mut configuration = get_configuration().expect("Failed to read configuration.");
configuration.database.database_name = Uuid::new_v4().to_string();
let connection_pool = configure_database(&configuration.database).await;
let sender_email = configuration
.email_client
.sender()
.expect("Invalid sender email address.");
let email_client = EmailClient::new(
configuration.email_client.base_url,
sender_email,
configuration.email_client.authorization_token,
);
let server = run(listener, connection_pool.clone(), email_client)
.expect("Failed to bind address");
let _ = tokio::spawn(server);
TestApp {
address,
db_pool: connection_pool,
}
}
// [...]
At a high level, we have the following phases:
- Execute test-specific setup (i.e. initialise a tracing subscriber);
- Randomise the configuration to ensure tests do not interfere with each other (i.e. a different logical database for each test case);
- Initialise external resources (e.g. create and migrate the database!);
- Build the application;
- Launch the application as a background task and return a set of resources to interact with it.
Can we just throw build in there and call it a day?
Not really, but let's try to see where it falls short:
//! tests/api/helpers.rs
// [...]
// New import!
use zero2prod::startup::build;
pub async fn spawn_app() -> TestApp {
Lazy::force(&TRACING);
// Randomise configuration to ensure test isolation
let configuration = {
let mut c = get_configuration().expect("Failed to read configuration.");
// Use a different database for each test case
c.database.database_name = Uuid::new_v4().to_string();
// Use a random OS port
c.application.port = 0;
c
};
// Create and migrate the database
configure_database(&configuration.database).await;
// Launch the application as a background task
let server = build(configuration).await.expect("Failed to build application.");
let _ = tokio::spawn(server);
TestApp {
// How do we get these?
address: todo!(),
db_pool: todo!()
}
}
// [...]
It almost works - the approach falls short at the very end: we have no way to retrieve the random address assigned by the OS to the application, and we don't really know how to build a connection pool to the database, needed to perform assertions on side-effects impacting the persisted state.
Let's deal with the connection pool first: we can extract the initialisation logic from build into a stand-alone function and invoke it twice.
//! src/startup.rs
// [...]
use crate::configuration::DatabaseSettings;
pub async fn build(configuration: Settings) -> Result<Server, std::io::Error> {
let connection_pool = get_connection_pool(&configuration.database);
// [...]
}
pub fn get_connection_pool(
configuration: &DatabaseSettings
) -> PgPool {
PgPoolOptions::new()
.acquire_timeout(std::time::Duration::from_secs(2))
.connect_lazy_with(configuration.with_db())
}
//! tests/api/helpers.rs
// [...]
use zero2prod::startup::{build, get_connection_pool};
// [...]
pub async fn spawn_app() -> TestApp {
// [...]
// Notice the .clone!
let server = build(configuration.clone())
.await
.expect("Failed to build application.");
// [...]
TestApp {
address: todo!(),
db_pool: get_connection_pool(&configuration.database),
}
}
// [...]
You'll have to add a #[derive(Clone)] to all the structs in src/configuration.rs to make the compiler happy, but we are done with the database connection pool.
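As a sketch, assuming the configuration structs we defined in earlier chapters, the change amounts to adding Clone to the existing derives:
//! src/configuration.rs
// [...]
#[derive(Clone, serde::Deserialize)]
pub struct Settings {
    pub database: DatabaseSettings,
    pub application: ApplicationSettings,
    pub email_client: EmailClientSettings,
}

#[derive(Clone, serde::Deserialize)]
pub struct DatabaseSettings {
    // [...]
}

// `ApplicationSettings` and `EmailClientSettings` get the same treatment.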
How do we get the application address instead?
actix_web::dev::Server, the type returned by build, does not allow us to retrieve the application port.
We need to do a bit more legwork in our application code - we will wrap actix_web::dev::Server in a new type that holds on to the information we want.
//! src/startup.rs
// [...]
// A new type to hold the newly built server and its port
pub struct Application {
port: u16,
server: Server,
}
impl Application {
// We have converted the `build` function into a constructor for
// `Application`.
pub async fn build(configuration: Settings) -> Result<Self, std::io::Error> {
let connection_pool = get_connection_pool(&configuration.database);
let sender_email = configuration
.email_client
.sender()
.expect("Invalid sender email address.");
let email_client = EmailClient::new(
configuration.email_client.base_url,
sender_email,
configuration.email_client.authorization_token,
);
let address = format!(
"{}:{}",
configuration.application.host, configuration.application.port
);
let listener = TcpListener::bind(&address)?;
let port = listener.local_addr().unwrap().port();
let server = run(listener, connection_pool, email_client)?;
// We "save" the bound port in one of `Application`'s fields
Ok(Self { port, server })
}
pub fn port(&self) -> u16 {
self.port
}
// A more expressive name that makes it clear that
// this function only returns when the application is stopped.
pub async fn run_until_stopped(self) -> Result<(), std::io::Error> {
self.server.await
}
}
// [...]
//! tests/api/helpers.rs
// [...]
// New import!
use zero2prod::startup::Application;
pub async fn spawn_app() -> TestApp {
// [...]
let application = Application::build(configuration.clone())
.await
.expect("Failed to build application.");
// Get the port before spawning the application
let address = format!("http://127.0.0.1:{}", application.port());
let _ = tokio::spawn(application.run_until_stopped());
TestApp {
address,
db_pool: get_connection_pool(&configuration.database),
}
}
// [...]
//! src/main.rs
// [...]
// New import!
use zero2prod::startup::Application;
#[tokio::main]
async fn main() -> std::io::Result<()> {
// [...]
let application = Application::build(configuration).await?;
application.run_until_stopped().await?;
Ok(())
}
It's done - run cargo test if you want to double-check!
Build An API Client
All of our integration tests are black-box: we launch our application at the beginning of each test and interact with it using an HTTP client (i.e. reqwest).
As we write tests, we necessarily end up implementing a client for our API.
That's great!
It gives us a prime opportunity to see what it feels like to interact with the API as a user.
We just need to be careful not to spread the client logic all over the test suite - when the API changes, we don't want to go through tens of tests to remove a trailing s from the path of an endpoint.
Let's look at our subscriptions tests:
//! tests/api/subscriptions.rs
use crate::helpers::spawn_app;
#[tokio::test]
async fn subscribe_returns_a_200_for_valid_form_data() {
// Arrange
let app = spawn_app().await;
let client = reqwest::Client::new();
// Act
let body = "name=le%20guin&email=ursula_le_guin%40gmail.com";
let response = client
.post(&format!("{}/subscriptions", &app.address))
.header("Content-Type", "application/x-www-form-urlencoded")
.body(body)
.send()
.await
.expect("Failed to execute request.");
// Assert
assert_eq!(200, response.status().as_u16());
let saved = sqlx::query!("SELECT email, name FROM subscriptions",)
.fetch_one(&app.db_pool)
.await
.expect("Failed to fetch saved subscription.");
assert_eq!(saved.email, "ursula_le_guin@gmail.com");
assert_eq!(saved.name, "le guin");
}
#[tokio::test]
async fn subscribe_returns_a_400_when_data_is_missing() {
// Arrange
let app = spawn_app().await;
let client = reqwest::Client::new();
let test_cases = vec![
("name=le%20guin", "missing the email"),
("email=ursula_le_guin%40gmail.com", "missing the name"),
("", "missing both name and email"),
];
for (invalid_body, error_message) in test_cases {
// Act
let response = client
.post(&format!("{}/subscriptions", &app.address))
.header("Content-Type", "application/x-www-form-urlencoded")
.body(invalid_body)
.send()
.await
.expect("Failed to execute request.");
// Assert
assert_eq!(
400,
response.status().as_u16(),
// Additional customised error message on test failure
"The API did not fail with 400 Bad Request when the payload was {}.",
error_message
);
}
}
#[tokio::test]
async fn subscribe_returns_a_400_when_fields_are_present_but_invalid() {
// Arrange
let app = spawn_app().await;
let client = reqwest::Client::new();
let test_cases = vec![
("name=&email=ursula_le_guin%40gmail.com", "empty name"),
("name=Ursula&email=", "empty email"),
("name=Ursula&email=definitely-not-an-email", "invalid email"),
];
for (body, description) in test_cases {
// Act
let response = client
.post(&format!("{}/subscriptions", &app.address))
.header("Content-Type", "application/x-www-form-urlencoded")
.body(body)
.send()
.await
.expect("Failed to execute request.");
// Assert
assert_eq!(
400,
response.status().as_u16(),
"The API did not return a 400 Bad Request when the payload was {}.",
description
);
}
}
We have the same calling code in each test - we should pull it out and add a helper method to our TestApp struct:
//! tests/api/helpers.rs
// [...]
pub struct TestApp {
// [...]
}
impl TestApp {
pub async fn post_subscriptions(&self, body: String) -> reqwest::Response {
reqwest::Client::new()
.post(&format!("{}/subscriptions", &self.address))
.header("Content-Type", "application/x-www-form-urlencoded")
.body(body)
.send()
.await
.expect("Failed to execute request.")
}
}
// [...]
//! tests/api/subscriptions.rs
use crate::helpers::spawn_app;
#[tokio::test]
async fn subscribe_returns_a_200_for_valid_form_data() {
// [...]
// Act
let response = app.post_subscriptions(body.into()).await;
// [...]
}
#[tokio::test]
async fn subscribe_returns_a_400_when_data_is_missing() {
// [...]
for (invalid_body, error_message) in test_cases {
let response = app.post_subscriptions(invalid_body.into()).await;
// [...]
}
}
#[tokio::test]
async fn subscribe_returns_a_400_when_fields_are_present_but_invalid() {
// [...]
for (body, description) in test_cases {
let response = app.post_subscriptions(body.into()).await;
// [...]
}
}
We could add another method for the health check endpoint, but it's only used once - there is no need right now.
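If we did need one later, it would follow the same pattern as post_subscriptions - a hypothetical sketch, not something we are adding to the suite now:
//! tests/api/helpers.rs
// [...]
impl TestApp {
    // Hypothetical helper - shown only to illustrate how the pattern
    // extends to other endpoints.
    pub async fn get_health_check(&self) -> reqwest::Response {
        reqwest::Client::new()
            .get(&format!("{}/health_check", &self.address))
            .send()
            .await
            .expect("Failed to execute request.")
    }
}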
Summary
We started with a single-file test suite; we finished with a modular test suite and a robust set of helpers.
Just like application code, test code is never finished: we will have to keep working on it as the project evolves, but we have laid down solid foundations to keep moving forward without losing momentum.
We are now ready to tackle the remaining pieces of functionality needed to dispatch a confirmation email.
As always, all the code we wrote in this chapter can be found on GitHub.
See you next time!
Footnotes
1. Refer to the test organization chapter in the Rust book for more details.
2. You can actually override the default test framework and plug in your own. Look at libtest-mimic as an example!
3. See this article as an example with some numbers (1.9x speedup!). You should always benchmark the approach on your specific codebase before committing.
Book - Table Of Contents
The Table of Contents is provisional and might change over time. The draft below is the most accurate picture at this point in time.
- Getting Started
- Installing The Rust Toolchain
- Project Setup
- IDEs
- Continuous Integration
- Our Driving Example
- What Should Our Newsletter Do?
- Working In Iterations
- Sign Up A New Subscriber
- Telemetry
- Unknown Unknowns
- Observability
- Logging
- Instrumenting POST /subscriptions
- Structured Logging
- Go Live
- We Must Talk About Deployments
- Choosing Our Tools
- A Dockerfile For Our Application
- Deploy To DigitalOcean Apps Platform
- Rejecting Invalid Subscribers #1
- Requirements
- First Implementation
- Validation Is A Leaky Cauldron
- Type-Driven Development
- Ownership Meets Invariants
- Panics
- Error As Values - Result
- Reject Invalid Subscribers #2
- Error Handling
- What Is The Purpose Of Errors?
- Error Reporting For Operators
- Errors For Control Flow
- Avoid "Ball Of Mud" Error Enums
- Who Should Log Errors?
- Naive Newsletter Delivery
- User Stories Are Not Set In Stone
- Do Not Spam Unconfirmed Subscribers
- All Confirmed Subscribers Receive New Issues
- Implementation Strategy
- Body Schema
- Fetch Confirmed Subscribers List
- Send Newsletter Emails
- Validation Of Stored Data
- Limitations Of The Naive Approach
- Securing Our API
- Fault-tolerant Newsletter Delivery