How we cut our Django test runtime by 50% without compromising reliability

At Inkle Engineering, testing is the backbone of our workflow. It keeps every release predictable and reduces surprises as the product grows. Over time, we have adopted a set of habits that guide how we write and maintain tests:

  • Start with at least one happy-path test to confirm the core behaviour works.
  • Add tests for edge cases before they create production issues.
  • When a bug appears, add a test that prevents it from returning.
  • Prefer integration tests over isolated unit tests because they reflect real behaviour.
  • Run tests against the same database version as production to avoid inconsistencies.
  • Continue improving our approach as the team and codebase evolve.

These principles have helped maintain quality, but they did not protect us from a growing problem: slow test runs.

The Setup

Our test environment uses Docker Compose to spin up Redis and PostgreSQL. Everything runs against a dedicated Django test settings file. The setup has been reliable, but as the codebase grew, CI times increased significantly. At one point, a full run took more than 20 minutes.
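
For context, the Compose file behind that environment looks roughly like the sketch below. Service names, image tags, and credentials are illustrative placeholders rather than our exact configuration; the detail that matters is pinning PostgreSQL to the same major version we run in production.

    # docker-compose.test.yml (illustrative sketch)
    services:
      db:
        image: postgres:16          # pin to the production PostgreSQL version
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app
          POSTGRES_DB: app_test
        ports:
          - "5432:5432"
      redis:
        image: redis:7
        ports:
          - "6379:6379"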

A typical run included the following steps:

  1. Start Docker services
  2. Install dependencies
  3. Apply all migrations
  4. Start the Django application
  5. Run the test suite
  6. Generate coverage
  7. Check coverage thresholds

Each step was reasonable, but together they created unnecessary delays.
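
Stitched together, the original pipeline was a single GitHub Actions job along these lines; the file names, settings module, and coverage threshold are placeholders, not our exact values.

    # .github/workflows/tests.yml — original single-job shape (simplified)
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: docker compose -f docker-compose.test.yml up -d
          - run: pip install -r requirements.txt
          - run: python manage.py migrate --settings=config.settings.test   # the slow step (see Optimization 1)
          - run: coverage run manage.py test --settings=config.settings.test
          - run: coverage report --fail-under=90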

Optimization 1: Save and Restore the Migration State

One clear bottleneck was the migration step, which consistently added around 100 seconds to every test run.

We introduced a simple improvement:

  • Run all migrations once
  • Save a database dump
  • Commit the dump to the repository
  • Restore this dump before each test run
  • Run migrate again, which now only has to apply migrations created after the dump and therefore completes almost instantly

This reduced each test run by about 100 seconds. We update the migration dump every few months.
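
In CI terms, the change amounts to a restore step in front of migrate. The commands below are a sketch with placeholder paths and credentials (db/migrated.dump, the app user, config.settings.test); the exact invocation depends on how your database and settings are laid out.

    # Refreshing the committed dump (run manually every few months):
    #   python manage.py migrate --settings=config.settings.test
    #   pg_dump --format=custom -h localhost -U app app_test -f db/migrated.dump
    #
    # Workflow steps that run before the test suite:
    - name: Restore the committed migration dump
      run: pg_restore --clean --if-exists -h localhost -U app -d app_test db/migrated.dump
      env:
        PGPASSWORD: app
    - name: Apply only the migrations added since the dump
      run: python manage.py migrate --settings=config.settings.test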

Optimization 2: Parallelizing the Test Suite

The larger improvement came from parallelizing our GitHub Actions workflow. The goal was to divide the test suite into multiple jobs that could run at the same time.

Each parallel job receives:

  • Its own environment
  • The same pre-migrated database snapshot
  • A specific subset of Django apps to test

To make this work, we reorganized our workflow into two phases.
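
At a high level, the reworked workflow has the shape sketched below. Job names and the app grouping are invented for illustration; the `needs:` keys are what sequence the two phases.

    # .github/workflows/tests.yml — reorganized shape (simplified)
    jobs:
      prepare-db:              # Phase 1: migrate once, snapshot, upload
        runs-on: ubuntu-latest
        steps: []              # detailed below
      test:                    # Phase 2: restore the snapshot, test a subset of apps
        needs: prepare-db
        runs-on: ubuntu-latest
        strategy:
          matrix:
            apps: ["accounts billing", "invoices payments", "reports tasks"]
        steps: []              # detailed below
      coverage:                # merges per-job coverage, covered later in this post
        needs: test
        runs-on: ubuntu-latest
        steps: []              # detailed below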

Phase 1: Database Setup Job

A single job prepares the shared database state:

  • Start PostgreSQL using Docker Compose
  • Restore the pre-saved migration dump
  • Apply any pending migrations
  • Create a fresh database snapshot
  • Upload the snapshot as a GitHub Actions artifact

All test jobs will use this snapshot.
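
A sketch of that job, reusing the restore-and-migrate steps from Optimization 1 (names and paths remain placeholders):

    prepare-db:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - name: Start PostgreSQL
          run: docker compose -f docker-compose.test.yml up -d db
        - name: Wait for PostgreSQL to accept connections
          run: until pg_isready -h localhost; do sleep 1; done
        - name: Install dependencies
          run: pip install -r requirements.txt
        - name: Restore the committed dump, then apply pending migrations
          run: |
            pg_restore --clean --if-exists -h localhost -U app -d app_test db/migrated.dump
            python manage.py migrate --settings=config.settings.test
          env:
            PGPASSWORD: app
        - name: Snapshot the fully migrated database
          run: pg_dump --format=custom -h localhost -U app app_test -f snapshot.dump
          env:
            PGPASSWORD: app
        - name: Upload the snapshot for the test jobs
          uses: actions/upload-artifact@v4
          with:
            name: db-snapshot
            path: snapshot.dump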

Phase 2: Parallel Test Execution

Multiple jobs run in parallel. Each job:

  1. Downloads the pre-migrated database snapshot
  2. Restores it into its own PostgreSQL container
  3. Runs tests for a defined set of Django apps

With this structure, the wall-clock time of a run is bounded by the slowest slice rather than by the whole suite, and no job has to recreate the database from scratch.
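
A sketch of one such job is below. Two assumptions worth calling out: the matrix groups of apps are invented for illustration, and the sketch assumes the restored database doubles as the database Django's test runner uses, so the suite can run with --keepdb and skip rebuilding the schema; your test settings need to point the runner at that database for this to hold.

    test:
      needs: prepare-db
      runs-on: ubuntu-latest
      strategy:
        matrix:
          apps: ["accounts billing", "invoices payments", "reports tasks"]
      steps:
        - uses: actions/checkout@v4
        - name: Start PostgreSQL
          run: docker compose -f docker-compose.test.yml up -d db
        - name: Install dependencies
          run: pip install -r requirements.txt
        - name: Download the pre-migrated snapshot
          uses: actions/download-artifact@v4
          with:
            name: db-snapshot
        - name: Restore the snapshot into this job's database
          run: pg_restore --clean --if-exists -h localhost -U app -d app_test snapshot.dump
          env:
            PGPASSWORD: app
        - name: Run tests for this subset of apps
          run: >
            coverage run manage.py test ${{ matrix.apps }}
            --settings=config.settings.test --keepdb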

Managing Coverage Across Parallel Jobs

Parallel execution introduces a new challenge: multiple coverage files. We solved this in three steps.

Step 1: Preserve Coverage Files

Each test job uploads its coverage file as an artifact with a unique name.
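
One way to keep the files distinct, shown in the sketch below, is to rename each job's data file using its matrix index before uploading; `include-hidden-files` is needed because upload-artifact skips dot-prefixed files by default.

    # Appended to each matrix test job (names are illustrative)
    - name: Give this job's coverage data a unique file name
      run: mv .coverage .coverage.${{ strategy.job-index }}
    - name: Upload the coverage data file
      uses: actions/upload-artifact@v4
      with:
        name: coverage-${{ strategy.job-index }}
        path: .coverage.${{ strategy.job-index }}
        include-hidden-files: true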

Step 2: Aggregate in a Final Job

A dedicated coverage job waits for all test jobs to finish and downloads their artifacts.
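
Sketched with the same placeholder names, the job depends on the whole test matrix and pulls every coverage-* artifact into one directory.

    coverage:
      needs: test
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - run: pip install coverage
        - name: Download all per-job coverage artifacts
          uses: actions/download-artifact@v4
          with:
            pattern: coverage-*
            merge-multiple: true    # place every data file in the same directory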

Step 3: Combine the Results

We use Coverage.py's combine command to merge the per-job data files into a single dataset. Each file records which lines ran in its slice of the suite, so the union of them gives the same coverage picture as one monolithic run.
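
The final steps of the coverage job look roughly like this, with an illustrative fail-under threshold:

    - name: Combine the data files and enforce the threshold
      run: |
        coverage combine
        coverage report --fail-under=90
        coverage xml    # machine-readable report for other tooling, if needed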

This approach gives us faster CI runs, reliable coverage metrics, and an easy path to scale as our test suite grows.

We will share the exact configuration and Django-specific settings in a follow-up post.


If solving problems like this sounds fun, we’d love to meet you. Take a look at our open roles on the Inkle careers page.