BuildPulse

Detect and track flaky pytest tests

Almost every engineering team eventually experiences flaky tests. Martin Fowler sums up the problem nicely:

[Flaky tests] are a virulent infection that can completely ruin your entire test suite. As a result they need to be dealt with as soon as you can, before your entire deployment pipeline is compromised.

If you're experiencing flaky pytest tests, BuildPulse can help. BuildPulse is a service that helps you detect and track flaky tests to save your engineering team time.

How can I detect flaky pytest tests?

Identifying flaky pytest tests requires two steps: generating JUnit XML reports for your test results and then analyzing those reports either on your own or automatically with BuildPulse. Below we’ll walk you through how to do this.

Generate JUnit XML reports for test results

JUnit XML serves as a standard format for integration between tools that deal with test results. BuildPulse reads test results using this format, and you can use it for your analysis as well.
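To make the format concrete, here's a minimal sketch of reading a JUnit XML report with Python's standard library. The sample report embedded below is illustrative (the test names and contents are made up), but the structure matches what pytest emits in the xunit2 format:

```python
import xml.etree.ElementTree as ET

# Illustrative xunit2-style report; in practice you'd read this from a
# file like junit.xml produced by pytest.
xml_report = """\
<testsuites>
  <testsuite name="pytest" tests="2" failures="1">
    <testcase classname="tests.test_example" name="test_passes" time="0.01"/>
    <testcase classname="tests.test_example" name="test_fails" time="0.02">
      <failure message="assert 1 == 2">AssertionError</failure>
    </testcase>
  </testsuite>
</testsuites>
"""

root = ET.fromstring(xml_report)
for case in root.iter("testcase"):
    # A <testcase> with a <failure> child failed; otherwise it passed.
    outcome = "failed" if case.find("failure") is not None else "passed"
    print(f"{case.get('classname')}::{case.get('name')}: {outcome}")
```

Each `testcase` element records the test's class, name, and duration, and failed tests carry a nested `failure` element with the error message, which is all the raw material you need for the analysis described later.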

In the steps below, we'll start with a Python project that has an existing pytest test suite, and we'll add JUnit XML as an additional output format for the test suite. In each step, we'll show the Git diff for the change that we're making.

  1. Configure pytest to use the xunit2 format.

    Create a pytest.ini file (or update the existing file if your project already has one) and set the junit_family as shown below.

    @@ -0,0 +1,2 @@
    +[pytest]
    +junit_family=xunit2

    If you're using pytest 6.1 or later, xunit2 is already the default. As long as you're not currently overriding the default, you can skip this step for pytest 6.1 or later.

  2. Update your CI workflow to output a JUnit XML file describing the test results.

    Below we add the --junitxml option when running pytest, and we tell it to write the report to a file named junit.xml at the root of the project.

    @@ -23,4 +23,4 @@ jobs:
     
         - name: Test with pytest
           run: |
    -        pipenv run pytest
    +        pipenv run pytest --junitxml=junit.xml

    This example project uses GitHub Actions for CI, so we're updating our CI script in the .github/workflows/ directory. If you're using a different CI service, apply this change wherever your CI script is defined (e.g., .circleci/config.yml for CircleCI, etc.).

  3. Commit these changes to your repository.

    git commit -am "Update CI to generate JUnit XML for test results"

The final result of these changes should resemble commit 830a749 in the buildpulse-example-pytest repository.

Analyze results to identify flaky tests

Now that your CI builds are generating JUnit XML reports, you can use those reports to find and track flaky tests as described below. Or, if you'd rather get this working in just a few minutes, you can automate everything with BuildPulse.

  1. Save the data to a central location

    At the end of each CI build, save the following data to a central location (such as S3, etc.) for later processing:

    • JUnit XML files
    • Unique fingerprint for the code (such as the Git SHA)
    • URL for the CI build
  2. Analyze the data to find flaky tests

    For builds that share the same unique code fingerprint (i.e., builds that ran against the exact same code), parse the JUnit XML files to find any tests that passed in one build and failed in another. When a test produces different results for the same code, that test is flaky.

  3. Keep track of the flaky tests you found

    For each of the flaky tests identified in the previous step, store the key information about the flaky results. This information will show which tests are causing the most problems, and it will offer useful clues when investigating potential ways to fix the flakiness:

    • Test name
    • Details for at least one passing result, including the timestamp and the build URL
    • Details for each failure, including the timestamp, failure message, and the build URL

    You can store this information anywhere that allows your team to track it over time and sort it to find the most frequent flaky tests (e.g., a database, a spreadsheet, an issue tracker). Or you can let BuildPulse handle all of the analysis and tracking for you.
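The analysis in the steps above can be sketched in a few lines of Python. The build records below are hypothetical; in practice you'd assemble them from the JUnit XML files and build metadata saved in your central location:

```python
from collections import defaultdict

# Each record: (commit SHA, test name, passed?) — hypothetical example data.
results = [
    ("830a749", "tests/test_checkout.py::test_total", True),
    ("830a749", "tests/test_checkout.py::test_total", False),  # same code, different result
    ("830a749", "tests/test_cart.py::test_add_item", True),
    ("1c0ffee", "tests/test_cart.py::test_add_item", False),   # different code, so not flaky
]

# Collect the set of outcomes each test produced for each commit SHA.
outcomes = defaultdict(set)
for sha, test, passed in results:
    outcomes[(sha, test)].add(passed)

# A test is flaky when the same code produced both a pass and a failure.
flaky = sorted({test for (_, test), seen in outcomes.items() if seen == {True, False}})
print(flaky)  # ['tests/test_checkout.py::test_total']
```

Note that `test_add_item` is not flagged: it passed on one commit and failed on a different one, which could simply mean the code change broke it. Only a pass and a failure against identical code counts as flaky.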

Automatically detect and track flaky tests with BuildPulse

Most teams don't build their own issue trackers or databases; proven solutions already exist, and engineering time is better spent on the team's own core product. The same idea applies here. 😅

BuildPulse pays for itself in saved developer time. Instead of building and maintaining your own tooling to detect, track, and rank flaky tests, you can let BuildPulse do it for you.

To automate everything with BuildPulse:

  1. Start a free trial and follow the prompts to install BuildPulse on your repository.

  2. Add a few lines to your CI script to send your JUnit XML reports to BuildPulse. See our guides for your CI provider.

Then, BuildPulse will automatically analyze your test results to identify flaky tests. The dashboard presents a rich visualization of your flaky tests over time, and it highlights the most disruptive ones so you know exactly where to focus first for maximum impact.

[Screenshot: list of a repository's flaky tests]

If you run into any trouble getting things set up, or if you have any questions at all, please get in touch.