Generating Node.js code coverage reports from Docker containers


Generating code coverage reports from multi-container Node.js Docker-based applications can be a challenge. I’m working on a multi-container application that spawns multiple services with shared dependencies from a single code base. We run tests on CI, which spins up the system using Docker Compose, and we want to generate code coverage information from all Node.js containers, merge it, and produce a single report.

The approach I’ll briefly describe is somewhat based on my previous post: Code coverage reports with Puppeteer and Istanbul.

We only want to generate code coverage information when running automated tests, so the first step is to support building and running a container with or without code instrumentation out of the same Dockerfile.

We’ll do this by introducing a COVERAGE environment variable, treated as a flag:

$ cat docker-compose.yml
...
services:
  api:
    ...
    environment:
      - COVERAGE
    ...
...

If this variable is set, the build will use nyc to instrument the code and store the instrumented copy in a separate directory. We also copy package.json and package-lock.json into that directory, so we can install dependencies there and run the app from it with npm start:

if [ -n "$COVERAGE" ]
then
  mkdir .nyc-root
  cp package.json package-lock.json .nyc-root

  # Write an instrumented copy of lib/ into .nyc-root/lib
  nyc instrument lib .nyc-root/lib
fi
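For context, here is a rough sketch of how this step could be wired into the Dockerfile. The script names, base image, and paths are hypothetical; the point is only that COVERAGE is passed through at build time so the instrumentation step above can run conditionally:

```dockerfile
# Hypothetical sketch -- names and paths are made up
FROM node:10

# docker-compose can pass COVERAGE through as a build argument
ARG COVERAGE
ENV COVERAGE=$COVERAGE

WORKDIR /path/to/app
COPY . .

# nyc must be installed for the instrumentation script to work
RUN npm ci && ./scripts/instrument.sh

ENTRYPOINT ["./scripts/entrypoint.sh"]
```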

The Dockerfile entry point script then runs the app either from the project root or from .nyc-root, depending on whether COVERAGE is set:

if [ -n "$COVERAGE" ]
then
  cd .nyc-root
fi

NODE_ENV=production npm ci
npm start

This allows us to run the application with instrumented code, but it doesn’t give us an easy way to get the code coverage information out; for that, we need some runtime help.

When code instrumented by nyc runs, the runtime coverage counters live in a global object, accessible as global['__coverage__'].

The Docker Compose “stop” command sends a SIGTERM signal to the container, which we can intercept to dump the code coverage information before exiting:

const uuid = require('uuid/v4')
const fs = require('fs')

process.on('exit', () => {
  // 'exit' handlers must be synchronous, hence the *Sync calls
  fs.mkdirSync('.nyc_output', { recursive: true })
  const dump = JSON.stringify(global['__coverage__'], null, 2)
  fs.writeFileSync(`.nyc_output/${uuid()}.json`, dump, 'utf8')
})

// This is the signal that Docker Compose sends
// when doing "docker-compose stop"
process.on('SIGTERM', () => {
  process.exit(0)
})

If we now bring up our Docker Compose system with COVERAGE=1, then all the containers will start storing code coverage information at runtime. Running docker-compose stop will cause all containers to dump the report in their local .nyc_output directory.
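With those pieces in place, a coverage-enabled run boils down to something like this (a sketch; the exact flags and service layout depend on your setup):

```shell
# Bring everything up with instrumentation enabled
COVERAGE=1 docker-compose up -d

# ... run the test suite against the running services ...

# Sends SIGTERM to each container, triggering the coverage dump
docker-compose stop
```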

There are two ways to get the reports back to the main test driver in order to generate the final report. The first is to mount a shared volume into each container:

$ cat docker-compose.yml
...
volumes:
  nyc_output: {}
...
services:
  api:
    ...
    volumes:
      - nyc_output:/path/to/app/.nyc_output:rw
    ...
...
The second is to copy the files out of the container with docker cp. Since the coverage dump only happens on SIGTERM, the container must be stopped first (docker cp works on stopped containers):

docker-compose up -d api
# ... run the tests ...
docker-compose stop api
docker cp api_1:/path/to/app/.nyc_output .nyc_output

We can then easily generate a proper report:

nyc report --reporter=text --reporter=html

The approach I’m actually using is slightly more involved: I use multi-stage Dockerfiles to generate and cache the instrumented code at build time, so that only the necessary files end up in the final container.
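For reference, a rough sketch of what such a multi-stage Dockerfile could look like. Stage names, base image, and paths are hypothetical, and the details will vary per project:

```dockerfile
# Hypothetical multi-stage sketch: the instrumented copy is generated
# and cached in a builder stage, then copied into the final image.
FROM node:10 AS builder
WORKDIR /build
COPY package.json package-lock.json ./
RUN npm ci
COPY lib lib
# Generate the instrumented copy at build time (cached between builds)
RUN mkdir .nyc-root \
  && cp package.json package-lock.json .nyc-root \
  && npx nyc instrument lib .nyc-root/lib

FROM node:10
WORKDIR /path/to/app
# Copy only what the final container needs to run
COPY --from=builder /build ./
ENTRYPOINT ["./entrypoint.sh"]
```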