Code coverage is one of the most widely tracked software quality metrics. Knowing which lines, branches, and functions your tests exercise gives teams confidence when shipping changes. But sending coverage reports to cloud SaaS platforms like Codecov, Coveralls, or SonarCloud means handing your proprietary source metrics to a third party.
For organizations with compliance requirements, air-gapped networks, or simply a preference for keeping data in-house, self-hosted code coverage tooling is the answer. This guide compares the three most popular open-source coverage collectors — JaCoCo (Java), Coverage.py (Python), and Istanbul/nyc (JavaScript/TypeScript) — and shows you how to run them in a self-hosted CI/CD pipeline with local reporting dashboards.
Why Self-Host Code Coverage
Running your own coverage infrastructure offers several advantages over SaaS alternatives:
- Data sovereignty — source-level coverage data stays within your infrastructure. No third party sees which code paths your tests hit.
- No rate limits or quotas — self-hosted tools process unlimited builds without per-repo or per-upload restrictions.
- Custom retention policies — keep coverage history for years, not months. Audit trails and compliance requirements often demand long-term data retention.
- Offline support — coverage collection works in air-gapped or restricted networks without external API access.
- Cost savings — SaaS coverage platforms charge per developer or per repository. Self-hosted tools are free and scale horizontally on your own hardware.
For teams already running self-hosted CI/CD platforms like Gitea Actions, Woodpecker CI, or Jenkins, integrating local coverage collection is a natural next step.
JaCoCo: Java Code Coverage
JaCoCo (Java Code Coverage) is the de facto standard for JVM languages. It uses Java agent instrumentation to track line, branch, and method coverage at runtime. The project has over 4,500 GitHub stars and was last updated in April 2026, demonstrating active maintenance.
Key Features
- Bytecode instrumentation via Java agent (`-javaagent`) — no source modification needed
- Maven and Gradle plugin integration
- HTML, XML, and CSV report formats
- Branch, line, method, and class-level coverage metrics
- Multi-module project support with aggregation
- Ant integration for legacy build systems
Docker Compose Setup for JaCoCo Reports
While JaCoCo itself is a Java library embedded in your build, you can self-host a reporting server to aggregate results from multiple builds. Here’s a Docker Compose configuration using SonarQube Community Edition as the coverage dashboard:
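Since the original configuration isn't reproduced here, the following is a minimal sketch using the official `sonarqube:community` and `postgres` images (tags, credentials, and volume names are illustrative placeholders):

```yaml
services:
  sonarqube:
    image: sonarqube:community
    ports:
      - "9000:9000"
    environment:
      - SONAR_JDBC_URL=jdbc:postgresql://db:5432/sonar
      - SONAR_JDBC_USERNAME=sonar
      - SONAR_JDBC_PASSWORD=sonar
    volumes:
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
      - POSTGRES_DB=sonar
    volumes:
      - postgresql_data:/var/lib/postgresql/data

volumes:
  sonarqube_data:
  sonarqube_extensions:
  postgresql_data:
```

Once both containers are healthy, the SonarQube web UI is available on port 9000.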
Maven Configuration
Add the JaCoCo plugin to your pom.xml:
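A typical plugin block looks like the following sketch (0.8.12 is a recent JaCoCo release; pin whatever version your build standardizes on):

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.12</version>
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <id>report</id>
      <phase>test</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The `prepare-agent` goal attaches the JaCoCo Java agent to the test JVM; the `report` goal renders the collected execution data after the test phase.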
Generate the report:
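With the plugin configured, a standard Maven invocation produces the report:

```bash
mvn clean test jacoco:report
```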
The HTML report will be available at target/site/jacoco/index.html.
Gradle Configuration
For Gradle projects, apply the plugin in build.gradle.kts:
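A minimal Kotlin DSL sketch (the task wiring and report formats shown are common defaults, not the only option):

```kotlin
plugins {
    java
    jacoco
}

jacoco {
    toolVersion = "0.8.12"
}

tasks.test {
    // Always render the coverage report after tests run
    finalizedBy(tasks.jacocoTestReport)
}

tasks.jacocoTestReport {
    dependsOn(tasks.test)
    reports {
        xml.required.set(true)   // for SonarQube import
        html.required.set(true)  // for local browsing
    }
}
```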
Setting Up Quality Gates
In SonarQube, navigate to Quality Gates and create a gate that enforces minimum coverage thresholds:
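The gate can also be created programmatically through the web API. This is a sketch only: the gate name and 80% threshold are examples, and parameter names vary between SonarQube versions, so check the API docs for your release:

```bash
# Create a quality gate, then attach a minimum-coverage condition
curl -u admin:admin -X POST \
  "http://localhost:9000/api/qualitygates/create?name=Coverage%20Gate"
curl -u admin:admin -X POST \
  "http://localhost:9000/api/qualitygates/create_condition?gateName=Coverage%20Gate&metric=coverage&op=LT&error=80"
```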
This ensures every merge request meets your team’s coverage standards. You can configure the SonarQube scanner in your CI pipeline:
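A sketch of the scanner invocation (the project key, server URL, and `SONAR_TOKEN` secret are placeholders for your own values):

```bash
sonar-scanner \
  -Dsonar.projectKey=my-service \
  -Dsonar.host.url=http://sonarqube.internal:9000 \
  -Dsonar.token="$SONAR_TOKEN" \
  -Dsonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
```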
Coverage.py: Python Code Coverage
Coverage.py (3,360 GitHub stars, last updated April 2026) is the standard coverage tool for Python. It uses Python’s sys.settrace hook to monitor execution at the line level, making it lightweight and accurate.
Key Features
- Line-level coverage tracking via `sys.settrace`
- Branch coverage measurement
- HTML, XML (Cobertura format), JSON, and annotated source reports
- Parallel mode for distributed test execution
- Plugin architecture for custom tracer support
- Integration with pytest via `pytest-cov` (2,031 GitHub stars)
Installation and Basic Usage
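Coverage.py installs from PyPI:

```bash
pip install coverage
```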
Run your test suite with coverage:
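Prefixing the test command with `coverage run` records execution data in a `.coverage` file (pytest is assumed here; any Python entry point works):

```bash
coverage run -m pytest
```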
Generate HTML and XML reports:
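The recorded data can then be rendered in several formats:

```bash
coverage html   # writes htmlcov/index.html
coverage xml    # writes coverage.xml in Cobertura format
```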
Docker-Based Coverage Collection
For Python projects running in containers, you can collect coverage during integration tests:
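One common pattern is a test-only Compose file that writes the XML report to a bind-mounted directory. A sketch, assuming a containerized app with a Postgres dependency (service names, paths, and credentials are illustrative):

```yaml
services:
  tests:
    build: .
    command: >
      sh -c "coverage run -m pytest tests/integration
             && coverage xml -o /reports/coverage.xml"
    volumes:
      - ./reports:/reports
    depends_on:
      - postgres
  postgres:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=test
```

After `docker compose -f docker-compose.test.yml up --exit-code-from tests`, the Cobertura report lands in `./reports` on the host.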
pytest-cov Integration
The most common way to collect Python coverage is through pytest-cov:
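A typical invocation (`myapp` is a placeholder for your package name):

```bash
pytest --cov=myapp --cov-report=term-missing --cov-report=html
```

The `term-missing` reporter prints uncovered line numbers directly in the terminal output.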
Combining Reports from Multiple Test Suites
For projects with separate unit and integration test suites:
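A sketch of the combine workflow (the `tests/unit` and `tests/integration` paths are illustrative):

```bash
# Each run writes a separate .coverage.* data file
coverage run --parallel-mode -m pytest tests/unit
coverage run --parallel-mode -m pytest tests/integration

# Merge the data files, then report on the combined result
coverage combine
coverage report
coverage html
```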
Pytest Configuration
Add coverage defaults to pyproject.toml:
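A sketch of sensible defaults (`myapp` and the 80% threshold are placeholders to adapt):

```toml
[tool.coverage.run]
branch = true
source = ["myapp"]
omit = ["tests/*", "*/migrations/*"]

[tool.coverage.report]
fail_under = 80
show_missing = true

[tool.pytest.ini_options]
addopts = "--cov=myapp --cov-report=term-missing"
```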
Istanbul/nyc: JavaScript and TypeScript Coverage
Istanbul is the leading JavaScript code coverage framework, and nyc (5,759 GitHub stars, updated February 2026) is its command-line interface. Istanbul instruments code at the AST level, providing accurate statement, branch, function, and line coverage for both Node.js and browser environments.
Key Features
- AST-based instrumentation — accurate even for transpiled code
- Supports statement, branch, function, and line coverage
- TypeScript support via `ts-node` integration
- lcov, clover, cobertura, HTML, JSON, and text reports
- Coverage threshold enforcement with exit codes
- Per-file coverage reporting
- Caching for faster repeated runs
Installation
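nyc installs as a dev dependency:

```bash
npm install --save-dev nyc
```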
Basic Usage with Mocha
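Wrapping the test runner with nyc is enough to collect coverage:

```bash
npx nyc mocha
```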
Jest Integration
Jest has built-in Istanbul support. Enable it in jest.config.js:
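A sketch enabling coverage with thresholds (the directory, reporters, and percentages are examples):

```javascript
module.exports = {
  collectCoverage: true,
  coverageDirectory: 'coverage',
  coverageReporters: ['text', 'lcov', 'html'],
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};
```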
Run tests with coverage:
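Coverage can also be enabled per-run from the command line:

```bash
npx jest --coverage
```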
nyc Configuration for TypeScript Projects
For TypeScript projects using ts-node:
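Alongside nyc itself, the TypeScript toolchain needs `ts-node` and the shared nyc TypeScript preset. This package list is one common setup, not the only one:

```bash
npm install --save-dev nyc typescript ts-node @istanbuljs/nyc-config-typescript source-map-support
```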
Configure nyc.config.js:
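A sketch extending the TypeScript preset (globs and thresholds are examples to adapt):

```javascript
module.exports = {
  extends: '@istanbuljs/nyc-config-typescript',
  all: true,                      // report on files never loaded by tests too
  include: ['src/**/*.ts'],
  exclude: ['**/*.test.ts'],
  reporter: ['text', 'html', 'lcov'],
  'check-coverage': true,         // fail the process below the thresholds
  branches: 80,
  lines: 80,
};
```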
Run with:
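For example, with Mocha as the runner (the test glob is a placeholder for your layout):

```bash
npx nyc mocha --require ts-node/register 'src/**/*.test.ts'
```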
Docker Setup for JavaScript Coverage
Here’s a Docker Compose setup that runs tests with coverage and serves the HTML report:
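A sketch using a shared volume and a static nginx server for the report (service names, the `test:coverage` script, and ports are illustrative):

```yaml
services:
  tests:
    build: .
    command: npm run test:coverage
    volumes:
      - coverage:/app/coverage
  report:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - coverage:/usr/share/nginx/html:ro
    depends_on:
      - tests

volumes:
  coverage:
```

After the test container exits, the HTML report is browsable at `http://localhost:8080`.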
Add the test script to package.json:
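For example (Mocha is assumed as the runner here):

```json
{
  "scripts": {
    "test": "mocha",
    "test:coverage": "nyc --reporter=html --reporter=lcov mocha"
  }
}
```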
Comparison Table
| Feature | JaCoCo | Coverage.py | Istanbul/nyc |
|---|---|---|---|
| Language | Java, Kotlin, Scala | Python | JavaScript, TypeScript |
| Stars | 4,533 | 3,360 | 5,759 |
| Last Updated | April 2026 | April 2026 | February 2026 |
| Instrumentation | Bytecode agent (JVMTI) | sys.settrace hook | AST transformation |
| Line Coverage | Yes | Yes | Yes |
| Branch Coverage | Yes | Yes | Yes |
| Function Coverage | Method-level | Via functions extension | Yes |
| Condition Coverage | Yes | No | No |
| HTML Report | Yes | Yes | Yes |
| XML Report | Yes | Yes (Cobertura format) | Yes (lcov, clover) |
| Threshold Enforcement | Via SonarQube | fail_under config | check-coverage flag |
| Parallel Mode | Multi-module aggregation | --parallel-mode | --all with cache |
| CI Integration | Maven/Gradle plugins | pytest-cov | Jest built-in, nyc CLI |
| Docker Support | Via build containers | First-class | First-class |
| Source Maps | N/A (compiled bytecode) | N/A | Yes |
| Minimum Version Enforcement | Via Maven/Gradle rules | fail_under in config | check-coverage in config |
Self-Hosted Coverage Aggregation Dashboard
Collecting coverage in individual CI runs is only half the story. To track trends, enforce quality gates, and visualize progress over time, you need a self-hosted aggregation platform. Here are the most popular options:
SonarQube Community Edition
The most widely used self-hosted code quality platform. Supports coverage data from JaCoCo, Coverage.py (via Cobertura XML), and Istanbul (via lcov).
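For a polyglot repository, a single `sonar-project.properties` can point the scanner at each tool's report (the project key and paths are illustrative):

```properties
sonar.projectKey=polyglot-service
# Java: JaCoCo XML
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
# Python: Cobertura XML from Coverage.py
sonar.python.coverage.reportPaths=coverage.xml
# JavaScript/TypeScript: lcov from Istanbul/nyc
sonar.javascript.lcov.reportPaths=coverage/lcov.info
```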
Codecov Self-Hosted
Codecov offers an enterprise self-hosted version that provides the same dashboard experience as their SaaS product, but running on your infrastructure.
Custom Coverage Dashboard with Grafana
For teams already running Grafana and Prometheus, you can parse coverage XML reports and push metrics to Prometheus:
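Since the original script isn't shown, here is a stdlib-only sketch: it reads the overall `line-rate` attribute from a Cobertura report and renders it as a gauge in Prometheus exposition format. The metric name, `project` label, and Pushgateway URL are assumptions, not fixed conventions:

```python
import xml.etree.ElementTree as ET


def cobertura_line_rate(xml_text: str) -> float:
    """Extract the overall line-rate (a 0.0-1.0 fraction) from a Cobertura report."""
    root = ET.fromstring(xml_text)
    return float(root.attrib["line-rate"])


def to_prometheus(project: str, line_rate: float) -> str:
    """Render the coverage percentage as a Prometheus gauge in exposition format."""
    return (
        "# TYPE code_coverage_line_percent gauge\n"
        f'code_coverage_line_percent{{project="{project}"}} {line_rate * 100:.1f}\n'
    )


# Minimal Cobertura-style document standing in for a real coverage.xml
sample = '<coverage line-rate="0.874" branch-rate="0.790"></coverage>'
metrics = to_prometheus("my-service", cobertura_line_rate(sample))
print(metrics)
# The rendered text can be POSTed to a Pushgateway (e.g. with urllib.request)
# at a URL like http://pushgateway:9091/metrics/job/coverage
```

A Grafana panel graphing `code_coverage_line_percent` over time then gives the trend view that a single CI log line cannot.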
CI/CD Pipeline Integration
Here’s a complete GitHub Actions / Woodpecker CI pipeline that collects coverage and uploads it to a self-hosted SonarQube:
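A sketch in Woodpecker syntax using a Python project as the example (image tags, the server URL, and the `sonar_token` secret are placeholders; Woodpecker's secret syntax differs across versions):

```yaml
steps:
  test:
    image: python:3.12
    commands:
      - pip install -r requirements.txt coverage pytest
      - coverage run -m pytest
      - coverage xml

  sonar-upload:
    image: sonarsource/sonar-scanner-cli
    commands:
      - sonar-scanner
        -Dsonar.projectKey=my-service
        -Dsonar.host.url=http://sonarqube.internal:9000
        -Dsonar.token=$SONAR_TOKEN
        -Dsonar.python.coverage.reportPaths=coverage.xml
    secrets: [sonar_token]
```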
Best Practices for Self-Hosted Coverage
- Set realistic thresholds — 80% line coverage is a good baseline. Require higher thresholds (90%+) for critical modules like authentication and payment processing.
- Measure branch coverage, not just line coverage — a line can be “covered” while an `if` branch inside it never executes. Branch coverage catches this gap.
- Fail CI on coverage regression — configure your pipeline to block merges that decrease overall coverage percentage. Use tools like `nyc --check-coverage` or Coverage.py’s `fail_under`.
- Exclude generated code — never count auto-generated files (migrations, protobuf stubs, OpenAPI clients) in coverage calculations. Use `omit`/`exclude` patterns.
- Combine test suite coverage — run unit, integration, and end-to-end test coverage collection separately, then merge results for a complete picture. For Python, use `coverage combine`; for JavaScript, point nyc to the same output directory.
- Track coverage trends over time — a single percentage is less useful than a trend. Use your self-hosted dashboard to show coverage trajectory across sprints and releases.
For related reading, see our SonarQube vs Semgrep vs CodeQL code quality guide for broader static analysis strategies, the complete CI/CD platforms comparison for pipeline setup, and our E2E testing tools guide for testing strategies that complement coverage metrics.
FAQ
What is the difference between line coverage and branch coverage?
Line coverage measures whether each executable line of code was run at least once during testing. Branch coverage goes further — it checks whether each conditional branch (true/false paths in if, else, switch statements) was exercised. A line containing if (x > 0) could have 100% line coverage while only the true branch is tested. Branch coverage catches this, making it a stricter and more meaningful metric.
Can I use one coverage reporting tool for multiple programming languages?
Yes. SonarQube Community Edition accepts coverage data from JaCoCo (Java), Coverage.py (Python, via Cobertura XML), Istanbul/nyc (JavaScript, via lcov), and many others. This makes it a unified self-hosted dashboard for polyglot projects. Alternatively, Grafana + Prometheus can display coverage from any tool that can output to a structured format.
How do I combine coverage reports from multiple CI runners?
For Python, run coverage run --parallel-mode on each runner, then coverage combine to merge. For JavaScript/nyc, all runners write to the same .nyc_output directory (shared via CI artifact storage), and nyc report merges automatically. For Java/JaCoCo, use the jacoco:merge Maven goal or Gradle’s JacocoMerge task to aggregate .exec files from parallel test suites.
Should I aim for 100% code coverage?
No. 100% coverage is rarely practical and often counterproductive. It encourages writing tests that assert trivial code (getters, setters, boilerplate) rather than meaningful behavior. A realistic target is 80-90% for application code, with critical modules (authentication, financial calculations, data validation) pushed higher. Focus on testing behavior, not metrics.
How do I exclude files from coverage calculations?
Each tool has its own exclusion syntax. For Coverage.py, use omit = ["tests/*", "*/migrations/*"] in pyproject.toml. For nyc, set exclude: ['**/*.test.js', '**/__mocks__/**'] in the config. For JaCoCo, configure <excludes> in the Maven plugin or use **/*Test* patterns. Always exclude test files themselves, generated code, and third-party dependencies.
Is self-hosted coverage better than using Codecov or Coveralls?
It depends on your priorities. Self-hosted solutions offer complete data control, no per-repo pricing, unlimited retention, and offline operation. SaaS platforms offer zero-infrastructure setup, pull request annotations, and polished UIs out of the box. For organizations handling sensitive code, operating in restricted networks, or managing dozens of repositories, self-hosted coverage typically provides better value and compliance.