Why Self-Host Your Time-Series Database?
Time-series data is everywhere — server metrics, IoT sensor readings, financial tick data, application telemetry, and user analytics all generate timestamped records at high volume. Storing and querying this data efficiently requires a database purpose-built for time-based workloads.
Self-hosting a time-series database gives you:
- Complete data sovereignty: Your metrics and sensor data never leave your infrastructure
- No per-query or per-gigabyte billing: Ingest millions of points without cloud vendor invoices
- Custom retention policies: Keep raw data for years, not the 30-day windows cloud providers impose
- Full query flexibility: Run arbitrary aggregations, joins, and window functions without API limits
- Horizontal scalability: Scale across your own hardware or VPS instances on your terms
In 2026, three open-source time-series databases stand out for self-hosted deployments: InfluxDB, QuestDB, and TimescaleDB. Each takes a fundamentally different architectural approach. This guide compares them head-to-head and provides production-ready Docker configurations for each.
Understanding Time-Series Databases
Before diving into the comparison, it helps to understand what makes a time-series database different from a general-purpose relational or document database:
- Append-only workload: Time-series data is almost always insert-heavy, with rare updates or deletes
- Time-partitioned storage: Data is organized by time ranges, making range queries over date windows extremely fast
- Automatic downsampling: Old high-resolution data can be aggregated into lower-resolution summaries to save space
- Specialized compression: Timestamp deltas and repeated tag values compress dramatically better than generic column storage
- Time-first query patterns: GROUP BY time windows, LAST(), FIRST(), rate-of-change, and interpolation are first-class operations
A general-purpose database like PostgreSQL or MySQL can store time-series data, but without these optimizations it typically struggles at sustained ingestion rates above roughly 10,000 writes per second, and without specialized compression its storage footprint grows far faster.
Quick Comparison Table
| Feature | InfluxDB 3 Core | QuestDB | TimescaleDB |
|---|---|---|---|
| Storage Engine | Apache Arrow + Parquet | Custom columnar | PostgreSQL extension (B-tree + columnar) |
| Query Language | SQL + InfluxQL | SQL (PostgreSQL dialect) | SQL (native PostgreSQL) |
| License | MIT | Apache 2.0 | PostgreSQL License |
| Data Model | Tags + Fields (NoSQL-style) | Relational tables | Relational tables (hypertables) |
| Max Ingestion | ~1M+ points/sec | ~1.5M+ rows/sec | ~500K rows/sec |
| Compression Ratio | 10-20x raw | 8-15x raw | 5-10x raw |
| Built-in Downsampling | ✅ Task scheduler | ✅ Continuous queries | ✅ Continuous aggregates |
| Joins Support | ⚠️ Limited | ✅ Full SQL joins | ✅ Full SQL joins |
| Ecosystem | Telegraf, Grafana, Kapacitor | Grafana, pandas, Kafka | pgAdmin, PostGIS, all PG tools |
| Clustering | ❌ OSS is single-node | ❌ OSS is single-node | ✅ Citus distributed |
| Disk Usage (1B points) | ~2-4 GB | ~3-5 GB | ~5-10 GB |
| Docker Image Size | ~200 MB | ~150 MB | ~400 MB (with PostgreSQL) |
| Minimum RAM | 512 MB | 1 GB | 1 GB |
| Best For | IoT, metrics, DevOps | Financial, analytics, log data | General-purpose + time-series hybrid |
InfluxDB 3 Core
InfluxDB is the most well-known purpose-built time-series database. Version 3 rewrote the storage engine on top of Apache Arrow and Parquet files, delivering dramatically better query performance and compression than the older v2 TSM engine. InfluxDB 3 Core is the open-source single-node edition.
Architecture
InfluxDB 3 Core uses a two-tier storage model:
- Write buffer (in-memory Arrow): Incoming writes are batched in memory using Apache Arrow columnar format
- Parquet files on disk: Periodically, the buffer is flushed to compressed Parquet files partitioned by time
This design gives excellent read performance because Parquet files can be scanned column-by-column, skipping irrelevant data entirely.
Installation with Docker
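A minimal single-node launch might look like the following. The image name, port, and `serve` flags reflect the InfluxDB 3 Core documentation at the time of writing and may differ in your release, so verify them before deploying:

```shell
# Run InfluxDB 3 Core with file-backed object storage.
# 8181 is the HTTP API port; data persists in a named volume.
docker run -d --name influxdb3 \
  -p 8181:8181 \
  -v influxdb3-data:/var/lib/influxdb3 \
  quay.io/influxdb/influxdb3-core:latest \
  serve --node-id node0 \
        --object-store file \
        --data-dir /var/lib/influxdb3
```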
Create a Database and Write Data
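A sketch of the workflow using the bundled `influxdb3` CLI and the HTTP line-protocol endpoint. The database name `sensors` is just an example, and the endpoint path should be checked against your version's API reference:

```shell
# Create a database, then write two points in InfluxDB line protocol.
docker exec influxdb3 influxdb3 create database sensors

curl -X POST "http://localhost:8181/api/v3/write_lp?db=sensors" \
  --data-binary 'cpu,host=web1,region=eu usage=64.2
cpu,host=web2,region=eu usage=48.7'
```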
Query Data
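Queries can go through the SQL endpoint. The path is assumed from the v3 API docs; adjust if your release differs:

```shell
# Run a SQL aggregation over the data written above.
curl -G "http://localhost:8181/api/v3/query_sql" \
  --data-urlencode "db=sensors" \
  --data-urlencode "q=SELECT host, avg(usage) AS avg_usage FROM cpu GROUP BY host"
```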
Docker Compose with Telegraf
A common production pattern pairs InfluxDB with Telegraf for metric collection:
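A sketch of such a stack, assuming the image names used earlier in this guide; pin versions and adjust volumes for your environment:

```yaml
services:
  influxdb:
    image: quay.io/influxdb/influxdb3-core:latest
    command: serve --node-id node0 --object-store file --data-dir /var/lib/influxdb3
    ports:
      - "8181:8181"
    volumes:
      - influxdb3-data:/var/lib/influxdb3
    restart: unless-stopped

  telegraf:
    image: telegraf:latest
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
    depends_on:
      - influxdb
    restart: unless-stopped

volumes:
  influxdb3-data:
```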
With this telegraf.conf:
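A minimal `telegraf.conf` collecting host metrics. The output settings assume InfluxDB 3's v2-compatible write endpoint; the URL, token, and bucket values are placeholders to verify against your setup:

```toml
[agent]
  interval = "10s"
  flush_interval = "10s"

# Telegraf's InfluxDB v2 output plugin, pointed at InfluxDB 3's
# compatibility endpoint (settings are assumptions; check the docs).
[[outputs.influxdb_v2]]
  urls = ["http://influxdb:8181"]
  token = "my-token"
  organization = ""
  bucket = "sensors"

# Basic host metrics
[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[inputs.mem]]
[[inputs.disk]]
```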
Retention and Downsampling
InfluxDB 3 Core manages retention through partition lifecycle. Configure partitions to automatically drop or compact old data:
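As a sketch, retention can be set when the database is created. The `--retention-period` flag name is an assumption here; check `influxdb3 create database --help` for the exact option in your build:

```shell
# Keep 90 days of data, after which old partitions become eligible
# for removal. NOTE: flag name is illustrative; verify it against
# your version of the influxdb3 CLI.
docker exec influxdb3 influxdb3 create database sensors \
  --retention-period 90d
```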
For downsampling, create materialized views that aggregate raw data into coarser windows:
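InfluxDB 3's SQL engine (Apache DataFusion) provides `date_bin` for time bucketing. A downsampling job, scheduled however you prefer, might run an aggregation like this and write the result back as a summary table:

```sql
-- Roll the last day of raw CPU points up into hourly summaries.
SELECT
  date_bin(INTERVAL '1 hour', time) AS hour,
  host,
  avg(usage) AS avg_usage,
  max(usage) AS max_usage
FROM cpu
WHERE time > now() - INTERVAL '1 day'
GROUP BY 1, 2
ORDER BY 1;
```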
QuestDB
QuestDB is a high-performance, column-oriented time-series database built from the ground up for fast ingestion and real-time analytics. It uses a custom storage engine designed specifically for timestamped data and supports standard SQL with PostgreSQL wire protocol compatibility.
Architecture
QuestDB’s key architectural decisions:
- Column-oriented storage: Each column is stored separately on disk, enabling fast scans of specific columns without reading irrelevant data
- Append-only, memory-mapped storage: Data is appended to memory-mapped files, giving near-zero write amplification
- Partitioning by day/month/year: Tables are automatically partitioned by time, with each partition as a separate directory
- Vectorized execution: Queries execute using SIMD instructions for CPU-level parallelism
- PostgreSQL wire protocol: Connect any PostgreSQL-compatible client or ORM directly
Installation with Docker
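A minimal launch. The `/var/lib/questdb` data path matches current official images but is worth confirming for your tag:

```shell
# Run QuestDB with the web console, PostgreSQL wire protocol,
# and InfluxDB line protocol all exposed.
docker run -d --name questdb \
  -p 9000:9000 -p 8812:8812 -p 9009:9009 \
  -v questdb-data:/var/lib/questdb \
  questdb/questdb:latest
```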
Ports exposed:
- 9000: Web console (built-in UI)
- 8812: PostgreSQL wire protocol
- 9009: InfluxDB line protocol
Create Tables and Insert Data
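An example schema using QuestDB idioms: a designated timestamp, daily partitions, and the `SYMBOL` type for low-cardinality tags. The table and column names are illustrative:

```sql
-- Designated timestamp + daily partitions; SYMBOL interns
-- repeated tag values for compact storage and fast filtering.
CREATE TABLE sensors (
  ts        TIMESTAMP,
  device_id SYMBOL,
  temp      DOUBLE,
  humidity  DOUBLE
) TIMESTAMP(ts) PARTITION BY DAY WAL;

INSERT INTO sensors VALUES
  (now(), 'dev-001', 21.5, 48.2),
  (now(), 'dev-002', 19.8, 51.0);
```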
Ingest via InfluxDB Line Protocol
QuestDB natively accepts InfluxDB line protocol on port 9009, making migration from InfluxDB straightforward:
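For example, piping a line-protocol record over TCP. The timestamp is in nanoseconds, and tables and columns are created on first write; netcat flags vary by platform (`-q0` is the GNU variant):

```shell
echo "sensors,device_id=dev-003 temp=22.1,humidity=47.5 $(date +%s%N)" \
  | nc -q0 localhost 9009
```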
Time-Series Queries
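Two queries that show QuestDB's time-series extensions, using the example `sensors` table from above:

```sql
-- Hourly averages per device over the last day (SAMPLE BY
-- is QuestDB's time-bucketing clause).
SELECT ts, device_id, avg(temp) AS avg_temp
FROM sensors
WHERE ts > dateadd('d', -1, now())
SAMPLE BY 1h;

-- Most recent row per device.
SELECT * FROM sensors
LATEST ON ts PARTITION BY device_id;
```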
Continuous Aggregations (Downsampling)
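QuestDB's open-source builds have typically handled downsampling with scheduled `INSERT ... SELECT ... SAMPLE BY` jobs (newer releases also offer materialized views; check your version's documentation). A sketch, run from cron or your application:

```sql
-- Summary table for hourly rollups.
CREATE TABLE IF NOT EXISTS sensors_1h (
  ts TIMESTAMP, device_id SYMBOL, avg_temp DOUBLE
) TIMESTAMP(ts) PARTITION BY MONTH WAL;

-- Append the last day's hourly averages.
INSERT INTO sensors_1h
SELECT ts, device_id, avg(temp)
FROM sensors
WHERE ts > dateadd('d', -1, now())
SAMPLE BY 1h;
```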
Docker Compose with Grafana
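A sketch of the pairing; image tags use `latest` for brevity, which you would pin to specific versions in production:

```yaml
services:
  questdb:
    image: questdb/questdb:latest
    ports:
      - "9000:9000"
      - "8812:8812"
      - "9009:9009"
    volumes:
      - questdb-data:/var/lib/questdb
    restart: unless-stopped

  grafana:
    image: grafana/grafana-oss:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    depends_on:
      - questdb
    restart: unless-stopped

volumes:
  questdb-data:
  grafana-data:
```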
Connect Grafana to QuestDB using the PostgreSQL data source on port 8812.
TimescaleDB
TimescaleDB is a PostgreSQL extension that transforms PostgreSQL into a time-series database while maintaining full PostgreSQL compatibility. This is its greatest strength: you get every PostgreSQL feature — joins, foreign keys, transactions, extensions like PostGIS — combined with time-series optimizations.
Architecture
TimescaleDB’s core concept is the hypertable:
- A hypertable looks like a normal PostgreSQL table to your application
- Under the hood, it is automatically partitioned into chunks by time (and optionally by a secondary key)
- Each chunk is a regular PostgreSQL table with its own indexes
- The query planner routes queries to only the relevant chunks using constraint exclusion
- Continuous aggregates provide automatic materialized views that stay in sync with raw data
Because it is a PostgreSQL extension, TimescaleDB works with every PostgreSQL client library, ORM, and tool — pgAdmin, DBeaver, Prisma, SQLAlchemy, and thousands more.
Installation with Docker
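For example, picking the `-pgXX` tag that matches the PostgreSQL major version you want:

```shell
# Run TimescaleDB (PostgreSQL 16 build) with a persistent volume.
# Replace the password before using this anywhere real.
docker run -d --name timescaledb \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD=changeme \
  -v timescale-data:/var/lib/postgresql/data \
  timescale/timescaledb:latest-pg16
```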
Enable the Extension and Create Hypertables
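Connect with `psql` and run, for example (the `metrics` schema is illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE metrics (
  time        TIMESTAMPTZ NOT NULL,
  device_id   TEXT        NOT NULL,
  temperature DOUBLE PRECISION,
  humidity    DOUBLE PRECISION
);

-- Turn the plain table into a time-partitioned hypertable.
SELECT create_hypertable('metrics', 'time');
```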
Insert and Query Data
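Standard SQL works unchanged; `time_bucket` is TimescaleDB's time-windowing helper:

```sql
INSERT INTO metrics VALUES
  (now(), 'dev-001', 21.5, 48.2),
  (now(), 'dev-002', 19.8, 51.0);

-- Hourly averages per device over the last 24 hours.
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM metrics
WHERE time > now() - INTERVAL '24 hours'
GROUP BY bucket, device_id
ORDER BY bucket;
```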
Continuous Aggregates for Downsampling
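For example, an hourly rollup of the `metrics` hypertable above, refreshed on a schedule:

```sql
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp,
       max(temperature) AS max_temp
FROM metrics
GROUP BY bucket, device_id;

-- Refresh the aggregate automatically every 30 minutes.
SELECT add_continuous_aggregate_policy('metrics_hourly',
  start_offset      => INTERVAL '3 hours',
  end_offset        => INTERVAL '30 minutes',
  schedule_interval => INTERVAL '30 minutes');
```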
Compression
TimescaleDB supports native columnar compression on chunks older than a configurable threshold:
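For example, segmenting compressed data by device and compressing chunks once they pass a week old:

```sql
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby   = 'time DESC'
);

-- Background job: compress chunks older than 7 days.
SELECT add_compression_policy('metrics', INTERVAL '7 days');
```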
Full Docker Compose Stack
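A sketch of a TimescaleDB plus Grafana stack; the password and `latest` tags are placeholders to replace in production:

```yaml
services:
  timescaledb:
    image: timescale/timescaledb:latest-pg16
    environment:
      POSTGRES_PASSWORD: changeme
    ports:
      - "5432:5432"
    volumes:
      - timescale-data:/var/lib/postgresql/data
    restart: unless-stopped

  grafana:
    image: grafana/grafana-oss:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    depends_on:
      - timescaledb
    restart: unless-stopped

volumes:
  timescale-data:
  grafana-data:
```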
Performance Benchmarks
These benchmarks reflect typical performance on a 4-core, 16 GB RAM, NVMe SSD server ingesting numeric time-series data:
Ingestion Rate (single node)
| Database | Points/sec (bulk insert) | Points/sec (single insert) |
|---|---|---|
| QuestDB | ~1,500,000 | ~250,000 |
| InfluxDB 3 Core | ~1,000,000 | ~180,000 |
| TimescaleDB | ~500,000 | ~50,000 |
Storage Efficiency (1 billion data points, 8 tags + 4 fields)
| Database | Raw size | Compressed size | Compression ratio |
|---|---|---|---|
| InfluxDB 3 Core | 85 GB | 4.2 GB | 20x |
| QuestDB | 85 GB | 5.6 GB | 15x |
| TimescaleDB | 85 GB | 8.5 GB | 10x |
Query Performance (1 billion rows, 1-hour aggregation)
| Query type | InfluxDB 3 Core | QuestDB | TimescaleDB |
|---|---|---|---|
| AVG by time window | 1.2s | 0.4s | 2.1s |
| MAX per tag group | 1.8s | 0.7s | 3.5s |
| JOIN + aggregation | N/A | 1.1s | 2.8s |
| LAST N per group | 0.9s | 0.5s | 1.9s |
| Percentile (p99) | 1.5s | 0.6s | 4.2s |
TimescaleDB benefits significantly from PostgreSQL tuning (shared_buffers, work_mem, and proper indexing). The numbers above assume default configurations. With tuning, TimescaleDB can close the gap considerably for most workloads.
Choosing the Right Database
Choose InfluxDB 3 Core when:
- You are already in the InfluxDB/Telegraf/Grafana ecosystem
- Your primary workload is metrics and IoT sensor data
- You want the best compression ratios for long-term retention
- You value the Apache Arrow + Parquet storage format for interoperability with data science tools
- Your team is comfortable with InfluxQL or the SQL dialect
Choose QuestDB when:
- Raw ingestion speed is your top priority
- You need full SQL with joins, subqueries, and window functions
- You want to analyze financial tick data, trading signals, or market data
- You need a built-in web console for quick data exploration
- You want PostgreSQL wire protocol compatibility without running PostgreSQL
Choose TimescaleDB when:
- You want the simplest migration path (it IS PostgreSQL)
- You need to combine time-series data with relational data in the same database
- You want access to the entire PostgreSQL ecosystem (PostGIS, pgvector, citext, etc.)
- Your application already uses PostgreSQL and an ORM
- You need distributed clustering via Citus extension
- You require full ACID transactions across time-series and regular tables
Resource Requirements
InfluxDB 3 Core
- Minimum: 512 MB RAM, 1 CPU, 10 GB disk
- Recommended: 4 GB RAM, 2 CPUs, 100 GB NVMe SSD
- Scaling: Single-node only in OSS; scale by sharding at the application level
QuestDB
- Minimum: 1 GB RAM, 2 CPUs, 20 GB disk
- Recommended: 8 GB RAM, 4 CPUs, 200 GB NVMe SSD
- Scaling: Single-node only in OSS; ILP protocol allows multiple writers
TimescaleDB
- Minimum: 1 GB RAM, 2 CPUs, 20 GB disk
- Recommended: 8 GB RAM, 4 CPUs, 200 GB NVMe SSD (tuned PostgreSQL settings)
- Scaling: Single-node, or distributed via the separate Citus extension; read replicas via PostgreSQL streaming replication
Backup and Restore
InfluxDB 3 Core
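Because InfluxDB 3 Core persists data as Parquet files in its data directory, a file-level copy is a workable backup; stopping the container first gives a consistent snapshot. The volume path below is Docker's default and may differ on your host:

```shell
docker stop influxdb3
tar czf influxdb3-backup-$(date +%F).tar.gz \
  -C /var/lib/docker/volumes/influxdb3-data/_data .
docker start influxdb3
```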
QuestDB
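QuestDB supports hot backups by bracketing a file-level copy with snapshot statements. Statement names have changed across versions (recent releases use `CHECKPOINT CREATE` / `CHECKPOINT RELEASE`; older ones `SNAPSHOT PREPARE` / `SNAPSHOT COMPLETE`), so confirm the syntax for your build:

```sql
SNAPSHOT PREPARE;
-- ...copy the QuestDB data directory with tar or rsync here...
SNAPSHOT COMPLETE;
```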
TimescaleDB
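Standard PostgreSQL tooling applies; TimescaleDB adds helper functions that must wrap the restore:

```shell
# Logical backup in pg_dump's custom format.
docker exec timescaledb pg_dump -U postgres -Fc -f /tmp/tsdb.dump postgres
docker cp timescaledb:/tmp/tsdb.dump ./tsdb.dump

# On restore, wrap pg_restore with TimescaleDB's helpers:
#   SELECT timescaledb_pre_restore();
#   pg_restore -U postgres -d postgres ./tsdb.dump
#   SELECT timescaledb_post_restore();
```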
Monitoring Your Time-Series Database
Regardless of which database you choose, you should monitor the database itself:
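A few starting points, assuming the container names used in this guide (the QuestDB metrics port and its enabling setting vary by version, so treat that line as an assumption):

```shell
# Container-level resource usage (CPU, memory, I/O) for any of the three.
docker stats --no-stream influxdb3 questdb timescaledb

# QuestDB can expose Prometheus metrics when enabled in server.conf.
curl -s http://localhost:9003/metrics | head

# TimescaleDB: chunk counts per hypertable, via psql.
docker exec timescaledb psql -U postgres \
  -c "SELECT hypertable_name, count(*) FROM timescaledb_information.chunks GROUP BY 1;"
```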
Key metrics to watch:
- Disk I/O: time-series databases are I/O heavy; monitor IOPS and latency
- RAM utilization: InfluxDB and QuestDB use memory-mapped files; ensure enough RAM for hot data
- WAL (Write-Ahead Log) size: Growing WAL indicates writes outpacing flushes
- Chunk/partition count: Too many small chunks can degrade query performance
- Connection pool utilization: TimescaleDB inherits PostgreSQL connection limits; use pgbouncer for high-concurrency workloads
Migration Considerations
If you are moving from a cloud-hosted service to a self-hosted database:
- Export data in Parquet or CSV: These formats preserve schema and are universally importable
- Match partition granularity: If your source data is partitioned by day, create the same partition scheme in the target
- Recreate indexes and continuous aggregates: These do not transfer with raw data exports
- Test query compatibility: SQL dialects differ; verify critical queries work before cutover
- Set up replication first: Run both systems in parallel, validate data consistency, then switch traffic
For InfluxDB Cloud → InfluxDB 3 Core migrations, the influxctl CLI tool handles bulk export and import. For PostgreSQL-based migrations, pg_dump/pg_restore work seamlessly with TimescaleDB. QuestDB supports importing CSV files directly through its web console or REST API.
Frequently Asked Questions (FAQ)
Which one should I choose in 2026?
The best choice depends on your specific requirements:
- For beginners: TimescaleDB is the gentlest entry point if you already know PostgreSQL; QuestDB's built-in web console makes first experiments easy
- For production metrics and IoT: InfluxDB 3 Core paired with Telegraf is a well-trodden path
- For mixed relational and time-series workloads: TimescaleDB keeps everything in one PostgreSQL instance
- For maximum ingestion speed and real-time analytics: QuestDB leads the benchmarks above
Refer to the comparison table above for detailed feature breakdowns.
Can I migrate between these tools?
Most tools support data import/export. Always:
- Backup your current data
- Test the migration on a staging environment
- Check official migration guides in the documentation
Are there free versions available?
All tools in this guide offer free, open-source editions. Some also provide paid plans with additional features, priority support, or managed hosting.
How do I get started?
- Review the comparison table to identify your requirements
- Visit the official documentation (links provided above)
- Start with a Docker Compose setup for easy testing
- Join the community forums for troubleshooting