
Time-Series Databases: InfluxDB, TimescaleDB, QuestDB, and Prometheus

Data · 2026-02-09 · 6 min read · timeseries · influxdb · timescaledb · prometheus · metrics


Time-series data -- metrics, sensor readings, financial ticks, event logs -- is the fastest-growing data category in most organizations. Regular databases can store it, but they struggle with the write throughput, compression ratios, and time-range queries that time-series workloads demand. Purpose-built time-series databases solve these problems with specialized storage engines, automatic data lifecycle management, and query optimizations for temporal patterns.

This guide compares four leading options: InfluxDB (purpose-built TSDB), TimescaleDB (PostgreSQL extension), QuestDB (high-performance columnar), and Prometheus (monitoring-focused). Each makes different tradeoffs that matter for your use case.

Quick Comparison

| Feature | InfluxDB 3.0 | TimescaleDB | QuestDB | Prometheus |
|---|---|---|---|---|
| Architecture | Columnar (Apache Arrow) | PostgreSQL extension | Columnar (custom) | Custom append-only |
| Query language | SQL + InfluxQL | SQL (full PostgreSQL) | SQL (PostgreSQL wire) | PromQL |
| Write protocol | Line protocol, HTTP | SQL INSERT, COPY | ILP, HTTP, PostgreSQL | Remote write, scrape |
| Compression | Excellent | Good | Excellent | Good |
| Cardinality handling | Good (v3) | Excellent | Good | Poor (high cardinality kills it) |
| Joins | Limited | Full SQL joins | Limited | No |
| Ecosystem | Telegraf, Grafana | pgAdmin, any PG tool | Grafana, any PG client | Grafana, Alertmanager |
| Best for | Metrics, IoT | Analytics on time + relational | High-throughput ingest | Infrastructure monitoring |

InfluxDB: The Purpose-Built TSDB

InfluxDB is the most widely deployed time-series database. Version 3.0 (built on Apache Arrow and DataFusion) is a major rewrite that brings SQL support and dramatically better performance.

Data Model

InfluxDB organizes data into measurements (like tables), with tags (indexed metadata) and fields (values).

# Line protocol format
# measurement,tag1=val1,tag2=val2 field1=value1,field2=value2 timestamp

cpu,host=server01,region=us-east usage_idle=82.5,usage_system=12.3 1706400000000000000
cpu,host=server02,region=eu-west usage_idle=71.2,usage_system=18.7 1706400000000000000
temperature,sensor=warehouse-a,floor=1 value=22.4 1706400000000000000

Writing Data

# Write via HTTP API
curl -X POST "http://localhost:8086/api/v2/write?bucket=metrics&precision=ns" \
  -H "Authorization: Token ${INFLUX_TOKEN}" \
  -H "Content-Type: text/plain" \
  --data-raw "cpu,host=server01 usage_idle=82.5,usage_system=12.3"
// Using the Node.js client
import { InfluxDB, Point } from '@influxdata/influxdb-client';

const influx = new InfluxDB({ url: 'http://localhost:8086', token: process.env.INFLUX_TOKEN });
const writeApi = influx.getWriteApi('myorg', 'metrics', 'ns');

const point = new Point('cpu')
  .tag('host', 'server01')
  .tag('region', 'us-east')
  .floatField('usage_idle', 82.5)
  .floatField('usage_system', 12.3);

writeApi.writePoint(point);
await writeApi.close();

Querying

-- InfluxDB 3.0 supports SQL
SELECT host, avg(usage_idle) AS avg_idle
FROM cpu
WHERE time >= now() - INTERVAL '1 hour'
GROUP BY host
ORDER BY avg_idle ASC;

-- Downsampling: 5-minute averages over the last day
SELECT
  date_bin(INTERVAL '5 minutes', time) AS bucket,
  host,
  avg(usage_idle) AS avg_idle,
  max(usage_system) AS peak_system
FROM cpu
WHERE time >= now() - INTERVAL '1 day'
GROUP BY bucket, host
ORDER BY bucket;
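
From application code, the v3 Node.js client can run the same SQL. A minimal sketch, assuming the @influxdata/influxdb3-client package and a database named metrics:

// Query InfluxDB 3.0 with SQL from Node.js (sketch -- assumes @influxdata/influxdb3-client)
import { InfluxDBClient } from '@influxdata/influxdb3-client';

const client = new InfluxDBClient({
  host: 'http://localhost:8086',
  token: process.env.INFLUX_TOKEN,
  database: 'metrics',
});

// query() returns an async iterator of row objects
const rows = client.query(
  `SELECT host, avg(usage_idle) AS avg_idle
   FROM cpu
   WHERE time >= now() - INTERVAL '1 hour'
   GROUP BY host`
);

for await (const row of rows) {
  console.log(row.host, row.avg_idle);
}

await client.close();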

When to Use InfluxDB

Use InfluxDB when you have a pure time-series workload (metrics, IoT sensors, application telemetry) and do not need to join time-series data with relational data. It excels at high write throughput, automatic retention policies, and time-based aggregations.

TimescaleDB: PostgreSQL for Time-Series

TimescaleDB is a PostgreSQL extension, which means you get full SQL, joins, transactions, and the entire PostgreSQL ecosystem. Your time-series data lives alongside your relational data in the same database.

Setup

-- Install the extension (after adding the TimescaleDB repo)
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Create a regular table
CREATE TABLE sensor_data (
  time        TIMESTAMPTZ NOT NULL,
  sensor_id   TEXT NOT NULL,
  location    TEXT,
  temperature DOUBLE PRECISION,
  humidity    DOUBLE PRECISION
);

-- Convert it to a hypertable (this is the magic)
SELECT create_hypertable('sensor_data', by_range('time'));

-- Enable compression, then add a policy to compress chunks older than 7 days
ALTER TABLE sensor_data SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'sensor_id'
);
SELECT add_compression_policy('sensor_data', INTERVAL '7 days');

-- Add a retention policy (drop data older than 1 year)
SELECT add_retention_policy('sensor_data', INTERVAL '1 year');

Querying with Time Buckets

TimescaleDB adds time-series functions on top of standard PostgreSQL.

-- Average temperature per hour, per sensor
SELECT
  time_bucket('1 hour', time) AS bucket,
  sensor_id,
  avg(temperature) AS avg_temp,
  min(temperature) AS min_temp,
  max(temperature) AS max_temp
FROM sensor_data
WHERE time > now() - INTERVAL '24 hours'
GROUP BY bucket, sensor_id
ORDER BY bucket DESC;

-- Join time-series data with relational data
SELECT
  s.sensor_id,
  m.building_name,
  m.floor_number,
  time_bucket('1 hour', s.time) AS bucket,
  avg(s.temperature) AS avg_temp
FROM sensor_data s
JOIN sensor_metadata m ON s.sensor_id = m.sensor_id
WHERE s.time > now() - INTERVAL '7 days'
GROUP BY s.sensor_id, m.building_name, m.floor_number, bucket
ORDER BY bucket DESC;

-- Continuous aggregates (materialized views that auto-refresh)
CREATE MATERIALIZED VIEW hourly_temps
WITH (timescaledb.continuous) AS
SELECT
  time_bucket('1 hour', time) AS bucket,
  sensor_id,
  avg(temperature) AS avg_temp,
  count(*) AS readings
FROM sensor_data
GROUP BY bucket, sensor_id;

SELECT add_continuous_aggregate_policy('hourly_temps',
  start_offset => INTERVAL '3 hours',
  end_offset => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour'
);

When to Use TimescaleDB

Use TimescaleDB when you need to correlate time-series data with relational data, when your team already knows PostgreSQL, or when you want a single database for both OLTP and time-series workloads. The continuous aggregates feature is particularly powerful for dashboards.
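
Because a hypertable is still just a PostgreSQL table, reading a continuous aggregate from application code needs nothing special. A short sketch with the node-postgres (pg) driver against the hourly_temps view defined above; the connection string and sensor id are placeholders:

// Read the hourly_temps continuous aggregate with node-postgres (sketch)
import pg from 'pg';

const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

// Last 24 hours of hourly averages for one sensor -- plain SQL, no special client needed
const { rows } = await pool.query(
  `SELECT bucket, avg_temp, readings
   FROM hourly_temps
   WHERE sensor_id = $1 AND bucket > now() - INTERVAL '24 hours'
   ORDER BY bucket DESC`,
  ['warehouse-a-01']
);

console.log(rows);
await pool.end();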

QuestDB: Maximum Ingest Performance

QuestDB is a columnar time-series database written in Java and C++, optimized for raw ingestion speed. It can ingest millions of rows per second on a single machine and supports the PostgreSQL wire protocol for querying.

Setup and Ingestion

# Docker
docker run -p 9000:9000 -p 9009:9009 -p 8812:8812 questdb/questdb

# Port 9000: Web console and REST API
# Port 9009: InfluxDB line protocol (ILP) for high-speed writes
# Port 8812: PostgreSQL wire protocol for SQL queries
// High-speed ingestion using ILP over TCP
import net from 'net';

const socket = net.createConnection(9009, 'localhost');

// Write 100k rows -- ILP over TCP is fire-and-forget, so this loop is extremely fast
for (let i = 0; i < 100_000; i++) {
  // measurement,tag field=value,... timestamp (nanoseconds)
  const line = `sensors,sensor_id=sensor-${i % 1000} temperature=${20 + Math.random() * 10},humidity=${40 + Math.random() * 30} ${Date.now()}000000\n`;
  socket.write(line);
}

socket.end();
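
The raw socket above has no backpressure handling or error reporting. QuestDB also ships an official Node.js client (@questdb/nodejs-client) whose Sender buffers rows and flushes them for you; a rough sketch, assuming its current ingestion API:

// Ingest via the official QuestDB Node.js client (sketch -- assumes @questdb/nodejs-client)
import { Sender } from '@questdb/nodejs-client';

// ILP over HTTP on port 9000 ('tcp::addr=localhost:9009;' selects TCP instead)
const sender = Sender.fromConfig('http::addr=localhost:9000;');

for (let i = 0; i < 100_000; i++) {
  await sender
    .table('sensors')
    .symbol('sensor_id', `sensor-${i % 1000}`)
    .floatColumn('temperature', 20 + Math.random() * 10)
    .floatColumn('humidity', 40 + Math.random() * 30)
    .at(Date.now(), 'ms'); // designated timestamp
}

await sender.flush();
await sender.close();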

Querying

-- QuestDB uses SQL with time-series extensions
SELECT
  sensor_id,
  avg(temperature),
  min(temperature),
  max(temperature)
FROM sensors
WHERE timestamp IN '2026-02-09'
SAMPLE BY 1h
ALIGN TO CALENDAR;

-- ASOF JOIN: match each trade with the most recent quote
SELECT trades.timestamp, trades.symbol, trades.price, quotes.bid, quotes.ask
FROM trades
ASOF JOIN quotes ON (symbol);

-- LATEST ON: get the last reading per sensor
SELECT * FROM sensors
LATEST ON timestamp PARTITION BY sensor_id;

When to Use QuestDB

Use QuestDB when ingestion throughput is your primary concern -- financial tick data, high-frequency IoT sensors, or real-time analytics where you are writing millions of rows per second. The ASOF JOIN is particularly valuable for financial data where you need to correlate events that happen at slightly different times.
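
Because QuestDB speaks the PostgreSQL wire protocol on port 8812, any PostgreSQL driver can run these queries. A small sketch with node-postgres, assuming QuestDB's default local credentials (admin / quest, database qdb):

// Query QuestDB over the PostgreSQL wire protocol (sketch -- default local credentials assumed)
import pg from 'pg';

const client = new pg.Client({
  host: 'localhost',
  port: 8812,
  user: 'admin',
  password: 'quest',
  database: 'qdb',
});

await client.connect();

// Latest reading per sensor, using QuestDB's LATEST ON extension
const { rows } = await client.query(
  'SELECT * FROM sensors LATEST ON timestamp PARTITION BY sensor_id'
);

console.log(rows);
await client.end();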

Prometheus: Monitoring-First

Prometheus is not a general-purpose TSDB. It is a monitoring system that happens to include a time-series database. Its pull-based architecture (Prometheus scrapes your services) and PromQL query language are designed specifically for infrastructure and application monitoring.

Configuration

# prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  - "alerts.yml"

scrape_configs:
  - job_name: 'api-servers'
    static_configs:
      - targets: ['api-1:9090', 'api-2:9090', 'api-3:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-1:9100', 'node-2:9100']

  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true

Instrumenting Your Application

// Using prom-client in an Express app
import express from 'express';
import { Registry, Counter, Histogram, Gauge, collectDefaultMetrics } from 'prom-client';

const app = express();
const register = new Registry();
collectDefaultMetrics({ register });

const httpRequestDuration = new Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status_code'],
  buckets: [0.01, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10],
  registers: [register],
});

const activeConnections = new Gauge({
  name: 'active_connections',
  help: 'Number of active connections',
  registers: [register],
});

// In your HTTP handler
app.use((req, res, next) => {
  const end = httpRequestDuration.startTimer();
  activeConnections.inc();
  res.on('finish', () => {
    end({ method: req.method, route: req.route?.path || 'unknown', status_code: res.statusCode });
    activeConnections.dec();
  });
  next();
});

// Expose /metrics endpoint
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', register.contentType);
  res.end(await register.metrics());
});

PromQL Queries

# Request rate per second over the last 5 minutes
rate(http_request_duration_seconds_count[5m])

# 99th percentile latency
histogram_quantile(0.99, rate(http_request_duration_seconds_bucket[5m]))

# Error rate (5xx responses)
sum(rate(http_request_duration_seconds_count{status_code=~"5.."}[5m]))
/ sum(rate(http_request_duration_seconds_count[5m]))

# Alert: high error rate
# In alerts.yml
groups:
  - name: api
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_request_duration_seconds_count{status_code=~"5.."}[5m]))
          / sum(rate(http_request_duration_seconds_count[5m]))
          > 0.05
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Error rate above 5% for 5 minutes"

When to Use Prometheus

Use Prometheus for infrastructure and application monitoring. It is not designed for high-cardinality data (tracking per-user metrics will kill performance), long-term storage (use Thanos or Cortex for that), or general-purpose time-series analytics. But for monitoring, alerting, and Grafana dashboards, it is the standard.

Choosing the Right Tool

You want to monitor infrastructure and services? Prometheus. It is the industry standard, integrates with everything, and PromQL is purpose-built for monitoring queries.

You have high-cardinality time-series data alongside relational data? TimescaleDB. Full SQL, joins, and the PostgreSQL ecosystem. One database for everything.

You have a pure metrics/IoT workload with extreme write volume? QuestDB for maximum ingest speed, or InfluxDB for a mature ecosystem with Telegraf collectors.

You are building a financial data platform? QuestDB. Its ASOF JOIN and high-throughput columnar storage are designed for exactly this.

The biggest mistake teams make is choosing Prometheus for things it was not built for. Prometheus is a monitoring system, not a data warehouse. For analytics, use a real TSDB. For monitoring, use Prometheus. Do not try to make one tool do both jobs.