Docker Logging and Monitoring — Log Drivers, Prometheus, cAdvisor

Docker Logging Basics

The stdout/stderr output from Docker containers is collected through log drivers. The default log driver is json-file, which stores logs as JSON on the host filesystem.

# View container logs
docker logs my-app
# info  Server started on port 3000
# info  Connected to database
# warn  High memory usage: 85%

# View only the last 100 lines
docker logs --tail 100 my-app

# Stream logs in real time
docker logs -f my-app

# Include timestamps
docker logs -t my-app
# 2026-03-11T14:00:00.000000000Z  info  Server started on port 3000

# Filter by time range
docker logs --since "2026-03-11T14:00:00" --until "2026-03-11T15:00:00" my-app

# Check log file location
docker inspect --format='{{.LogPath}}' my-app
# /var/lib/docker/containers/abc123.../abc123...-json.log
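On disk, each line of that json-file log is itself a JSON object with `log`, `stream`, and `time` fields, which makes it easy to post-process. A minimal parsing sketch in Node.js (the sample line is illustrative):

```javascript
// One line from a json-file container log: the raw app output lives in
// "log", the source stream in "stream", and the capture time in "time".
const line =
  '{"log":"Server started on port 3000\\n","stream":"stdout","time":"2026-03-11T14:00:00.000000000Z"}';

const entry = JSON.parse(line);
console.log(entry.stream);     // "stdout"
console.log(entry.log.trim()); // "Server started on port 3000"
```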

With default settings, log files can grow indefinitely. In production, log rotation must be configured.

Log Driver Configuration

Docker supports various log drivers.

| Driver    | Storage Location     | docker logs Support | Key Features                   |
|-----------|----------------------|---------------------|--------------------------------|
| json-file | Local JSON file      | Supported           | Default; rotation configurable |
| local     | Optimized local file | Supported           | More efficient than json-file  |
| syslog    | syslog server        | Not supported       | Send to a central log server   |
| journald  | systemd journal      | Supported           | Suited to systemd environments |
| fluentd   | Fluentd collector    | Not supported       | Integrates with the EFK stack  |
| awslogs   | CloudWatch Logs      | Not supported       | Suited to AWS environments     |

json-file Log Rotation

// /etc/docker/daemon.json — global log configuration
// Note: all log-opts values must be strings, including numbers and booleans
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5",
    "compress": "true"
  }
}

# Apply settings (affects newly created containers only)
sudo systemctl restart docker

# Per-container settings (overrides global settings)
docker run -d \
  --name api \
  --log-driver json-file \
  --log-opt max-size=100m \
  --log-opt max-file=10 \
  my-app:latest

# Log settings in Docker Compose (docker-compose.yml)
services:
  api:
    image: my-app:latest
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "5"
The local driver is more disk-efficient than json-file and has rotation configured by default.

// /etc/docker/daemon.json
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}

Structured Log Output

Outputting logs in JSON format from your application makes log analysis and searching easier.

// Node.js structured logging example (pino library)
const pino = require('pino');

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  // Output in JSON format (compatible with Docker log drivers)
  formatters: {
    level: (label) => ({ level: label }),
  },
  // Include timestamps
  timestamp: pino.stdTimeFunctions.isoTime,
});

// Usage examples
logger.info({ userId: 123, action: 'login' }, 'User logged in');
// {"level":"info","time":"2026-03-11T14:00:00.000Z","userId":123,"action":"login","msg":"User logged in"}

logger.error({ err: error, requestId: 'abc' }, 'Request processing failed');
// {"level":"error","time":"2026-03-11T14:00:01.000Z","err":{"message":"...","stack":"..."},"requestId":"abc","msg":"Request processing failed"}
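Once logs are structured, field-level filtering becomes trivial. A small sketch that filters parsed JSON lines by level (the sample lines mirror the pino output above):

```javascript
// Sample structured log lines, one JSON object per line
const lines = [
  '{"level":"info","msg":"User logged in","userId":123}',
  '{"level":"error","msg":"Request processing failed","requestId":"abc"}',
];

// Parse each line and keep only entries at the requested level
function filterByLevel(rawLines, level) {
  return rawLines
    .map((l) => JSON.parse(l))
    .filter((e) => e.level === level);
}

console.log(filterByLevel(lines, 'error'));
// [ { level: 'error', msg: 'Request processing failed', requestId: 'abc' } ]
```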

cAdvisor — Container Resource Monitoring

cAdvisor (Container Advisor) is a container monitoring tool developed by Google. It collects real-time CPU, memory, network, and disk usage for each container.

# Run cAdvisor
docker run -d \
  --name cadvisor \
  --privileged \
  -p 8080:8080 \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:ro \
  -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  -v /dev/disk/:/dev/disk:ro \
  gcr.io/cadvisor/cadvisor:latest

# Web UI: http://localhost:8080
# API metrics: http://localhost:8080/api/v1.3/docker/
# Prometheus metrics endpoint: http://localhost:8080/metrics

While cAdvisor provides its own web UI, it’s common to integrate with Prometheus for long-term data storage and visualize with Grafana.
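The /metrics endpoint serves the Prometheus text exposition format, one sample per line. A minimal parser sketch for a single sample line (assumes simple label values without embedded quotes or commas):

```javascript
// Parse one Prometheus exposition-format sample, e.g.
//   container_memory_usage_bytes{name="api"} 2.68435456e+08
function parseSample(line) {
  const m = line.match(/^(\w+)\{([^}]*)\}\s+(\S+)$/);
  if (!m) return null;
  const labels = {};
  for (const pair of m[2].split(',')) {
    const [k, v] = pair.split('=');
    labels[k] = v.replace(/"/g, '');
  }
  return { name: m[1], labels, value: Number(m[3]) };
}

const sample = parseSample('container_memory_usage_bytes{name="api"} 2.68435456e+08');
console.log(sample.labels.name, sample.value); // api 268435456
```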

Prometheus + Grafana Monitoring Stack

Set up Prometheus (metrics collection/storage) + Grafana (visualization) + cAdvisor (container metrics) with Docker Compose.

# monitoring/docker-compose.yml
# Container monitoring stack

services:
  # Prometheus — metrics collection and storage
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      # Metrics retention period: 30 days
      - '--storage.tsdb.retention.time=30d'
    networks:
      - monitoring

  # Grafana — dashboard visualization
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
    volumes:
      - grafana-data:/var/lib/grafana
    depends_on:
      - prometheus
    networks:
      - monitoring

  # cAdvisor — container metrics collection
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    privileged: true
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    networks:
      - monitoring

  # Node Exporter — host system metrics
  node-exporter:
    image: prom/node-exporter:latest
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge

volumes:
  prometheus-data:
  grafana-data:

Prometheus Configuration

# monitoring/prometheus.yml
# Prometheus scrape target configuration

global:
  # Scrape metrics every 15 seconds
  scrape_interval: 15s
  # Evaluation interval
  evaluation_interval: 15s

scrape_configs:
  # Prometheus self-monitoring metrics
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # cAdvisor container metrics
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

  # Node Exporter host metrics
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

  # Application metrics (if the app provides a /metrics endpoint)
  - job_name: 'my-app'
    static_configs:
      - targets: ['api:3000']
    metrics_path: '/metrics'
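For reference, the text format Prometheus scrapes from an app's /metrics endpoint is plain text and can be rendered by hand (in practice a client library such as prom-client is the usual choice in Node; the metric name below is illustrative):

```javascript
// Render metrics in the Prometheus text exposition format:
// a HELP line, a TYPE line, then the sample itself
function renderMetrics(metrics) {
  return metrics
    .map(
      ({ name, help, type, value }) =>
        `# HELP ${name} ${help}\n# TYPE ${name} ${type}\n${name} ${value}\n`
    )
    .join('');
}

const body = renderMetrics([
  { name: 'http_requests_total', help: 'Total HTTP requests', type: 'counter', value: 42 },
]);
console.log(body);
// # HELP http_requests_total Total HTTP requests
// # TYPE http_requests_total counter
// http_requests_total 42
```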

# Start the monitoring stack
cd monitoring
docker compose up -d

# Prometheus: http://localhost:9090
# Grafana: http://localhost:3000 (admin / your configured password)

# PromQL query for container CPU usage in Prometheus
# rate(container_cpu_usage_seconds_total{name=~".+"}[5m])

# Container memory usage query
# container_memory_usage_bytes{name=~".+"}
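The same PromQL can also be executed programmatically via Prometheus's HTTP API (GET /api/v1/query). A sketch that builds the request URL, assuming Prometheus on localhost:9090:

```javascript
// Build a Prometheus instant-query URL; the PromQL expression is
// percent-encoded automatically by URLSearchParams
function queryUrl(base, promql) {
  const u = new URL('/api/v1/query', base);
  u.searchParams.set('query', promql);
  return u.toString();
}

console.log(queryUrl('http://localhost:9090', 'up'));
// http://localhost:9090/api/v1/query?query=up
```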

Alert Configuration

Use Prometheus Alertmanager to send alerts when thresholds are exceeded.

# monitoring/alert-rules.yml
# Prometheus alert rules

groups:
  - name: container-alerts
    rules:
      # Container down alert
      - alert: ContainerDown
        # Fires when the container's metrics have been absent for 5 minutes
        expr: absent(container_last_seen{name=~"myapp_.+"})
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Container down: {{ $labels.name }}"

      # CPU usage above 80% of one core
      - alert: HighCpuUsage
        expr: rate(container_cpu_usage_seconds_total{name=~".+"}[5m]) > 0.8
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage: {{ $labels.name }} ({{ $value }})"

      # Memory usage over 90% of the limit (containers must have a memory limit set)
      - alert: HighMemoryUsage
        expr: container_memory_usage_bytes{name=~".+"} / container_spec_memory_limit_bytes{name=~".+"} > 0.9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High memory usage: {{ $labels.name }}"

      # Disk usage over 85%
      - alert: HighDiskUsage
        expr: (node_filesystem_size_bytes - node_filesystem_avail_bytes) / node_filesystem_size_bytes > 0.85
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Disk usage over 85%: {{ $labels.mountpoint }}"
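Note that Prometheus only evaluates these rules if prometheus.yml loads them, and firing alerts only reach Alertmanager if a target is configured. A sketch of the required additions (the alertmanager hostname is an assumption — the compose file above does not define an Alertmanager service):

```yaml
# monitoring/prometheus.yml — additions to load rules and route alerts
rule_files:
  - /etc/prometheus/alert-rules.yml

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']
```

The rules file must also be mounted into the Prometheus container, e.g. `- ./alert-rules.yml:/etc/prometheus/alert-rules.yml:ro` under the prometheus service's volumes.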

docker stats — Simple Real-Time Monitoring

You can check container resources using just the Docker CLI without any additional tools.

# Real-time resource monitoring
docker stats
# CONTAINER ID  NAME   CPU %  MEM USAGE / LIMIT  MEM %   NET I/O        BLOCK I/O
# abc123        api    2.50%  256MiB / 512MiB    50.00%  1.2MB / 800kB  5MB / 2MB
# def456        db     1.20%  128MiB / 1GiB      12.50%  500kB / 1MB    50MB / 20MB
# ghi789        redis  0.10%  32MiB / 256MiB     12.50%  100kB / 50kB   0B / 0B

# Specific containers only (non-streaming, current state only)
docker stats --no-stream api db

# Custom format
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"
# NAME   CPU %   MEM USAGE / LIMIT   MEM %
# api    2.50%   256MiB / 512MiB     50.00%
# db     1.20%   128MiB / 1GiB       12.50%
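The MEM % column is simply usage divided by limit. A sketch that recomputes it from the MEM USAGE / LIMIT field (only the units shown above are handled):

```javascript
// Byte multipliers for decimal (kB/MB/GB) and binary (KiB/MiB/GiB) units
const UNITS = { B: 1, kB: 1e3, MB: 1e6, GB: 1e9, KiB: 1024, MiB: 1024 ** 2, GiB: 1024 ** 3 };

// Convert a size string like "256MiB" to bytes
function toBytes(s) {
  const m = s.trim().match(/^([\d.]+)([A-Za-z]+)$/);
  return Number(m[1]) * UNITS[m[2]];
}

// Recompute MEM % from a "usage / limit" field
function memPercent(field) {
  const [usage, limit] = field.split('/').map(toBytes);
  return ((usage / limit) * 100).toFixed(2) + '%';
}

console.log(memPercent('256MiB / 512MiB')); // 50.00%
```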

Summary

  • Log rotation is mandatory: Running production without log rotation will fill up the disk and crash the service. Always set max-size and max-file in daemon.json. The local driver has built-in rotation, making it convenient.
  • Structured logging: Outputting logs in JSON format enables field-level search and filtering in log analysis tools like EFK/ELK stacks or CloudWatch. It’s more operationally efficient than plain text logs.
  • Grafana dashboards: Import community dashboards for Docker monitoring (IDs: 193, 1860, etc.) and start using them immediately without additional setup. Go to Dashboards, then Import, and enter the ID.
  • Alert channels: Integrate Alertmanager with Slack, PagerDuty, email, etc. to receive incident alerts. Monitoring without alerts means you have to keep staring at dashboards, which defeats much of the purpose.
  • Retention period and storage: Set the Prometheus metrics retention period (--storage.tsdb.retention.time) according to your service scale. 30 days is sufficient for most cases. For long-term analysis, configure remote storage with Thanos or Mimir.
