Docker Networking Guide — bridge, host, overlay

Docker Networking Overview

Docker containers run in an isolated network environment by default. Docker provides various network drivers for container-to-container communication, external network access, and service discovery.

As an analogy, each container is an independent apartment unit, and the Docker network is the communication system within the apartment complex. Only containers (units) on the same network (complex) can communicate directly.

| Driver  | Description                                  | Primary Use                               |
|---------|----------------------------------------------|-------------------------------------------|
| bridge  | Virtual bridge inside the host (default)     | Container communication on a single host  |
| host    | Uses the host network directly               | When network performance is critical      |
| none    | Disables networking                          | When complete isolation is needed         |
| overlay | Virtual network connecting multiple hosts    | Docker Swarm, multi-host                  |
| macvlan | Assigns physical MAC addresses to containers | Direct connection to the physical network |

bridge Network

When Docker is installed, a default bridge network named bridge is created, backed by the docker0 interface on the host. However, the default bridge does not support DNS-based service discovery, so it's recommended to create and use a user-defined bridge network.

# List default networks
docker network ls
# NETWORK ID     NAME      DRIVER    SCOPE
# a1b2c3d4e5f6   bridge    bridge    local
# f6e5d4c3b2a1   host      host      local
# 1234567890ab   none      null      local

# Create a user-defined bridge network
docker network create \
  --driver bridge \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  my-app-network

# Inspect network details
docker network inspect my-app-network
# [{ "Name": "my-app-network",
#    "Driver": "bridge",
#    "IPAM": { "Config": [{ "Subnet": "172.20.0.0/16", "Gateway": "172.20.0.1" }] }
# }]

With user-defined bridges, DNS lookup by container name is available.

# Run two containers on the same network
docker run -d --name web --network my-app-network nginx:alpine
docker run -d --name api --network my-app-network node:22-alpine sleep 3600

# Access the web container from the api container by name
docker exec api ping -c 3 web
# PING web (172.20.0.2): 56 data bytes
# 64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.089 ms

# Verify DNS resolution
docker exec api nslookup web
# Name:      web
# Address 1: 172.20.0.2 web.my-app-network

# Container name DNS does NOT work on the default bridge
docker run -d --name test1 nginx:alpine
docker run -it --rm alpine ping -c 1 test1
# ping: bad address 'test1'  ← fails

Here are the differences between the default bridge and user-defined bridges.

| Feature                      | Default bridge                   | User-defined bridge               |
|------------------------------|----------------------------------|-----------------------------------|
| DNS service discovery        | Not supported                    | Communicate by container name     |
| Automatic connection         | Auto-connected if not specified  | Explicit connection required      |
| Isolation level              | All containers share one network | Isolated per network              |
| Network change while running | Not possible                     | docker network connect/disconnect |
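The last row of the comparison can be tried directly. The sketch below uses made-up names (extra-net, app); any running container on a user-defined network works the same way.

```shell
# Attach a running container to a second network, then detach it
# again -- no restart required.
docker network create extra-net                  # hypothetical network name
docker run -d --name app nginx:alpine            # starts on the default bridge
docker network connect extra-net app             # hot-attach to extra-net
docker network disconnect extra-net app          # hot-detach

# Clean up
docker rm -f app
docker network rm extra-net
```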

host Network

The host network lets containers use the host's network stack directly. Because traffic skips the NAT and port-mapping layer entirely, there is no virtual-network overhead. Note that the host network driver is fully supported only on Linux hosts.

# Run with host network (no port mapping needed)
docker run -d --name web --network host nginx:alpine

# Access directly via the host's port 80
curl http://localhost:80
# <!DOCTYPE html>
# <html>
# <head><title>Welcome to nginx!</title></head>

# Check container's network interfaces (same as host)
docker exec web ip addr
# Host's eth0, lo, etc. are visible as-is

Use the host network in these cases:

  • When network performance matters (eliminates NAT overhead)
  • When the container uses many ports
  • When direct access to the host’s network configuration is needed

However, since it directly occupies the host’s ports, be careful of port conflicts, and network isolation between containers is not possible.
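The port-conflict risk is easy to reproduce. In this hypothetical sketch, two host-network containers compete for port 80; the second one's nginx cannot bind the port and the container exits.

```shell
# Both containers try to bind the host's port 80 directly.
docker run -d --name web1 --network host nginx:alpine
docker run -d --name web2 --network host nginx:alpine

# web2 exits almost immediately: port 80 is already held by web1.
docker ps -a --filter name=web2 --format '{{.Names}} {{.Status}}'

# Clean up
docker rm -f web1 web2
```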

Docker Compose Networking

Docker Compose automatically creates a network for each project. A bridge network named {project-name}_default is created, and DNS communication by service name is available.

# docker-compose.yml
# Separate frontend, backend, and DB into different networks

services:
  # Frontend: external network + backend network
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      - frontend
      - backend
    depends_on:
      - api

  # Backend API: backend network + DB network
  api:
    build: ./api
    environment:
      # Access by service name 'db' (automatic DNS resolution)
      DATABASE_URL: "postgres://user:pass@db:5432/myapp"
      REDIS_URL: "redis://cache:6379"
    networks:
      - backend
      - database

  # Database: DB network only (blocks external access)
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - database

  # Redis cache: backend network only
  cache:
    image: redis:7-alpine
    networks:
      - backend

# Network definitions
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
  database:
    driver: bridge
    # Internal-only network (no external access)
    internal: true

volumes:
  pgdata:

# Run Compose
docker compose up -d

# Check created networks
docker network ls
# NETWORK ID     NAME                  DRIVER
# ...            myproject_frontend    bridge
# ...            myproject_backend     bridge
# ...            myproject_database    bridge

# Test db access from api container
docker compose exec api ping -c 1 db
# PING db (172.22.0.3): 56 data bytes
# 64 bytes from 172.22.0.3: time=0.056 ms

# Attempt to access db from nginx (fails because they're on different networks)
docker compose exec nginx ping -c 1 db
# ping: bad address 'db'  ← isolation working

In this configuration, setting internal: true on the database network blocks external internet access, strengthening database security.

overlay Network

Overlay networks connect containers across multiple Docker hosts. Primarily used in Docker Swarm mode.

# Initialize Swarm (manager node)
docker swarm init

# Create overlay network
docker network create \
  --driver overlay \
  --attachable \
  --subnet 10.0.10.0/24 \
  my-overlay-network

# Deploy service (using overlay network)
docker service create \
  --name web \
  --network my-overlay-network \
  --replicas 3 \
  nginx:alpine

# Containers on different hosts can communicate by name
# 10.0.10.2 (web.1 on Host A)
# 10.0.10.3 (web.2 on Host B)
# 10.0.10.4 (web.3 on Host C)

Adding the --attachable option allows not only Swarm services but also regular containers (docker run) to join the overlay network.
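For example, a throwaway debugging container can be attached to the overlay created above and reach the Swarm service by name. This sketch assumes the my-overlay-network and web service from the previous commands already exist.

```shell
# A plain (non-Swarm) container joining an attachable overlay network.
# From here, the Swarm service 'web' is reachable by name.
docker run -it --rm \
  --network my-overlay-network \
  alpine ping -c 1 web
```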

Network Debugging

Commands for diagnosing network issues.

# Check container IP address
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web
# 172.20.0.2

# Test connectivity between containers
docker exec api curl -s http://web:80
# <!DOCTYPE html>...

# Check container network interface
docker exec web ip addr show eth0
# inet 172.20.0.2/16 scope global eth0

# List containers connected to a network
docker network inspect my-app-network --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'
# web 172.20.0.2/16
# api 172.20.0.3/16

# Check port mappings (for containers started with -p)
docker port web
# 80/tcp -> 0.0.0.0:8080  (example output when started with -p 8080:80)

# Test DNS resolution
docker exec api getent hosts web
# 172.20.0.2  web

Practical Tips

  • Use user-defined networks: Always create user-defined networks instead of using the default bridge network. You get DNS service discovery, better isolation, and the ability to change networks while running.
  • Strengthen security through network separation: As in the Compose example above, separating frontend, backend, and database networks prevents the DB from being directly exposed externally. Use internal: true to also block external internet access.
  • Be careful with port binding: -p 3000:3000 binds to all interfaces (0.0.0.0). If access should be local only, restrict it with -p 127.0.0.1:3000:3000.
  • Network cleanup: Clean up unused networks with docker network prune. Leftover networks from testing can accumulate and cause IP range conflicts.
  • DNS cache: Docker’s built-in DNS server (127.0.0.11) handles container name resolution. Even if a container restarts and gets a new IP, accessing it by name automatically resolves to the new IP. Avoid hardcoding IPs and always use service names.
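The built-in DNS server from the last tip can be observed from inside any container on a user-defined network. The network and container names below (dns-demo, target) are made up for the sketch.

```shell
docker network create dns-demo
docker run -d --name target --network dns-demo nginx:alpine

# resolv.conf inside the container points at Docker's embedded DNS server
docker run --rm --network dns-demo alpine cat /etc/resolv.conf
# nameserver 127.0.0.11

# That server resolves container names on the same network
docker run --rm --network dns-demo alpine getent hosts target

# Clean up
docker rm -f target
docker network rm dns-demo
```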
