Kubernetes Beginner's Guide — Core Concepts of Container Orchestration

Why You Need Kubernetes

Running a single container with Docker is simple. But in production, the questions change:

  • Who restarts a container when it dies?
  • How do you scale when traffic spikes?
  • How do you deploy new versions with zero downtime?
  • How do you manage networking across 10 services?

Kubernetes (K8s) is a container orchestration platform that automates all of this. Google open-sourced it in 2014, distilling roughly 15 years of experience running its internal cluster manager, Borg.

Core Architecture

A Kubernetes cluster is divided into the Control Plane (the brain) and Worker Nodes (the hands).

| Component | Role | Analogy |
|---|---|---|
| Control Plane | Manages the entire cluster, scheduling, and state monitoring | Control tower |
| API Server | Entry point for all requests (what kubectl communicates with) | Reception desk |
| etcd | Distributed key-value store that holds cluster state | Database |
| Scheduler | Decides which Node to place a Pod on | Dispatch manager |
| Worker Node | Server where containers actually run | Factory |
| kubelet | Agent on each Node that manages Pods | Floor supervisor |
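You can see both halves of this architecture from the command line, assuming kubectl is configured to talk to a running cluster:

```shell
# List the Nodes with their IPs, OS, and kubelet versions
kubectl get nodes -o wide

# Show where the API server is listening
kubectl cluster-info

# The control-plane components themselves run as Pods in the kube-system namespace
kubectl get pods -n kube-system
```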

Six Key Resources

1. Pod — The Smallest Deployable Unit

A Pod is a group of one or more containers. Containers within the same Pod share networking and storage.

# pod.yaml — the simplest Pod definition
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app          # Label: used by other resources to find this Pod
spec:
  containers:
    - name: app
      image: nginx:alpine
      ports:
        - containerPort: 80
      resources:
        requests:          # Minimum guaranteed resources
          memory: "64Mi"
          cpu: "100m"      # 0.1 CPU core
        limits:            # Maximum usage limit
          memory: "128Mi"
          cpu: "250m"

# Create a Pod
kubectl apply -f pod.yaml

# Check Pod status
kubectl get pods
# NAME     READY   STATUS    RESTARTS   AGE
# my-app   1/1     Running   0          30s

# View Pod details (events, IP, Node, etc.)
kubectl describe pod my-app

# Check Pod logs
kubectl logs my-app

# Delete a Pod
kubectl delete pod my-app

In practice you rarely create bare Pods directly; use a Deployment instead.
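To make "containers within the same Pod share networking" concrete, here is a minimal two-container sketch (the Pod name and the sidecar's loop are illustrative): because both containers share one network namespace, the sidecar reaches nginx over localhost.

```yaml
# sidecar-pod.yaml — two containers in one Pod (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx:alpine
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36
      # localhost works here because containers in a Pod share the network namespace
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
```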

2. Deployment — Declarative Deployment Management

A Deployment lets you declare “always maintain 3 replicas of this app,” and Kubernetes handles the rest. If a Pod dies, it’s automatically recreated, and new versions are rolled out gradually via rolling updates.

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3               # Always maintain 3 Pods
  selector:
    matchLabels:
      app: my-app            # Manage Pods with this label
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:1.0
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"

# Create/update a Deployment
kubectl apply -f deployment.yaml

# Check Deployment status
kubectl get deployments
# NAME     READY   UP-TO-DATE   AVAILABLE   AGE
# my-app   3/3     3            3           60s

# Update image (rolling deployment runs automatically)
kubectl set image deployment/my-app app=my-app:2.0

# Check rollout status
kubectl rollout status deployment/my-app
# Waiting for deployment "my-app" rollout to finish: 1 of 3 updated...
# deployment "my-app" successfully rolled out

# Rollback if there's an issue
kubectl rollout undo deployment/my-app
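Rollbacks work off the Deployment's recorded revision history; both commands below are standard kubectl (the revision number is illustrative):

```shell
# List the recorded revisions of the Deployment
kubectl rollout history deployment/my-app

# Roll back to a specific revision instead of just the previous one
kubectl rollout undo deployment/my-app --to-revision=1
```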

3. Service — Network Access Point

Pod IPs change each time they’re created. A Service provides a fixed address in front of multiple Pods and distributes traffic.

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  type: ClusterIP            # Accessible only within the cluster (default)
  selector:
    app: my-app              # Route traffic to Pods with this label
  ports:
    - port: 80               # Service port
      targetPort: 3000       # Container port on the Pod

Three types of Services:

| Type | Access Scope | Use Case |
|---|---|---|
| ClusterIP | Internal cluster only | Communication between internal services (default) |
| NodePort | External access via Node IP:Port | Development/testing |
| LoadBalancer | Automatically creates a cloud load balancer | Production external exposure |
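As a sketch of a non-default type, exposing the same Pods on a fixed port of every Node only requires a different type (the nodePort value here is illustrative and must fall in the default 30000-32767 range):

```yaml
# nodeport-service.yaml — illustrative NodePort variant
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80           # Service port inside the cluster
      targetPort: 3000   # Container port on the Pod
      nodePort: 30080    # Exposed on every Node (default range 30000-32767)
```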

4. ConfigMap and Secret — Separating Configuration

Separate code and configuration to reuse the same image across dev/staging/prod.

# configmap.yaml — general configuration values
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENV: "production"
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
---
# secret.yaml — sensitive information (base64-encoded, not encrypted)
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQxMjM=    # echo -n "password123" | base64
  API_KEY: bXlzZWNyZXRrZXk=        # echo -n "mysecretkey" | base64

How to reference them in a Deployment:

# Add to deployment
spec:
  containers:
    - name: app
      envFrom:
        - configMapRef:
            name: app-config       # All ConfigMap keys as environment variables
        - secretRef:
            name: app-secret       # All Secret keys as environment variables
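Instead of hand-encoding base64, you can let kubectl create both objects from literals; these are standard kubectl subcommands, using the same names and values as the YAML above:

```shell
# Create the ConfigMap from literal key-value pairs
kubectl create configmap app-config \
  --from-literal=APP_ENV=production \
  --from-literal=LOG_LEVEL=info \
  --from-literal=MAX_CONNECTIONS=100

# Create the Secret — kubectl performs the base64 encoding for you
kubectl create secret generic app-secret \
  --from-literal=DB_PASSWORD=password123 \
  --from-literal=API_KEY=mysecretkey
```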

5. Ingress — HTTP Routing

Routes external HTTP requests to internal Services based on domain/path. You can expose multiple services through a single IP.

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-svc
                port:
                  number: 80
    - host: admin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-svc
                port:
                  number: 80
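One caveat: an Ingress resource is only a routing rule and does nothing by itself until an ingress controller is running in the cluster. One way to get one locally (assuming minikube) is:

```shell
# Enable the NGINX ingress controller addon in minikube
minikube addons enable ingress

# Verify the Ingress has an address and inspect its rules
kubectl get ingress my-ingress
kubectl describe ingress my-ingress
```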

6. Namespace — Environment Separation

Logically separates resources within a single cluster.

# Create Namespaces
kubectl create namespace staging
kubectl create namespace production

# Deploy to a specific Namespace
kubectl apply -f deployment.yaml -n staging

# View resources by Namespace
kubectl get pods -n staging
kubectl get pods -n production
kubectl get pods --all-namespaces    # All namespaces
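Typing -n on every command gets old; a standard kubectl config subcommand changes the default namespace of your current context:

```shell
# Make staging the default namespace for the current context
kubectl config set-context --current --namespace=staging

# Subsequent commands now target staging by default
kubectl get pods
```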

Frequently Used kubectl Commands

| Command | Description |
|---|---|
| kubectl get pods -o wide | Pod list + Node/IP info |
| kubectl logs -f pod-name | Stream logs in real time |
| kubectl exec -it pod-name -- sh | Open a shell inside a Pod |
| kubectl port-forward svc/my-app 8080:80 | Access a Service locally (for debugging) |
| kubectl top pods | Pod CPU/memory usage |
| kubectl get events --sort-by=.lastTimestamp | Recent events (for troubleshooting) |
| kubectl diff -f deployment.yaml | Preview changes before applying |
| kubectl scale deployment/my-app --replicas=5 | Manual scaling |

Local Development Environment

You can practice Kubernetes locally without a production cluster.

| Tool | Features | Installation |
|---|---|---|
| Docker Desktop | Built-in K8s (enable in Settings) | Install Docker Desktop, then Enable Kubernetes |
| minikube | Single-node cluster, most widely used | brew install minikube |
| kind | Runs K8s inside Docker containers, great for CI/CD | brew install kind |

# Start minikube
minikube start

# Open dashboard (web UI)
minikube dashboard

# Check status
kubectl cluster-info

Summary

| Concept | Key Point | Comparison with Docker Compose |
|---|---|---|
| Pod | Container execution unit | A single entry under services: |
| Deployment | Replication, rolling updates, auto-recovery | deploy.replicas (Swarm) |
| Service | Fixed address + load balancing | ports: + internal DNS |
| ConfigMap/Secret | Configuration/secret separation | environment: / .env |
| Ingress | Domain-based HTTP routing | Nginx reverse proxy |
| Namespace | Environment isolation | Separate compose files |

Docker Compose is a tool for conveniently managing multiple containers on a single server, while Kubernetes is a platform for automatically managing hundreds to thousands of containers across multiple servers. Docker Compose is sufficient for small-scale services, but consider Kubernetes when you need high availability and auto-scaling.
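For the Compose side of the table, a rough single-host equivalent of the Deployment + Service pair above might look like this (a sketch; the ports and image mirror the earlier manifests):

```yaml
# docker-compose.yml — rough single-host equivalent (illustrative)
services:
  my-app:
    image: my-app:1.0
    ports:
      - "80:3000"       # Like Service port -> targetPort
    env_file: .env       # Like ConfigMap/Secret
    deploy:
      replicas: 3        # Like spec.replicas (honored in Swarm mode)
    restart: always      # Like a Deployment's automatic Pod recreation
```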
