Checking Disk Usage
Running out of disk space is one of the most common causes of service outages in server operations. Use the df and du commands to monitor disk status.
df — Filesystem Usage
df (disk free) shows the overall usage of mounted filesystems.
# Output in human-readable format
df -h
# Filesystem Size Used Avail Use% Mounted on
# /dev/sda1 50G 32G 16G 67% /
# /dev/sdb1 200G 150G 40G 79% /data
# tmpfs 3.9G 0 3.9G 0% /dev/shm
# Filesystem info for a specific path
df -h /data
# Filesystem Size Used Avail Use% Mounted on
# /dev/sdb1 200G 150G 40G 79% /data
# Check inode usage (inodes can be exhausted before space with many small files)
df -i
# Filesystem Inodes IUsed IFree IUse% Mounted on
# /dev/sda1 3276800 245678 3031122 8% /
# Show filesystem type
df -Th
# Filesystem Type Size Used Avail Use% Mounted on
# /dev/sda1 ext4 50G 32G 16G 67% /
# /dev/sdb1 xfs 200G 150G 40G 79% /data
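When df -i shows inode pressure, the culprit is usually a single directory holding a huge number of small files. The sketch below ranks directories by entry count (each entry costs an inode); `top_inode_dirs` is a made-up helper name, not a standard tool, and the scan can be slow on very large trees.

```shell
# Rank the 10 directories with the most entries under a path.
# top_inode_dirs is a hypothetical helper, not a standard command.
top_inode_dirs() {
    find "$1" -xdev -type d 2>/dev/null | while IFS= read -r dir; do
        printf '%s %s\n' "$(find "$dir" -maxdepth 1 -mindepth 1 2>/dev/null | wc -l)" "$dir"
    done | sort -rn | head -10
}

top_inode_dirs /var
```

Interactive tools such as ncdu can give a similar per-directory view without scripting.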
du — Per-Directory Usage
du (disk usage) shows the actual usage of directories and files.
# Total usage of current directory
du -sh /opt/my-app/
# 2.3G /opt/my-app/
# Per-subdirectory usage (1 level deep)
du -h --max-depth=1 /opt/
# 2.3G /opt/my-app
# 500M /opt/backups
# 150M /opt/scripts
# 2.9G /opt/
# Sort by size (largest first)
du -h --max-depth=1 /opt/ | sort -rh
# 2.9G /opt/
# 2.3G /opt/my-app
# 500M /opt/backups
# 150M /opt/scripts
# Find large files (over 100MB)
find / -type f -size +100M -exec ls -lh {} \; 2>/dev/null
# -rw-r--r-- 1 root root 250M Feb 10 backup-old.tar.gz
# -rw-r--r-- 1 deploy deploy 120M Mar 01 access.log.1
# Top 10 largest directories under a specific path
du -h --max-depth=2 /var/ 2>/dev/null | sort -rh | head -10
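As an alternative to running ls once per match, GNU find can print size and path directly, which is faster to sort on big trees. `large_files` is a made-up helper name for this sketch; paths containing spaces would need -print0 handling, omitted here for brevity.

```shell
# List the 10 largest files over 100MB under a path (GNU find required).
# large_files is a hypothetical helper name.
large_files() {
    find "$1" -xdev -type f -size +100M -printf '%s %p\n' 2>/dev/null \
        | sort -rn | head -10 \
        | awk '{printf "%.0fM %s\n", $1 / (1024 * 1024), $2}'
}

large_files /var
```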
df reports at the filesystem level, while du reports at the directory level. The numbers from df and du may differ — if a deleted file is still held open by a process, df still shows it as in use. Restarting that process reclaims the space.
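A quick way to observe this effect, and to hunt down space held by deleted files, is through /proc, where every open file descriptor appears as a symlink. The demo below assumes Linux:

```shell
# Demonstrate a deleted-but-still-open file: the kernel keeps the blocks
# allocated until the last file descriptor is closed.
tmpfile=$(mktemp)
exec 3>"$tmpfile"        # hold the file open on fd 3
rm -f "$tmpfile"         # unlink it; df still counts the space
readlink "/proc/$$/fd/3" # the link target now ends in " (deleted)"

# System-wide hunt (run as root to see every process):
# ls -l /proc/[0-9]*/fd 2>/dev/null | grep '(deleted)'

exec 3>&-                # closing the descriptor releases the space
```

If lsof is installed, `lsof +L1` (files with a link count below 1) reports the same thing.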
Partition Management
The process of dividing a disk into partitions and creating filesystems.
# List disks
lsblk
# NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
# sda 8:0 0 50G 0 disk
# └─sda1 8:1 0 50G 0 part /
# sdb 8:16 0 200G 0 disk
# └─sdb1 8:17 0 200G 0 part /data
# sdc 8:32 0 500G 0 disk ← new disk (unused)
# Disk details
sudo fdisk -l /dev/sdc
# Disk /dev/sdc: 500 GiB
# === Create partition on new disk (GPT) ===
# Create partition with parted (script-friendly; on GPT, "primary" is just
# the partition's name, not a partition type)
sudo parted /dev/sdc --script mklabel gpt
sudo parted /dev/sdc --script mkpart primary ext4 0% 100%
# Create filesystem
sudo mkfs.ext4 /dev/sdc1
# mke2fs 1.46.5
# Creating filesystem with 131071744 4k blocks
# Filesystem UUID: a1b2c3d4-e5f6-7890-abcd-ef1234567890
# Create mount point and mount
sudo mkdir -p /mnt/storage
sudo mount /dev/sdc1 /mnt/storage
# Verify mount
df -h /mnt/storage
# Filesystem Size Used Avail Use% Mounted on
# /dev/sdc1 492G 73M 467G 1% /mnt/storage
fstab — Persistent Mount Configuration
Registering an entry in /etc/fstab enables automatic mounting at boot. Device names like /dev/sdc1 can change across reboots or when disks are added, so using the UUID is safer.
# Check partition UUID
sudo blkid /dev/sdc1
# /dev/sdc1: UUID="a1b2c3d4-e5f6-7890-abcd-ef1234567890" TYPE="ext4"
# Add to fstab (backup first!)
sudo cp /etc/fstab /etc/fstab.backup
# /etc/fstab format:
# device(UUID) mount-point filesystem options dump fsck-order
UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890 /mnt/storage ext4 defaults,noatime 0 2
# Test fstab changes (without reboot)
sudo mount -a
# Verify mount
mount | grep storage
# /dev/sdc1 on /mnt/storage type ext4 (rw,noatime)
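mount -a catches many mistakes, but a malformed line can still slip past a quick visual check. A cheap pre-check is that every non-comment fstab entry has exactly six fields; `check_fstab_fields` is a made-up name for this sketch. (`findmnt --verify`, available in util-linux 2.29+, performs a much more thorough audit.)

```shell
# Report fstab lines that do not have exactly 6 whitespace-separated fields.
# check_fstab_fields is a hypothetical helper name.
check_fstab_fields() {
    awk '
        NF > 0 && $1 !~ /^#/ && NF != 6 {
            printf "line %d: expected 6 fields, got %d: %s\n", NR, NF, $0
            bad = 1
        }
        END { exit bad }
    ' "$1"
}

check_fstab_fields /etc/fstab || echo "malformed fstab entries found"
```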
Key mount options:
| Option | Description |
|---|---|
| defaults | Default combination of rw, suid, dev, exec, auto, nouser, async |
| noatime | Disable access time updates (improves performance) |
| nofail | Continue booting even if mount fails (useful for external disks) |
| ro | Read-only |
| noexec | Prevent execution of binaries (enhanced security) |
fstab configuration errors can cause boot failures. Always test with mount -a after changes. Adding the nofail option prevents boot interruption even if mounting fails.
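For disks that may be absent at boot (external drives, hot-swap bays), an entry might look like the sketch below. The UUID is purely illustrative; on systemd systems, x-systemd.device-timeout shortens how long boot waits for the device to appear.

```
# /etc/fstab — external disk that must not block booting (UUID is made up)
UUID=b2c3d4e5-f6a7-8901-bcde-f23456789012 /mnt/external ext4 defaults,nofail,x-systemd.device-timeout=10 0 2
```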
LVM — Logical Volume Management
LVM (Logical Volume Manager) is an abstraction layer for flexible disk management. It enables live expansion of existing filesystems, making it essential for server operations.
LVM has a 3-layer structure.
| Layer | Abbreviation | Description |
|---|---|---|
| Physical Volume | PV | Physical disk or partition |
| Volume Group | VG | Storage pool combining PVs |
| Logical Volume | LV | Logical partition allocated from a VG |
# === LVM Setup Steps ===
# 1. Create Physical Volumes
sudo pvcreate /dev/sdc /dev/sdd
# Physical volume "/dev/sdc" successfully created.
# Physical volume "/dev/sdd" successfully created.
# Check PVs
sudo pvs
# PV VG Fmt Attr PSize PFree
# /dev/sdc lvm2 --- 500.00g 500.00g
# /dev/sdd lvm2 --- 500.00g 500.00g
# 2. Create Volume Group (combine PVs)
sudo vgcreate data-vg /dev/sdc /dev/sdd
# Volume group "data-vg" successfully created
# Check VGs
sudo vgs
# VG #PV #LV #SN Attr VSize VFree
# data-vg 2 0 0 wz--n- 999.99g 999.99g
# 3. Create Logical Volume
# Create with 500GB size
sudo lvcreate -n app-data -L 500G data-vg
# Logical volume "app-data" created.
# Use all remaining space in VG
sudo lvcreate -n backup-data -l 100%FREE data-vg
# Check LVs
sudo lvs
# LV VG Attr LSize Pool
# app-data data-vg -wi-a----- 500.00g
# backup-data data-vg -wi-a----- 499.99g
# 4. Create filesystem and mount
sudo mkfs.ext4 /dev/data-vg/app-data
sudo mkdir -p /data/app
sudo mount /dev/data-vg/app-data /data/app
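Once volumes are in service, it pays to watch the VG's remaining free space, since that is what future lvextend calls draw from. The sketch below (`vg_free_check` is a made-up name) parses `vgs --noheadings --units g -o vg_name,vg_free` output from stdin, so it can be exercised with canned data on a machine without LVM.

```shell
# Warn when a volume group's free space drops below a gigabyte threshold.
# vg_free_check is a hypothetical helper; reads vgs-style output on stdin.
vg_free_check() {
    awk -v min="$1" '
        NF >= 2 {
            free = $2
            sub(/g$/, "", free)
            if (free + 0 < min)
                printf "[WARNING] VG %s has only %sg free\n", $1, free
        }
    '
}

# Real usage (requires LVM):
# sudo vgs --noheadings --units g -o vg_name,vg_free | vg_free_check 50
```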
LVM Volume Expansion
The process of expanding a volume with zero downtime.
# Add new disk → create PV → add to VG
sudo pvcreate /dev/sde
sudo vgextend data-vg /dev/sde
# Volume group "data-vg" successfully extended
# Extend LV (add 100GB)
sudo lvextend -L +100G /dev/data-vg/app-data
# Size of logical volume data-vg/app-data changed from 500.00 GiB to 600.00 GiB
# Extend filesystem (ext4: online expansion supported)
sudo resize2fs /dev/data-vg/app-data
# resize2fs: Filesystem at /dev/data-vg/app-data is mounted; on-line resizing required
# For xfs filesystems
# sudo xfs_growfs /data/app
# Verify expansion
df -h /data/app
# Filesystem Size Used Avail Use% Mounted on
# /dev/mapper/data--vg-app--data 591G 250G 316G 45% /data/app
You can combine lvextend and the filesystem resize in one step: `sudo lvextend -r -L +100G /dev/data-vg/app-data`. The -r (--resizefs) option detects the filesystem and runs the appropriate resize tool (resize2fs or xfs_growfs) automatically.
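When deciding how much to add, a little arithmetic helps: to bring usage down to a target percentage, the volume has to grow to used/target. The helper below is a sketch with a made-up name (`extend_needed`); it rounds up to the next whole gigabyte.

```shell
# How many GiB to add so that used/size drops to the target percentage.
# Usage: extend_needed <size_g> <used_g> <target_pct>  (hypothetical helper)
extend_needed() {
    awk -v size="$1" -v used="$2" -v target="$3" 'BEGIN {
        need = used * 100 / target - size
        val = (need > 0) ? int(need) + 1 : 0
        print val
    }'
}

extend_needed 500 450 70   # a 500G LV that is 450G full, aiming for 70% usage
```

Here it prints 143: extending by 143G makes the volume 643G, putting usage just under 70%.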
Disk Health Monitoring
How to detect disk failures proactively.
# Check SMART status (smartmontools package)
sudo apt install -y smartmontools
sudo smartctl -H /dev/sda
# SMART overall-health self-assessment test result: PASSED
# Check I/O statistics
iostat -xh 1 3
# Device r/s w/s rkB/s wkB/s %util
# sda 10.5 25.3 420.0 1012.0 15.2%
# sdb 0.5 2.1 20.0 84.0 1.5%
# Disk usage warning script (register in cron)
#!/bin/bash
THRESHOLD=80
# -P prevents long device names from wrapping onto a second line
df -hP | awk -v threshold="$THRESHOLD" '
NR > 1 && $5 + 0 >= threshold {
printf "[WARNING] %s usage %s (mount: %s)\n", $1, $5, $6
}
'
# [WARNING] /dev/sdb1 usage 79% (mount: /data)
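The same alerting pattern works for I/O saturation. The sketch below (`high_util` is a made-up name) reads `iostat -x`-style output on stdin and flags devices whose %util, the last column, exceeds a limit, so it can be tested with canned data.

```shell
# Flag block devices whose %util (last column) exceeds the given limit.
# high_util is a hypothetical helper; reads iostat -x style output on stdin.
high_util() {
    awk -v limit="$1" '
        /^(sd|nvme|vd|xvd|dm-)/ {
            util = $NF
            sub(/%$/, "", util)
            if (util + 0 >= limit)
                printf "[BUSY] %s at %s%% util\n", $1, util
        }
    '
}

# Real usage: iostat -x 1 1 | high_util 80
```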
Practical Tips
- noatime option: Disabling file access time recording reduces disk I/O. Since access time information is unnecessary on most servers, use `noatime` by default.
- LVM snapshots: Use `lvcreate --snapshot` to create point-in-time snapshots of an LV. Creating a snapshot before major updates enables instant recovery if issues arise.
- Disk capacity alerts: Register a cron script that parses df output and sends Slack/email alerts when usage exceeds 80%. This lets you respond before a service outage due to full disks.
- tmpfs usage: For frequent temporary file I/O, using tmpfs (a memory-based filesystem) significantly improves performance. Configure it in `/etc/fstab` with `tmpfs /tmp tmpfs defaults,noatime,size=2G 0 0`.
- XFS vs ext4: XFS is advantageous for environments with large files and high I/O. However, XFS does not support shrinking, so keep this in mind when using it with LVM. ext4 supports online expansion, but shrinking requires the filesystem to be unmounted.