TL;DR — Quick Summary

Complete LVM guide for Linux: extend and resize logical volumes, add disks, grow ext4/XFS filesystems, shrink LVs, thin provisioning, snapshots, and cloud VMs.

LVM (Logical Volume Manager) is the standard storage abstraction layer on Linux that lets you resize, snapshot, and migrate disk space without downtime. This guide covers the full LVM workflow: architecture fundamentals, adding disks, extending logical volumes, growing and shrinking filesystems, thin provisioning, snapshots, cloud VM disk growth, and the most common errors you will encounter in production.

Prerequisites

  • Linux system with lvm2 package installed (sudo apt install lvm2 or sudo dnf install lvm2).
  • Root or sudo access.
  • At least one existing LVM volume group, or a disk to initialize from scratch.
  • Basic familiarity with Linux disk partitioning (fdisk, lsblk, df).

LVM Architecture

LVM introduces three abstraction layers between raw disks and mounted filesystems:

| Layer | Command prefix | Description |
|---|---|---|
| Physical Volume (PV) | pv* | A raw disk or partition initialized for LVM use |
| Volume Group (VG) | vg* | A pool of storage built from one or more PVs |
| Logical Volume (LV) | lv* | A virtual block device carved from VG free space |

Physical Extents (PE) are the smallest allocation unit inside a VG (default 4 MiB). Every LV is a contiguous range of PEs. This abstraction is what allows LVs to span multiple disks and be resized at will.
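
The extent numbers behind a VG are directly visible in the reporting commands; a quick sketch (the VG name vg_data is an example):

```shell
# Show extent size plus total and free extent counts for a VG
vgs -o vg_name,vg_extent_size,vg_extent_count,vg_free_count vg_data
```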

The stack looks like this:

/dev/sda1  /dev/sdb1  /dev/sdc    ← physical disks / partitions
    └─── pvcreate ───────────┘

     Volume Group (vg_data)        ← pool of PEs

  lv_root  lv_home  lv_db          ← logical volumes (block devices)

  ext4     ext4     xfs            ← filesystems on top of LVs

Viewing the Current Layout

lsblk — block device tree

lsblk
# NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
# sda               8:0    0   50G  0 disk
# ├─sda1            8:1    0    1G  0 part /boot
# └─sda2            8:2    0   49G  0 part
#   ├─vg_sys-lv_root 253:0  0   20G  0 lvm  /
#   └─vg_sys-lv_home 253:1  0   29G  0 lvm  /home

pvdisplay, vgdisplay, lvdisplay

pvdisplay          # physical volumes: disk, size, PEs used/free
vgdisplay          # volume groups: total / used / free PEs
lvdisplay          # logical volumes: path, size, filesystem type

Quick one-liners:

pvs     # compact PV summary
vgs     # compact VG summary (look at VFree for available space)
lvs     # compact LV summary
df -h   # filesystem usage (what the OS sees)
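
The compact commands are also script-friendly: --noheadings and --nosuffix make the output trivially parseable. A monitoring sketch, assuming a VG named vg_data and an illustrative 10 GiB threshold:

```shell
# Free space of vg_data in raw bytes, no header, no unit suffix
free_bytes=$(vgs --noheadings --nosuffix --units b -o vg_free vg_data | tr -d ' ')

# Alert when free space drops below 10 GiB
if [ "${free_bytes:-0}" -lt $((10 * 1024 * 1024 * 1024)) ]; then
    echo "WARNING: vg_data has less than 10 GiB free"
fi
```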

Adding a New Disk to LVM

This is the most common production operation: a VM gets a new disk and you want to add that capacity to an existing VG.

Step 1 — Partition the new disk

fdisk /dev/sdb
# Inside fdisk:
# n   → new partition
# p   → primary
# 1   → partition number
# [Enter] twice → use full disk
# t   → change type
# 8e  → Linux LVM (use L to list all codes)
# w   → write and exit

For GPT disks (>2 TB or UEFI systems) use gdisk or parted. Set the partition type to Linux LVM (type code 8E00 in gdisk).
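
The same layout can be scripted non-interactively with parted; a sketch assuming the new disk is /dev/sdb:

```shell
# GPT label, one full-disk partition, flagged for LVM (-s = non-interactive)
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 100%
parted -s /dev/sdb set 1 lvm on
```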

Step 2 — Initialize the physical volume

pvcreate /dev/sdb1
# Physical volume "/dev/sdb1" successfully created.

Step 3 — Extend the volume group

vgextend vg_sys /dev/sdb1
# Volume group "vg_sys" successfully extended
vgs    # confirm VFree has increased

Step 4 — Extend the logical volume

# Add exactly 20 GB
lvextend -L +20G /dev/vg_sys/lv_root

# Or consume ALL free space in the VG
lvextend -l +100%FREE /dev/vg_sys/lv_root

Step 5 — Resize the filesystem

ext4 (online, no unmount needed):

resize2fs /dev/vg_sys/lv_root
# resize2fs 1.46.5
# The filesystem on /dev/vg_sys/lv_root is now 12582912 (4k) blocks long.

XFS (online, use mount point):

xfs_growfs /
# data blocks changed from 5242880 to 7864320

Extending an LV Using Existing Free Space in the VG

If you already have free PEs in the VG (visible in vgs under VFree), you skip the disk-add steps and go straight to lvextend:

vgs
# VG      #PV #LV #SN Attr   VSize   VFree
# vg_data   2   3   0 wz--n- 200.00g 45.00g   ← 45 GB free

lvextend -L +30G /dev/vg_data/lv_db
resize2fs /dev/vg_data/lv_db    # ext4
# or
xfs_growfs /var/lib/mysql        # XFS

You can combine lvextend and resize2fs in a single command with the -r flag:

lvextend -L +30G -r /dev/vg_data/lv_db

The -r flag calls the appropriate resize tool (resize2fs for ext4, xfs_growfs for XFS) automatically after extending the LV.


Online vs Offline Resizing

| Filesystem | Grow online? | Shrink online? | Notes |
|---|---|---|---|
| ext4 | Yes | No — must unmount | Most flexible; supports both grow and shrink |
| XFS | Yes | Never | XFS cannot shrink at all — by design |
| Btrfs | Yes | Yes | Online grow and shrink both supported |
| ext3 | Yes | No | Legacy; prefer ext4 |
| swap | No | No | swapoff, resize LV, mkswap, swapon |
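
The swap row above translates into a short offline sequence; a sketch assuming a swap LV named vg_sys/lv_swap:

```shell
# swap cannot be resized in place: disable it, grow the LV, rebuild, re-enable
swapoff /dev/vg_sys/lv_swap
lvextend -L +2G /dev/vg_sys/lv_swap
mkswap /dev/vg_sys/lv_swap
swapon /dev/vg_sys/lv_swap
```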

Shrinking a Logical Volume (ext4 Only — Dangerous)

A botched shrink destroys data and cannot be undone. Always have a backup. XFS cannot be shrunk at all, only grown.

# 1. Unmount the filesystem
umount /dev/vg_data/lv_home

# 2. Check and repair the filesystem BEFORE resizing
e2fsck -f /dev/vg_data/lv_home

# 3. Shrink the filesystem to the target size (must be smaller than the LV)
resize2fs /dev/vg_data/lv_home 50G

# 4. Shrink the logical volume to match (must be >= filesystem size)
lvreduce -L 50G /dev/vg_data/lv_home

# 5. Remount and verify
mount /dev/vg_data/lv_home /home
df -h /home

Critical rule: always run resize2fs to shrink the filesystem BEFORE lvreduce. Shrinking the LV first truncates data and corrupts the filesystem.
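
Like lvextend, lvreduce accepts -r, which calls the filesystem check and shrink (via fsadm) in the correct order before reducing the LV. A sketch collapsing steps 2-4 into one command; the filesystem must still be unmounted first:

```shell
umount /dev/vg_data/lv_home
# -r runs e2fsck and resize2fs before lvreduce touches the LV
lvreduce -L 50G -r /dev/vg_data/lv_home
mount /dev/vg_data/lv_home /home
```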


Thin Provisioning

Thin provisioning lets you overcommit storage — thin LVs are promised more virtual space than their pool physically contains, betting that not all of it will be written simultaneously.

# Create a thin pool (the pool itself occupies real extents, so it must fit in the VG)
lvcreate --thin -L 50G vg_data/thin_pool

# Create thin LVs from the pool — 60 GB promised against a 50 GB pool
lvcreate --thin -V 30G --name lv_web vg_data/thin_pool
lvcreate --thin -V 30G --name lv_db  vg_data/thin_pool

# Monitor actual pool usage
lvs -a vg_data
# Data%  shows how full the pool is — alert at 80%, act at 90%

Warning: if a thin pool fills completely, all LVs in the pool go read-only. Monitor pool usage with lvs or set up a threshold alert.
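
Instead of reacting to alerts manually, LVM can grow a monitored thin pool automatically through dmeventd. A sketch of the relevant /etc/lvm/lvm.conf settings (the threshold values are illustrative):

```
activation {
    # Extend a monitored thin pool when it reaches 80% full...
    thin_pool_autoextend_threshold = 80
    # ...by 20% of its current size each time
    thin_pool_autoextend_percent = 20
}
```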


LVM Snapshots

Snapshots capture the state of an LV at a point in time using copy-on-write. They are the fastest way to take a consistent backup of a live server.

# Create a 5 GB snapshot of lv_root (COW space; not a copy of the full 20 GB LV)
lvcreate --snapshot -n lv_root_snap -L 5G /dev/vg_sys/lv_root

# Mount the snapshot read-only for backup
mount -o ro /dev/vg_sys/lv_root_snap /mnt/snap
tar -czf /backup/root-$(date +%Y%m%d).tar.gz -C /mnt/snap .
umount /mnt/snap

# Remove the snapshot when done
lvremove /dev/vg_sys/lv_root_snap

Merging a snapshot (rollback)

# Revert lv_root to the snapshot state on next boot
lvconvert --merge /dev/vg_sys/lv_root_snap
reboot

Snapshot capacity: if writes to the origin LV exceed the COW space (5 GB above), the snapshot becomes invalid. Size snapshots generously or use thin snapshots, which have no fixed COW budget.
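
A thin snapshot is created by omitting -L when the origin is itself a thin LV; its COW data then draws from the pool. A sketch, assuming the thin LVs from the previous section:

```shell
# Thin snapshot: no fixed size, space comes from vg_data/thin_pool
lvcreate --snapshot --name lv_web_snap vg_data/lv_web

# Thin snapshots are skip-activation by default; -K overrides that
lvchange -ay -K vg_data/lv_web_snap
```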


LVM on Cloud VMs

When you extend a disk in Azure, AWS, or GCP, the OS sees the raw disk grow but the PV still reports the old size. Use pvresize to update LVM’s view:

# Azure example: disk extended from 30 GB to 50 GB in the portal
# Verify the OS sees the new disk size
lsblk        # /dev/sda should show 50G

# Resize the partition if needed (growpart from cloud-utils-growpart)
sudo growpart /dev/sda 2    # grow partition 2 of /dev/sda

# Tell LVM the PV is now larger
pvresize /dev/sda2

# Confirm free space appeared in the VG
vgs

# Extend the LV and grow the filesystem
lvextend -l +100%FREE -r /dev/vg_sys/lv_root

On AWS, a newly attached EBS volume appears as an NVMe device (for example /dev/nvme1n1); if the kernel has not re-read its partition table, run partprobe /dev/nvme1n1 before pvcreate.


LVM RAID

LVM can manage software RAID without mdadm:

# Convert an existing LV to RAID 1 (mirror) across two PVs
lvconvert --type raid1 -m 1 /dev/vg_data/lv_db /dev/sdb1 /dev/sdc1

# Check sync progress
lvs -a -o name,copy_percent vg_data

RAID types available: raid1 (mirror), raid5, raid6, raid10. LVM RAID integrates with the normal lvextend / lvreduce workflow.
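
New LVs can also be created mirrored from the start, and a degraded RAID LV can be rebuilt onto a spare PV. A sketch with illustrative names:

```shell
# Create a new RAID 1 LV directly (one mirror copy)
lvcreate --type raid1 -m 1 -L 10G -n lv_mirror vg_data

# After replacing a failed disk, rebuild the degraded LV onto free PVs
lvconvert --repair vg_data/lv_db
```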


Real-World Scenario: Extending Root on a Running Server

You receive a disk-full alert on a production Ubuntu server. Root (/) is at 98% usage.

# 1. Check what is consuming space
df -h
du -sh /* 2>/dev/null | sort -hr | head -10

# 2. Check VG free space
vgs
# vg_sys    2   3   0 wz--n-  100.00g  12.00g    ← 12 GB free — use it

# 3. Extend lv_root by all free space
lvextend -l +100%FREE /dev/vg_sys/lv_root

# 4. Grow the ext4 filesystem online (no downtime)
resize2fs /dev/vg_sys/lv_root

# 5. Confirm
df -h /
# /dev/mapper/vg_sys-lv_root   92G   78G   10G  89% /

Total downtime: zero. Total time: under 30 seconds.

If the VG had no free space, you would attach a new disk in the cloud portal, run pvcreate + vgextend, then proceed from step 3.


Common Errors

| Error message | Cause | Fix |
|---|---|---|
| Insufficient free space | Not enough PEs in the VG | Add a new PV with pvcreate + vgextend, or use -l +100%FREE |
| Can't reduce LV below used space | resize2fs target smaller than actual data | Run e2fsck -f first, then resize the filesystem before lvreduce |
| Filesystem not showing new size | resize2fs / xfs_growfs not run after lvextend | Run resize2fs /dev/vg/lv or xfs_growfs /mountpoint |
| Device /dev/sdX not found | Partition table not re-read | Run partprobe /dev/sdX then retry pvcreate |
| Volume group not found | VG metadata missing or disk not present | Run vgck, pvscan --cache, or check /etc/lvm/backup/ |
| WARNING: snapshot is full | COW space exhausted | Grow the snapshot (lvextend -L +2G /dev/vg/snap) or recreate it |

LVM vs ZFS vs Btrfs vs mdadm vs Plain Partitions

| Feature | LVM | ZFS | Btrfs | mdadm | Plain partitions |
|---|---|---|---|---|---|
| Online resize | Yes | Yes | Yes | Limited | No |
| Snapshots | Yes (COW) | Yes (COW) | Yes (COW) | No | No |
| RAID | Yes | Yes (native) | Yes | Yes | No |
| Compression | No | Yes | Yes | No | No |
| Deduplication | No | Yes | Partial | No | No |
| Thin provisioning | Yes | Yes | Partial | No | No |
| Learning curve | Medium | High | Medium | Medium | Low |
| Best for | General Linux servers | NAS, FreeBSD, ZFS-native | Desktop, Fedora, Btrfs-native | Legacy RAID | Simple single-disk setups |

Summary

  • LVM adds PV → VG → LV abstraction: resize and move storage without rebooting.
  • Use pvdisplay, vgdisplay, lvdisplay, and vgs to understand the current layout.
  • Add capacity: pvcreate → vgextend → lvextend → resize2fs or xfs_growfs.
  • Growing ext4 and XFS is always online and safe; use the -r flag on lvextend to do it in one step.
  • Shrinking requires unmount + e2fsck + resize2fs before lvreduce. XFS cannot shrink.
  • Thin provisioning allows overcommit; monitor pool usage with lvs to avoid read-only LVs.
  • Snapshots are fast COW backups; always size them generously or use thin snapshots.
  • On cloud VMs, use pvresize after extending the disk in the portal.
  • -r on lvextend runs the filesystem resize automatically — fewer steps, same safety.