TL;DR — Quick Summary

LVM guide for Linux: extend and resize logical volumes, add disks, grow filesystems, manage snapshots, and resize cloud VM root partitions live.

Managing storage on Linux production servers means you will eventually face a full disk. Logical Volume Manager (LVM) was designed specifically for this moment: it lets you extend, shrink, snapshot, and migrate storage online — often without any downtime. This guide covers the complete LVM workflow from architecture fundamentals through cloud VM root expansion, with real commands you can run immediately.

Prerequisites

Before you begin, make sure you have:

  • A Linux system with lvm2 installed (apt install lvm2 or dnf install lvm2)
  • Root or sudo access
  • Basic familiarity with Linux block devices (lsblk, fdisk, df)
  • For cloud scenarios: ability to resize the disk in Azure Portal / AWS Console before touching Linux

LVM Architecture: PV → VG → LV

LVM adds three abstraction layers between physical disks and the filesystem:

Layer                  Command prefix   Description
Physical Volume (PV)   pv*              A raw disk or partition initialised for LVM (pvcreate)
Volume Group (VG)      vg*              A pool of one or more PVs (vgcreate, vgextend)
Logical Volume (LV)    lv*              A slice of VG space that acts like a partition (lvcreate, lvextend)

The filesystem (ext4, XFS, etc.) lives inside an LV. The OS sees the LV as a block device (e.g. /dev/myvg/data) and has no knowledge of the underlying physical disks. This indirection is what enables online resizing and transparent migration between disks.
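The three layers can be seen in a minimal end-to-end sketch. This assumes a spare, blank disk at /dev/sdb; the names myvg, data, and the mount point /mnt/data are placeholders, not anything your system already has:

```shell
# Minimal sketch: one disk, one VG, one LV, one filesystem.
sudo pvcreate /dev/sdb               # layer 1: initialise the disk as a PV
sudo vgcreate myvg /dev/sdb          # layer 2: pool it into a VG
sudo lvcreate -L 20G -n data myvg    # layer 3: carve a 20 GB LV from the pool
sudo mkfs.ext4 /dev/myvg/data        # the filesystem lives inside the LV
sudo mount /dev/myvg/data /mnt/data  # the OS sees only the LV block device
```

From the filesystem's point of view, /dev/myvg/data is just another block device; which physical disk backs it can change later without remounting.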

Check the Current LVM Layout

Always start by understanding what you have:

# List all Physical Volumes
pvs

# List all Volume Groups
vgs

# List all Logical Volumes
lvs

# Full detail on a specific VG
vgdisplay myvg

# Full detail on a specific LV
lvdisplay /dev/myvg/mylv

# Show block device tree (disks → partitions → LVs)
lsblk

# Show mounted filesystem usage
df -h

A typical pvs output looks like:

  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  myvg lvm2 a--  <99.00g     0
  /dev/sdb   myvg lvm2 a--   50.00g 50.00g

The PFree column shows unallocated space in the VG — this is what lvextend draws from.
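Under the hood, VG space is allocated in physical extents (PEs), which is why lvextend accepts both -l (extent counts, as in +100%FREE) and -L (byte sizes). A quick arithmetic sketch, assuming the default PE size of 4 MiB:

```shell
# lvextend -l counts physical extents (PEs); -L counts bytes.
# With the default PE size of 4 MiB, a 10 GiB LV needs this many extents:
pe_size_mib=4
target_gib=10
extents=$(( target_gib * 1024 / pe_size_mib ))
echo "$extents extents"
```

The actual PE size of a VG is shown by vgdisplay ("PE Size").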

Adding a New Disk as a Physical Volume

When you attach a new disk (e.g. /dev/sdb in a VM, or a new EBS/managed disk), initialise it as a PV:

# Confirm the new disk is visible
lsblk

# Initialise the disk as a Physical Volume
pvcreate /dev/sdb

# Verify
pvs

You do not need to partition the disk first — using the whole disk (/dev/sdb) is fine and simpler. If you need a partition, create one first with fdisk or parted and use /dev/sdb1.
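pvcreate refuses a disk that still carries filesystem or RAID signatures, so it is worth checking the disk first. A sketch using wipefs, where -n is a dry run that reports without erasing:

```shell
# Check for leftover filesystem/RAID signatures before pvcreate.
# -n (no-act) only reports; it does not erase anything.
sudo wipefs -n /dev/sdb

# If old signatures are present and the disk is definitely disposable:
# sudo wipefs -a /dev/sdb    # erases all signatures (destructive)
```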

Extending a VG and LV

Step 1 — Add the PV to the VG

vgextend myvg /dev/sdb

Step 2 — Extend the LV

Consume all remaining free space in the VG:

lvextend -l +100%FREE /dev/myvg/mylv

Add a fixed amount (e.g. 10 GB):

lvextend -L +10G /dev/myvg/mylv

Set an absolute target size (e.g. exactly 50 GB):

lvextend -L 50G /dev/myvg/mylv

The -r flag combines lvextend with filesystem resize in one step (works for ext4 and XFS):

lvextend -l +100%FREE -r /dev/myvg/mylv

Resizing the Filesystem

After extending the LV, the filesystem must be told about the new space. This step is safe on a live, mounted filesystem.

ext4 (most Ubuntu/Debian systems)

resize2fs /dev/myvg/mylv

No size argument needed — resize2fs fills the entire LV. You can run this while the filesystem is mounted.

XFS (most RHEL/CentOS/Rocky systems)

XFS uses the mount point, not the device:

xfs_growfs /mountpoint
# or for root:
xfs_growfs /

Important: XFS can only grow, never shrink. Attempting to shrink an XFS filesystem is not supported and will fail.

After resizing, verify:

df -h /mountpoint

Extending the Root Partition on Cloud VMs (No Reboot)

This is the most common real-world scenario. You have a cloud VM (Azure or AWS) whose OS disk is running low on space. Here is the full sequence — no reboot required.

1. Resize the disk in the cloud console

  • Azure: Storage → Disks → select the OS disk → Disk size → increase → Save
  • AWS: EC2 → Volumes → Modify Volume → set new size → Modify

Wait for the resize to complete before proceeding.
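Occasionally the running kernel does not notice the new size on its own. A hedged sketch for triggering a rescan, assuming a SCSI-attached OS disk named sda (the sysfs path varies with the disk driver):

```shell
# If lsblk still reports the old size after the cloud-side resize,
# ask the kernel to rescan the disk:
echo 1 | sudo tee /sys/class/block/sda/device/rescan
lsblk   # should now show the new size
```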

2. Extend the partition on the Linux side

# Confirm the kernel sees the new size
lsblk

# Grow the partition (e.g. partition 2 on /dev/sda)
sudo growpart /dev/sda 2

# Verify the partition is now larger
lsblk

growpart comes from the cloud-guest-utils package on Debian/Ubuntu and cloud-utils-growpart on RHEL-family systems. It extends the partition in place without destroying data.

3. Inform LVM about the new PV size

sudo pvresize /dev/sda2

4. Extend the LV

sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv

Replace ubuntu-vg / ubuntu-lv with the actual VG and LV names from lvs.

5. Grow the filesystem

For ext4:

sudo resize2fs /dev/ubuntu-vg/ubuntu-lv

For XFS:

sudo xfs_growfs /

6. Confirm

df -h /

The root filesystem now shows the expanded size. No reboot needed at any step.
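Steps 2–5 condense into a short script. This is a sketch, not a universal tool: the device /dev/sda2 and the VG/LV names are assumptions to replace with your own (check lvs), and -r lets lvextend resize the filesystem in the same step for both ext4 and XFS:

```shell
#!/bin/sh -e
# One-shot sketch of the Linux-side expansion after a cloud disk resize.
# Adjust DISK, PART, and LV to match your layout (see lsblk and lvs).
DISK=/dev/sda
PART=2
LV=/dev/ubuntu-vg/ubuntu-lv

sudo growpart "$DISK" "$PART"          # grow the partition
sudo pvresize "${DISK}${PART}"         # tell LVM the PV grew
sudo lvextend -l +100%FREE -r "$LV"    # extend the LV and resize the filesystem
df -h /                                # confirm
```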

Shrinking a Logical Volume (Dangerous — ext4 Only)

Shrinking is riskier than extending and should only be done when you have a recent backup. XFS cannot be shrunk — this procedure applies only to ext4.

# 1. Unmount the filesystem (root cannot be shrunk live)
umount /dev/myvg/data

# 2. Check and repair filesystem before shrinking
e2fsck -f /dev/myvg/data

# 3. Shrink the filesystem FIRST (to the target size, e.g. 20G)
resize2fs /dev/myvg/data 20G

# 4. Shrink the LV to match (must be >= filesystem size)
lvreduce -L 20G /dev/myvg/data

# 5. Remount
mount /dev/myvg/data /mnt/data

# 6. Verify
df -h /mnt/data

Warning: Always shrink the filesystem before the LV. Shrinking the LV first destroys data by truncating the filesystem.
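A less error-prone variant is to let LVM drive the ordering itself. The -r (--resizefs) flag on lvreduce calls fsadm, which runs e2fsck and resize2fs for you before reducing the LV and aborts if the filesystem cannot shrink to the requested size:

```shell
# Sketch: -r makes lvreduce shrink the filesystem first, in the safe order.
sudo umount /dev/myvg/data
sudo lvreduce -L 20G -r /dev/myvg/data
sudo mount /dev/myvg/data /mnt/data
```

This removes the chance of running lvreduce before resize2fs by mistake.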

LVM Snapshots

Snapshots capture the state of an LV at a point in time. They are invaluable before risky operations (OS upgrades, database migrations).

# Create a 5 GB snapshot of /dev/myvg/mylv
lvcreate --snapshot -L 5G -n mylv_snap /dev/myvg/mylv

# Mount the snapshot read-only for inspection
mount -o ro /dev/myvg/mylv_snap /mnt/snapshot

# Roll back: merge the snapshot into the origin. If the origin is in use
# (e.g. the root LV), the merge is deferred until its next activation,
# typically the next reboot.
lvconvert --merge /dev/myvg/mylv_snap

# Remove the snapshot without rolling back
lvremove /dev/myvg/mylv_snap

Snapshots are CoW (Copy-on-Write): they only consume space as the origin LV changes. Keep the snapshot LV smaller than the expected writes during the snapshot lifetime — if the snapshot runs out of space it becomes invalid.
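Because a full snapshot becomes invalid, its CoW usage is worth watching while it exists. A small sketch using the data_percent field of lvs (VG and snapshot names as in the example above):

```shell
# Watch how full the snapshot's CoW area is (the data_percent column):
sudo lvs -o lv_name,origin,data_percent myvg

# If usage climbs toward 100%, grow the snapshot before it invalidates:
sudo lvextend -L +2G /dev/myvg/mylv_snap
```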

Moving Data Between Physical Volumes

pvmove migrates extents between PVs while the LV stays online. Use this to drain a disk before removing it.

# Move all extents from /dev/sdb to any other PV in the VG
pvmove /dev/sdb

# Move extents of a specific LV only
pvmove -n /dev/myvg/mylv /dev/sdb /dev/sdc

# Monitor progress
lvs -a -o name,copy_percent

pvmove can be safely interrupted and resumed. After it completes:

# Remove the PV from the VG
vgreduce myvg /dev/sdb

# Remove LVM metadata from the disk
pvremove /dev/sdb

LVM vs ZFS vs Btrfs vs Plain Partitions

Feature                   LVM                  ZFS              Btrfs            Plain partitions
Online extend             Yes                  Yes              Yes              No
Online shrink             ext4 only (unmount)  No               Yes              No
Snapshots                 Yes (CoW)            Yes (CoW)        Yes (CoW)        No
RAID built-in             Via dm-raid          Yes              Yes (limited)    Via mdadm
Transparent compression   No                   Yes              Yes              No
Self-healing (checksums)  No                   Yes              Partial          No
Complexity                Medium               High             Medium           Low
Distro support            Universal            Ubuntu/Proxmox   Fedora/openSUSE  Universal
Cloud VM default          Yes (RHEL/Ubuntu)    Proxmox only     Fedora only      Legacy

When to use LVM: You need flexible resizing on standard distros without learning a new storage model. It is the default on RHEL, Ubuntu Server, and most cloud images.

When to use ZFS: You need checksums, built-in RAID, or compression and are on Ubuntu or a storage appliance.

When to use Btrfs: You are on Fedora/openSUSE and want CoW without ZFS licensing questions.

Gotchas and Common Pitfalls

  • Resizing filesystem before LV when shrinking destroys data. Always shrink filesystem first.
  • Snapshot full = snapshot invalid. Monitor snapshot usage with lvs and expand if needed.
  • XFS cannot shrink. xfs_growfs refuses any size smaller than the current filesystem and fails with a clear error.
  • pvmove on a PV backing an active root filesystem works but is slow; schedule it during low traffic.
  • Forgetting pvresize after a cloud disk resize means LVM still sees the old PV size and lvextend has nothing to consume.
  • LVM thin pools (thin provisioning) have separate tooling (lvconvert --thin, lvcreate --virtualsize). Overprovisioning without monitoring causes silent data loss when the pool runs out of space.
  • LUKS + LVM ordering: For encrypted volumes, LUKS sits below LVM (/dev/sda2 → LUKS → dm-0 → PV). Resize LUKS with cryptsetup resize after extending the partition and before pvresize.
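The LUKS ordering above can be sketched as a resize sequence. This assumes the LUKS container on /dev/sda2 is opened as /dev/mapper/cryptroot and holds the PV; the mapper name and the VG/LV names are placeholders for your own:

```shell
# Resize order for a partition → LUKS → PV → VG → LV stack:
sudo growpart /dev/sda 2                          # 1. grow the partition
sudo cryptsetup resize cryptroot                  # 2. grow the LUKS container
sudo pvresize /dev/mapper/cryptroot               # 3. grow the PV inside it
sudo lvextend -l +100%FREE -r /dev/myvg/mylv      # 4. grow the LV and filesystem
```

Each layer must be grown before the layer above it can see the new space.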

Summary

  • LVM adds three layers — Physical Volume, Volume Group, Logical Volume — enabling flexible online storage management.
  • Use pvs / vgs / lvs and lsblk to audit the current layout before making changes.
  • Extending is always safe: pvcreate → vgextend → lvextend → resize2fs / xfs_growfs.
  • Cloud VM root expansion requires growpart → pvresize → lvextend → resize2fs and no reboot.
  • Shrinking is only possible on ext4 and requires unmounting; always backup first.
  • Snapshots are cheap CoW captures — take one before every major change.
  • pvmove safely drains a PV for removal without downtime.
  • XFS only grows; ZFS and Btrfs offer richer features at higher complexity.