RED HAT ENTERPRISE LINUX

Logical Volume Manager

Flexible Storage with LVM

College-Level Course Module | RHEL System Administration

Learning Objectives

1
Understand LVM architecture and concepts

Physical volumes, volume groups, logical volumes, and physical extents

2
Create and manage LVM components

Use pvcreate, vgcreate, lvcreate to build LVM storage

3
Extend and resize logical volumes

Add space to volume groups and grow logical volumes and filesystems

4
Create filesystems and swap on LVM

Format logical volumes and configure persistent mounts

Why LVM?

LVM (Logical Volume Manager) provides a layer of abstraction between physical storage and filesystems, enabling flexible storage management.

Traditional Partitions

  • Fixed size at creation
  • Difficult to resize
  • Limited to single disk
  • Resize requires unmount
  • Complex to add space

LVM Volumes

  • Resize on the fly
  • Grow while mounted (XFS)
  • Span multiple disks
  • Add disks without downtime
  • Snapshots for backup
Real scenario: /var is full. With partitions: back up, unmount, resize the partition, resize the filesystem, restore. With LVM: run lvextend -r and you're done in seconds, with no downtime.
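The LVM fix in that scenario looks like this in practice (a hypothetical layout where /var sits on an LV named "var" in a VG named "rhel" that still has free extents):

```shell
# Hypothetical: /var lives on /dev/rhel/var and the VG has free space
lvextend -r -L +5G /dev/rhel/var   # grow LV and filesystem in one step
df -h /var                         # confirm the new size while still mounted
```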

LVM Architecture

  Filesystems / Swap:      /home (ext4)  |  /var (xfs)   |  swap
  Logical Volumes (LV):    lv_home 20GB  |  lv_var 15GB  |  lv_swap 4GB
  Volume Group (VG):       vg_data (50GB total)
  Physical Volumes (PV):   /dev/sdb 30GB |  /dev/sdc 20GB
  Physical Disks:          sdb           |  sdc

Physical Extents

LVM divides physical volumes into fixed-size chunks called Physical Extents (PE). Logical volumes are allocated in units of extents.

Volume Group divided into Physical Extents (4MB each):

  [LV1][LV1][LV1][LV1][LV1][LV2][LV2][LV2][free][free][free][free]

  LV1 (5 PEs = 20MB)   LV2 (3 PEs = 12MB)   Free (4 PEs = 16MB)
# Default PE size is 4MB. Check with:
[root@server ~]# vgdisplay vg_data | grep "PE Size"
  PE Size               4.00 MiB

# Create VG with custom PE size (must be power of 2)
[root@server ~]# vgcreate -s 16M vg_large /dev/sdb
PE Size matters: Larger PEs reduce metadata but waste space with small LVs. Default 4MB works for most cases. For very large storage, consider 16MB or 32MB.
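Extent granularity also means every LV size is rounded up to a whole number of extents. A sketch (the exact message format may vary by lvm2 version):

```shell
# With the default 4 MiB PE size, a 10 MiB request is rounded to 12 MiB
lvcreate -n lv_tiny -L 10M vg_data
#   Rounding up size to full physical extent 12.00 MiB
```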

LVM Device Naming

# Logical volumes appear as block devices in two locations:

# 1. Device mapper path (used internally)
/dev/dm-0
/dev/dm-1

# 2. Friendly paths via symbolic links
/dev/vgname/lvname           # Traditional format
/dev/mapper/vgname-lvname    # Device mapper format

# Example: LV "lv_data" in VG "vg_storage"
/dev/vg_storage/lv_data
/dev/mapper/vg_storage-lv_data

# View the links
[root@server ~]# ls -l /dev/vg_storage/
lrwxrwxrwx. 1 root root 7 Jan 20 10:00 lv_data -> ../dm-2

[root@server ~]# ls -l /dev/mapper/
lrwxrwxrwx. 1 root root 7 Jan 20 10:00 vg_storage-lv_data -> ../dm-2

# Both paths work identically
[root@server ~]# mount /dev/vg_storage/lv_data /mnt/data
[root@server ~]# mount /dev/mapper/vg_storage-lv_data /mnt/data
Best practice: Use /dev/mapper/vgname-lvname format in /etc/fstab - it's consistent and clearly shows both VG and LV names.
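To see every path for your LVs at once, lvs can report both forms (lv_path and lv_dm_path are standard lvs report fields):

```shell
# List the friendly path and the /dev/mapper path for each LV
lvs -o lv_name,vg_name,lv_path,lv_dm_path
```

Note that device mapper escapes hyphens inside a VG or LV name by doubling them (VG "vg-a" becomes /dev/mapper/vg--a-...), one more reason to prefer underscores in names.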

Viewing LVM Information

# View physical volumes
[root@server ~]# pvs
  PV         VG      Fmt  Attr PSize   PFree
  /dev/sda2  rhel    lvm2 a--  <99.00g    0 
  /dev/sdb   vg_data lvm2 a--  50.00g 10.00g

# View volume groups
[root@server ~]# vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  rhel      1   2   0 wz--n- <99.00g    0 
  vg_data   1   2   0 wz--n-  50.00g 10.00g

# View logical volumes
[root@server ~]# lvs
  LV      VG      Attr       LSize   Pool Origin Data%  Meta%
  root    rhel    -wi-ao---- <95.00g                    
  swap    rhel    -wi-ao----   4.00g                    
  lv_data vg_data -wi-a-----  40.00g

# Detailed information
[root@server ~]# pvdisplay /dev/sdb
[root@server ~]# vgdisplay vg_data
[root@server ~]# lvdisplay /dev/vg_data/lv_data

# See everything with lsblk
[root@server ~]# lsblk
Command pattern: pvs/vgs/lvs for summary, pvdisplay/vgdisplay/lvdisplay for details. Same pattern: pv*, vg*, lv*.
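All three report commands also accept -o to choose columns and --units to set units, which is handy for scripting (the column names below are standard lvm2 report fields):

```shell
# Pick exact columns and units; --noheadings makes output script-friendly
lvs --noheadings -o lv_name,lv_size --units m vg_data
pvs --noheadings -o pv_name,pv_free --units g
```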

Creating Physical Volumes

# Check available disks
[root@server ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdb      8:16   0   50G  0 disk              # Available!
sdc      8:32   0   30G  0 disk              # Available!

# Initialize disks as physical volumes
[root@server ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

[root@server ~]# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created.

# Or create multiple at once
[root@server ~]# pvcreate /dev/sdb /dev/sdc

# Can also use partitions (MBR type 8e / GPT type "Linux LVM" recommended but not required)
[root@server ~]# pvcreate /dev/sdd1

# Verify
[root@server ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdb        lvm2 ---  50.00g 50.00g
  /dev/sdc        lvm2 ---  30.00g 30.00g
Warning: pvcreate writes to the beginning of the device, destroying any existing data or partition table. Double-check the device name!
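Given how destructive pvcreate is, a quick pre-flight check is cheap insurance (wipefs shown only for the case where you are certain the old contents are disposable):

```shell
# Confirm the target device holds nothing you need before pvcreate
lsblk -f /dev/sdb      # FSTYPE and MOUNTPOINTS columns should be empty
blkid /dev/sdb         # no output usually means no known signature
# Only if stale signatures remain AND the data is disposable:
wipefs -a /dev/sdb     # erase all filesystem/RAID/LVM signatures
```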

Creating Volume Groups

# Create volume group with one PV
[root@server ~]# vgcreate vg_data /dev/sdb
  Volume group "vg_data" successfully created

# Create volume group with multiple PVs
[root@server ~]# vgcreate vg_storage /dev/sdb /dev/sdc
  Volume group "vg_storage" successfully created

# Create with custom PE size
[root@server ~]# vgcreate -s 16M vg_large /dev/sdb

# Verify
[root@server ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  vg_storage   2   0   0 wz--n- 79.99g 79.99g

[root@server ~]# vgdisplay vg_storage
  --- Volume group ---
  VG Name               vg_storage
  System ID             
  Format                lvm2
  VG Size               79.99 GiB
  PE Size               4.00 MiB
  Total PE              20478
  Alloc PE / Size       0 / 0   
  Free  PE / Size       20478 / 79.99 GiB
  VG UUID               abc123-def4-5678-90ab-cdef12345678
Naming convention: Use descriptive VG names like vg_data, vg_mysql, vg_backup. The "vg_" prefix makes LVM objects easy to identify.

Creating Logical Volumes

# Create LV with specific size
[root@server ~]# lvcreate -n lv_data -L 20G vg_storage
  Logical volume "lv_data" created.

# Create LV using percentage of VG free space
[root@server ~]# lvcreate -n lv_logs -l 50%FREE vg_storage
  Logical volume "lv_logs" created.

# Create LV using all remaining space
[root@server ~]# lvcreate -n lv_backup -l 100%FREE vg_storage

# Create LV with specific number of extents
[root@server ~]# lvcreate -n lv_small -l 100 vg_storage  # 100 PEs

# Verify
[root@server ~]# lvs
  LV        VG         Attr       LSize  
  lv_data   vg_storage -wi-a----- 20.00g
  lv_logs   vg_storage -wi-a----- 29.99g

# The LV is now available as a block device
[root@server ~]# ls -l /dev/vg_storage/lv_data
lrwxrwxrwx. 1 root root 7 Jan 20 10:00 /dev/vg_storage/lv_data -> ../dm-2
Size options: -L for absolute size (20G, 500M), -l for extents or percentage (100, 50%FREE, 100%VG).
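Percentages are computed against the space free at allocation time, not the original VG size, which is why lv_logs above came out at 29.99g rather than 40g. Working it through with the PE counts from the earlier vgdisplay (20478 PEs total, 4 MiB each; lv_data consumed 5120):

```shell
# Why 50%FREE gave ~30G after a 20G LV was carved from the ~80G VG
total_pe=20478; used_pe=5120; pe_mib=4
free_pe=$(( total_pe - used_pe ))       # 15358 extents still free
half_mib=$(( free_pe / 2 * pe_mib ))    # 50%FREE, expressed in MiB
echo "$half_mib MiB"                    # prints "30716 MiB" ≈ 29.99 GiB
```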

Filesystems on LVM

# Create XFS filesystem on logical volume
[root@server ~]# mkfs.xfs /dev/vg_storage/lv_data
meta-data=/dev/vg_storage/lv_data isize=512    agcount=4, agsize=1310720 blks
data     =                       bsize=4096   blocks=5242880, imaxpct=25
...

# Create ext4 filesystem
[root@server ~]# mkfs.ext4 /dev/vg_storage/lv_logs

# Create mount point and mount
[root@server ~]# mkdir /data
[root@server ~]# mount /dev/vg_storage/lv_data /data

# Verify
[root@server ~]# df -h /data
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/vg_storage-lv_data   20G   33M   20G   1% /data

# Get UUID for fstab
[root@server ~]# blkid /dev/vg_storage/lv_data
/dev/vg_storage/lv_data: UUID="abc123-..." TYPE="xfs"

# Add to /etc/fstab (UUID or mapper path)
/dev/mapper/vg_storage-lv_data  /data  xfs  defaults  0  0
# Or with UUID:
UUID=abc123-...  /data  xfs  defaults  0  0
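A quick, low-risk way to validate the new fstab entry before the next reboot (findmnt --verify needs util-linux 2.30 or later):

```shell
# Test the fstab entry without rebooting
umount /data              # unmount if currently mounted
mount -a                  # mount everything listed in fstab
findmnt --verify          # parse fstab and report any problems
df -h /data               # confirm /data is mounted again
```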

Swap on LVM

# Create logical volume for swap
[root@server ~]# lvcreate -n lv_swap -L 4G vg_storage
  Logical volume "lv_swap" created.

# Format as swap
[root@server ~]# mkswap /dev/vg_storage/lv_swap
Setting up swapspace version 1, size = 4 GiB (4294963200 bytes)
no label, UUID=swap-uuid-1234-5678-90ab-cdef

# Activate swap
[root@server ~]# swapon /dev/vg_storage/lv_swap

# Verify
[root@server ~]# swapon --show
NAME                           TYPE      SIZE USED PRIO
/dev/dm-1                      partition   4G   0B   -2
/dev/mapper/vg_storage-lv_swap partition   4G   0B   -3

# Add to /etc/fstab
/dev/mapper/vg_storage-lv_swap  none  swap  defaults  0  0

# Test fstab entry
[root@server ~]# swapoff /dev/vg_storage/lv_swap
[root@server ~]# swapon -a
[root@server ~]# swapon --show
LVM swap advantage: Need more swap? Extend the LV, run mkswap again (after swapoff), and swapon. Or create additional swap LVs.
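Growing swap follows the same extend pattern, except swap has no filesystem to resize, so it must be deactivated and re-initialized (a sketch; sizes assumed):

```shell
# Grow swap from 4G to 8G; swap is recreated, not resized in place
swapoff /dev/vg_storage/lv_swap          # deactivate (needs enough free RAM)
lvextend -L 8G /dev/vg_storage/lv_swap   # grow the LV itself
mkswap /dev/vg_storage/lv_swap           # write a new swap signature
swapon /dev/vg_storage/lv_swap           # reactivate
swapon --show                            # verify the new size
```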

Extending Volume Groups

# Check current VG size
[root@server ~]# vgs vg_storage
  VG         #PV #LV #SN Attr   VSize  VFree
  vg_storage   1   3   0 wz--n- 49.99g  5.99g

# New disk available - prepare as PV
[root@server ~]# pvcreate /dev/sdd
  Physical volume "/dev/sdd" successfully created.

# Add PV to existing volume group
[root@server ~]# vgextend vg_storage /dev/sdd
  Volume group "vg_storage" successfully extended

# Verify - VG is now larger
[root@server ~]# vgs vg_storage
  VG         #PV #LV #SN Attr   VSize   VFree
  vg_storage   2   3   0 wz--n- 99.99g  55.99g

[root@server ~]# pvs
  PV         VG         Fmt  Attr PSize  PFree
  /dev/sdc   vg_storage lvm2 a--  49.99g  5.99g
  /dev/sdd   vg_storage lvm2 a--  50.00g 50.00g
No downtime! vgextend adds space while the VG is in use. Logical volumes and filesystems continue running.

Extending Logical Volumes

# Check current LV and VG free space
[root@server ~]# lvs
  LV      VG         Attr       LSize 
  lv_data vg_storage -wi-ao---- 20.00g
[root@server ~]# vgs vg_storage
  VG         VSize   VFree
  vg_storage 99.99g  55.99g

# Extend LV by specific amount
[root@server ~]# lvextend -L +10G /dev/vg_storage/lv_data
  Size of logical volume vg_storage/lv_data changed from 20.00 GiB to 30.00 GiB.
  Logical volume vg_storage/lv_data successfully resized.

# Extend LV to specific size
[root@server ~]# lvextend -L 50G /dev/vg_storage/lv_data

# Extend to use all free space in VG
[root@server ~]# lvextend -l +100%FREE /dev/vg_storage/lv_data

# Extend by percentage of current size
[root@server ~]# lvextend -l +50%LV /dev/vg_storage/lv_data

# Verify
[root@server ~]# lvs /dev/vg_storage/lv_data
  LV      VG         Attr       LSize 
  lv_data vg_storage -wi-ao---- 30.00g
Not done yet! lvextend only grows the LV. The filesystem still needs to be resized to use the new space.

Resizing Filesystems

# After lvextend, grow the XFS filesystem
[root@server ~]# xfs_growfs /data    # Note: mount point, not device
meta-data=/dev/mapper/vg_storage-lv_data isize=512    agcount=4
data blocks changed from 5242880 to 7864320

# For ext4: use resize2fs with device
[root@server ~]# resize2fs /dev/vg_storage/lv_logs
Resizing the filesystem on /dev/vg_storage/lv_logs to 7864320 (4k) blocks.

# Verify new size
[root@server ~]# df -h /data
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/vg_storage-lv_data   30G   33M   30G   1% /data

# SHORTCUT: Extend LV and filesystem in one command!
[root@server ~]# lvextend -r -L +10G /dev/vg_storage/lv_data
  Size of logical volume vg_storage/lv_data changed from 30.00 GiB to 40.00 GiB.
  Logical volume vg_storage/lv_data successfully resized.
meta-data=/dev/mapper/vg_storage-lv_data ...
data blocks changed from 7864320 to 10485760
Use -r option! lvextend -r automatically resizes the filesystem. One command does both steps.

Reducing Logical Volumes

⚠ Danger Zone: Shrinking is risky and not always possible. XFS cannot be shrunk. ext4 requires unmounting. Always backup first!
# Check filesystem type first
[root@server ~]# df -Th /logs
Filesystem                     Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_storage-lv_logs ext4   30G   5G   23G  18% /logs

# For ext4 ONLY - unmount first!
[root@server ~]# umount /logs

# Check filesystem
[root@server ~]# e2fsck -f /dev/vg_storage/lv_logs

# Reduce both filesystem and LV in one command
[root@server ~]# lvreduce -r -L 15G /dev/vg_storage/lv_logs
fsck from util-linux 2.37.4
...
resize2fs ...
  Size of logical volume vg_storage/lv_logs changed from 30.00 GiB to 15.00 GiB.

# Remount
[root@server ~]# mount /dev/vg_storage/lv_logs /logs
XFS cannot shrink! To shrink an XFS volume you must back up the data, remove the LV, create a smaller LV with a fresh XFS filesystem, and restore the data.
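A sketch of that XFS workaround using xfsdump/xfsrestore (hypothetical LV, mount point, and backup path; xfsdump prompts for session and media labels unless -L and -M are given):

```shell
# Hypothetical: "shrink" an XFS LV from 30G to 15G via backup/restore
xfsdump -L web -M web -f /backup/web.dump /web   # back up while mounted
umount /web
lvremove /dev/vg_storage/lv_web                  # destroys the old LV
lvcreate -n lv_web -L 15G vg_storage             # recreate smaller
mkfs.xfs /dev/vg_storage/lv_web
mount /dev/vg_storage/lv_web /web
xfsrestore -f /backup/web.dump /web              # restore the data
```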

Removing LVM Components

# Remove in reverse order: LV → VG → PV

# 1. Unmount the filesystem
[root@server ~]# umount /data

# 2. Remove the logical volume
[root@server ~]# lvremove /dev/vg_storage/lv_data
Do you really want to remove active logical volume vg_storage/lv_data? [y/n]: y
  Logical volume "lv_data" successfully removed

# 3. Remove volume group (must be empty - no LVs)
[root@server ~]# vgremove vg_storage
  Volume group "vg_storage" successfully removed

# 4. Remove physical volumes
[root@server ~]# pvremove /dev/sdb /dev/sdc
  Labels on physical volume "/dev/sdb" successfully wiped.
  Labels on physical volume "/dev/sdc" successfully wiped.

# Remove PV from VG (without deleting VG) - moves data first
[root@server ~]# pvmove /dev/sdb          # Move data off this PV
[root@server ~]# vgreduce vg_storage /dev/sdb  # Remove from VG
[root@server ~]# pvremove /dev/sdb        # Remove PV
Order matters! Remove LVs before VG, VG before PVs. Removing components with data destroys that data.

LVM Workflow

1

Create Physical Volumes: pvcreate /dev/sdb /dev/sdc

2

Create Volume Group: vgcreate vg_data /dev/sdb /dev/sdc

3

Create Logical Volume: lvcreate -n lv_files -L 50G vg_data

4

Create Filesystem: mkfs.xfs /dev/vg_data/lv_files

5

Mount: mkdir /files && mount /dev/vg_data/lv_files /files

6

Persist in fstab: /dev/mapper/vg_data-lv_files /files xfs defaults 0 0

Extend Workflow:
pvcreate → vgextend → lvextend -r → Done!
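The six steps above as one uninterrupted session (device names and sizes assumed, as in the steps):

```shell
# Build new LVM storage end to end, then verify the fstab entry
pvcreate /dev/sdb /dev/sdc              # step 1: physical volumes
vgcreate vg_data /dev/sdb /dev/sdc      # step 2: volume group
lvcreate -n lv_files -L 50G vg_data     # step 3: logical volume
mkfs.xfs /dev/vg_data/lv_files          # step 4: filesystem
mkdir /files                            # step 5: mount point
mount /dev/vg_data/lv_files /files      #         and mount
echo '/dev/mapper/vg_data-lv_files /files xfs defaults 0 0' >> /etc/fstab
mount -a && findmnt /files              # step 6: verify persistence
```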

Troubleshooting

# LV not visible? Scan for volume groups
[root@server ~]# vgscan
[root@server ~]# vgchange -ay    # Activate all VGs

# Check if VG is active
[root@server ~]# vgs -o +lv_active

# Device mapper not creating links?
[root@server ~]# dmsetup ls
[root@server ~]# udevadm trigger

# Check for errors
[root@server ~]# dmesg | grep -i lvm
[root@server ~]# journalctl -b | grep -i lvm

# Filesystem shows old size after lvextend?
# You forgot to resize the filesystem!
[root@server ~]# xfs_growfs /mountpoint   # For XFS
[root@server ~]# resize2fs /dev/vg/lv     # For ext4

# Can't extend - no free space in VG
[root@server ~]# vgs     # Check VFree column
# Solution: vgextend to add more PVs

# LV won't activate - missing PV
[root@server ~]# vgchange -ay --partial vg_name
Most common issue: Extended LV but forgot to resize filesystem. Always use lvextend -r to do both!

Best Practices

✓ Do

  • Use descriptive VG and LV names
  • Leave some free space in VGs for growth
  • Use lvextend -r to resize both
  • Use mapper paths or UUIDs in fstab
  • Document your LVM layout
  • Test fstab changes with mount -a
  • Backup before any shrink operations
  • Use XFS for grow-only workloads

✗ Don't

  • Use all VG space immediately
  • Forget to resize filesystem after lvextend
  • Try to shrink XFS (impossible)
  • Shrink LV before shrinking filesystem
  • Remove PVs with data without pvmove
  • Ignore fstab after LVM changes
  • Use /dev/dm-X paths (not stable)
  • Skip verification after changes
Golden rule: Leave 10-20% free space in volume groups. It costs nothing and saves emergency late-night resizing.
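One way to bake the golden rule into provisioning is to allocate by percentage instead of consuming everything (a sketch; 80% is just a starting point):

```shell
# Allocate 80% of the VG up front and deliberately keep 20% in reserve
lvcreate -n lv_data -l 80%VG vg_storage
vgs -o vg_name,vg_size,vg_free vg_storage   # confirm headroom remains
```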

Key Takeaways

1

Architecture: Physical Volumes → Volume Groups → Logical Volumes → Filesystems. Layers provide flexibility.

2

Create: pvcreate, vgcreate, lvcreate. Then mkfs and mount. Add to fstab for persistence.

3

Extend: pvcreate new disk → vgextend → lvextend -r. No downtime, filesystem grows online.

4

Key commands: pvs/vgs/lvs for status. pvdisplay/vgdisplay/lvdisplay for details. lvextend -r for safe resize.

LAB EXERCISES

  • Create physical volumes from available disks
  • Create a volume group and verify with vgdisplay
  • Create a logical volume, format with XFS, mount and add to fstab
  • Extend the VG by adding another physical volume
  • Extend the LV and filesystem with lvextend -r
  • Create a swap LV and configure in fstab

Next: Managing Layered Storage with Stratis