RED HAT ENTERPRISE LINUX
Logical Volume Manager
Flexible Storage with LVM
College-Level Course Module | RHEL System Administration
Welcome to this comprehensive module on Logical Volume Manager, or LVM. LVM is one of the most powerful storage technologies in Linux, providing flexibility that traditional partitions cannot match. With regular partitions, you're locked into fixed sizes - resizing means backing up data, deleting partitions, recreating them, and restoring. LVM abstracts storage into logical layers, allowing you to resize volumes on the fly, span multiple physical disks, add storage without downtime, and even take snapshots of your data. RHEL uses LVM by default for root filesystems during installation, and understanding LVM is essential for the RHCSA exam and real-world administration. In this module, you'll learn the LVM architecture, create and manage physical volumes, volume groups, and logical volumes, and see how to extend storage when you need more space.
Learning Objectives
1
Understand LVM architecture and concepts
Physical volumes, volume groups, logical volumes, and physical extents
2
Create and manage LVM components
Use pvcreate, vgcreate, lvcreate to build LVM storage
3
Extend and resize logical volumes
Add space to volume groups and grow logical volumes and filesystems
4
Create filesystems and swap on LVM
Format logical volumes and configure persistent mounts
We have four main learning objectives. First, you'll understand LVM architecture - the layered structure of physical volumes, volume groups, and logical volumes. This conceptual foundation is essential for using LVM effectively. Second, you'll create and manage LVM components using the command-line tools: pvcreate for physical volumes, vgcreate for volume groups, and lvcreate for logical volumes. You'll build LVM storage from raw disks. Third, you'll learn LVM's killer feature: extending storage. When you need more space, you can add disks to volume groups and grow logical volumes without downtime. This is what makes LVM so valuable in production. Fourth, you'll create filesystems and swap on logical volumes and configure them in /etc/fstab. The end result is usable storage that's flexible and manageable.
Why LVM?
LVM (Logical Volume Manager) provides a layer of abstraction between physical storage and filesystems, enabling flexible storage management.
Traditional Partitions
Fixed size at creation
Difficult to resize
Limited to single disk
Resize requires unmount
Complex to add space
LVM Volumes
Resize on the fly
Grow while mounted (XFS)
Span multiple disks
Add disks without downtime
Snapshots for backup
Real scenario: /var is full. With partitions: backup, unmount, resize partition, resize filesystem, restore. With LVM: lvextend, resize filesystem - done in seconds, no downtime.
Why use LVM when partitions work? The answer is flexibility. Traditional partitions are rigid. You decide the size at creation, and changing it later is painful - often requiring backup, partition deletion, recreation, and restore. Growing a partition requires adjacent free space, which rarely exists. LVM changes everything. Volumes can be resized dynamically. Need more space for /var? Extend the logical volume and grow the filesystem - often without even unmounting. XFS filesystems can grow while mounted and in use. LVM volumes can span multiple physical disks. Start with one disk, add more later. The volume group pools all the space, and logical volumes can use space from any disk in the pool. You can add physical disks to a running system, add them to a volume group, and extend logical volumes - all without downtime. Snapshots create point-in-time copies for backup without stopping applications. RHEL uses LVM by default because these benefits matter in production. Understanding LVM isn't optional for serious Linux administration.
LVM Architecture
Filesystems / Swap:       /home (ext4)   /var (xfs)   swap
                               ↑
Logical Volumes (LV):     lv_home 20GB   lv_var 15GB   lv_swap 4GB
                               ↑
Volume Group (VG):        50GB pool (combined PVs)
                               ↑
Physical Volumes (PV):    /dev/sdb 30GB   /dev/sdc 20GB
                               ↑
Physical Disks / Partitions
LVM has a layered architecture. Understanding these layers is key to using LVM effectively. At the bottom are physical disks or partitions - the actual hardware. These are the raw storage you start with. Physical Volumes (PVs) are disks or partitions initialized for LVM. The pvcreate command writes LVM metadata to the device, marking it as LVM storage. You can use whole disks or partitions as PVs. Volume Groups (VGs) pool physical volumes together. A VG combines the space from one or more PVs into a single storage pool. This is where LVM's flexibility comes from - the VG abstracts away the physical layout. Logical Volumes (LVs) are carved from volume groups. They're what you actually use - format them with filesystems, mount them, store data. LVs look like regular block devices to the rest of the system. In the diagram, two physical disks (30GB and 20GB) are combined into a 50GB volume group, which is divided into three logical volumes for /home, /var, and swap. The logical volumes don't care which physical disk their data is on - the VG handles that.
Physical Extents
LVM divides physical volumes into fixed-size chunks called Physical Extents (PE). Logical volumes are allocated in units of extents.
Volume Group divided into Physical Extents (4MB each)
[LV1][LV1][LV1][LV1][LV1][LV2][LV2][LV2][free][free][free][free]
■ LV1 (5 PEs = 20MB)   ■ LV2 (3 PEs = 12MB)   □ Free (4 PEs = 16MB)
[root@server ~]# vgdisplay vg_data | grep "PE Size"
PE Size 4.00 MiB
[root@server ~]# vgcreate -s 16M vg_large /dev/sdb    # create a VG with 16MB extents
PE Size matters: Larger PEs reduce metadata but waste space with small LVs. Default 4MB works for most cases. For very large storage, consider 16MB or 32MB.
Physical Extents are the allocation units within LVM. When you create a volume group, all its physical volumes are divided into equal-sized chunks called Physical Extents. The default PE size is 4MB, which works well for most situations. When you create a logical volume, LVM allocates whole extents to it. A 100MB logical volume uses 25 PEs (at 4MB each). This is why LV sizes might be slightly adjusted - they're rounded to PE boundaries. The diagram shows a volume group's extents. Some are allocated to LV1, some to LV2, and some are free. Logical volumes don't need contiguous extents - LVM tracks which extents belong to which LV. PE size affects the maximum volume group size and allocation granularity. With 4MB PEs, you can't create a 2MB logical volume - the minimum is one PE. For very large storage systems (many terabytes), larger PE sizes reduce metadata overhead. The -s option to vgcreate sets PE size. Choose at creation time - it can't be changed later without recreating the VG.
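To see extent rounding in action, here is a short illustration (the LV name lv_tiny is hypothetical; the exact message wording can vary by LVM version). Requesting 10MB from a VG with 4MB extents gets rounded up to three extents:
[root@server ~]# lvcreate -n lv_tiny -L 10M vg_data
Rounding up size to full physical extent 12.00 MiB
Logical volume "lv_tiny" created.
[root@server ~]# lvs vg_data/lv_tiny
LV      VG      Attr       LSize
lv_tiny vg_data -wi-a----- 12.00m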
LVM Device Naming
/dev/dm-0                          # actual device-mapper block devices
/dev/dm-1
/dev/vgname/lvname                 # Traditional format
/dev/mapper/vgname-lvname          # Device mapper format
/dev/vg_storage/lv_data            # Example: traditional path
/dev/mapper/vg_storage-lv_data     # Example: device mapper path
[root@server ~]# ls -l /dev/vg_storage/
lrwxrwxrwx. 1 root root 7 Jan 20 10:00 lv_data -> ../dm-2
[root@server ~]# ls -l /dev/mapper/
lrwxrwxrwx. 1 root root 7 Jan 20 10:00 vg_storage-lv_data -> ../dm-2
[root@server ~]# mount /dev/vg_storage/lv_data /mnt/data
[root@server ~]# mount /dev/mapper/vg_storage-lv_data /mnt/data
Best practice: Use /dev/mapper/vgname-lvname format in /etc/fstab - it's consistent and clearly shows both VG and LV names.
LVM creates block devices for logical volumes with multiple access paths. Understanding the naming helps avoid confusion. The actual device is in /dev/dm-N (device mapper). These are the real block devices the kernel uses. But dm-0, dm-1 aren't meaningful - you can't tell which is which. Symbolic links provide friendly names. Two formats exist and both work. /dev/vgname/lvname is the traditional format - a directory per VG, a link per LV inside. /dev/mapper/vgname-lvname is the device mapper format - flat namespace with VG and LV separated by hyphen. Both link to the same dm-N device. You can use either format interchangeably. For /etc/fstab, the mapper format is often preferred because it's a flat namespace and clearly shows both VG and LV in one name. However, using UUID is even more reliable, as we'll see. lsblk shows the dm-N names but also shows the LV structure, making it easy to see what's what.
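A sketch of what lsblk typically shows for LVM (device names and sizes here are illustrative, following the examples in this module): logical volumes appear as children of their physical volumes, with TYPE lvm and the mapper-style name.
[root@server ~]# lsblk
NAME                   MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdb                      8:16   0  50G  0 disk
└─vg_storage-lv_data   253:2    0  20G  0 lvm  /data
sdc                      8:32   0  30G  0 disk
└─vg_storage-lv_logs   253:3    0  30G  0 lvm  /logs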
Viewing LVM Information
[root@server ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 rhel lvm2 a-- <99.00g 0
/dev/sdb vg_data lvm2 a-- 50.00g 10.00g
[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
rhel 1 2 0 wz--n- <99.00g 0
vg_data 1 2 0 wz--n- 50.00g 10.00g
[root@server ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta%
root rhel -wi-ao---- <95.00g
swap rhel -wi-ao---- 4.00g
lv_data vg_data -wi-a----- 40.00g
[root@server ~]# pvdisplay /dev/sdb
[root@server ~]# vgdisplay vg_data
[root@server ~]# lvdisplay /dev/vg_data/lv_data
[root@server ~]# lsblk
Command pattern: pvs/vgs/lvs for summary, pvdisplay/vgdisplay/lvdisplay for details. Same pattern: pv*, vg*, lv*.
LVM has consistent command patterns for viewing information. The short commands (pvs, vgs, lvs) show summaries in table format - great for quick overview. pvs shows physical volumes with their VG membership, size, and free space. vgs shows volume groups with PV count, LV count, total size, and free space. lvs shows logical volumes with their VG, attributes, and size. The display commands (pvdisplay, vgdisplay, lvdisplay) show detailed information about specific objects. Use these when you need full details like UUID, PE count, creation time. The pattern is consistent across LVM: pv* commands work on physical volumes, vg* on volume groups, lv* on logical volumes. This makes LVM easier to learn - once you know the pattern, you can guess commands. lsblk also shows LVM structure, displaying the relationship between physical devices, device mapper entries, and mount points. It's especially helpful for seeing the full picture.
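The reporting commands also accept -o to select columns and --units to control size units; a few examples using standard LVM report field names:
[root@server ~]# pvs -o +pv_used                              # add a column showing used space per PV
[root@server ~]# vgs -o vg_name,vg_size,vg_free --units g     # pick columns, report in gigabytes
[root@server ~]# lvs -o lv_name,vg_name,lv_size,devices       # show which PVs back each LV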
Creating Physical Volumes
[root@server ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdb 8:16 0 50G 0 disk
sdc 8:32 0 30G 0 disk
[root@server ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
[root@server ~]# pvcreate /dev/sdc
Physical volume "/dev/sdc" successfully created.
[root@server ~]# pvcreate /dev/sdb /dev/sdc    # or initialize both devices in one command
[root@server ~]# pvcreate /dev/sdd1            # partitions work too
[root@server ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb lvm2 --- 50.00g 50.00g
/dev/sdc lvm2 --- 30.00g 30.00g
Warning: pvcreate writes to the beginning of the device, destroying any existing data or partition table. Double-check the device name!
Creating physical volumes is the first step in building LVM storage. pvcreate initializes a disk or partition for use with LVM. It writes LVM metadata to the device, marking it as a physical volume. You can use whole disks or partitions. Whole disks are simpler - no need to partition first. Partitions give you flexibility to use part of a disk for LVM and part for other purposes. If using partitions, setting type 8e (Linux LVM) is recommended but not strictly required. LVM works regardless, but the type helps other tools identify the partition's purpose. After pvcreate, pvs shows the new physical volumes. Note the VG column is empty - they're not yet assigned to a volume group. The PFree column shows all space is free. Critical warning: pvcreate destroys any existing data. It overwrites the beginning of the device where partition tables and filesystem signatures live. Always verify you're working with the correct device. A typo could wipe the wrong disk.
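If you want a partition-based PV instead of a whole disk, one possible sequence (a sketch using parted; /dev/sdd is illustrative) creates a GPT partition, sets the LVM flag, and initializes it:
[root@server ~]# parted -s /dev/sdd mklabel gpt
[root@server ~]# parted -s /dev/sdd mkpart primary 1MiB 100%
[root@server ~]# parted -s /dev/sdd set 1 lvm on    # marks the partition type as Linux LVM
[root@server ~]# pvcreate /dev/sdd1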
Creating Volume Groups
[root@server ~]# vgcreate vg_data /dev/sdb                 # VG from a single PV
Volume group "vg_data" successfully created
[root@server ~]# vgcreate vg_storage /dev/sdb /dev/sdc     # or pool two PVs into one VG
Volume group "vg_storage" successfully created
[root@server ~]# vgcreate -s 16M vg_large /dev/sdb         # or set a custom 16MB PE size
[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg_storage 2 0 0 wz--n- 79.99g 79.99g
[root@server ~]# vgdisplay vg_storage
--- Volume group ---
VG Name vg_storage
System ID
Format lvm2
VG Size 79.99 GiB
PE Size 4.00 MiB
Total PE 20478
Alloc PE / Size 0 / 0
Free PE / Size 20478 / 79.99 GiB
VG UUID abc123-def4-5678-90ab-cdef12345678
Naming convention: Use descriptive VG names like vg_data, vg_mysql, vg_backup. The "vg_" prefix makes LVM objects easy to identify.
Volume groups pool physical volumes into a unified storage space. vgcreate creates a new volume group. Specify the VG name and one or more physical volumes to include. The VG name should be descriptive - you'll reference it when creating logical volumes. Common convention is vg_purpose or just a descriptive name. With multiple PVs, the VG combines their space. A VG with a 50GB and 30GB PV has 80GB total (minus small overhead for metadata). You can start with one PV and add more later. The -s option sets PE size. Default 4MB is fine for most uses. Consider larger PEs for very large storage to reduce metadata. vgdisplay shows detailed VG information including total size, PE size, total PE count, and how many are allocated vs free. This is useful for planning how to allocate space to logical volumes. The VG is now ready for creating logical volumes.
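Before carving logical volumes, it can be useful to confirm how many extents are free. The grep below simply filters the vgdisplay output shown above; the vgs field names are standard report fields:
[root@server ~]# vgdisplay vg_storage | grep -E "Total PE|Free"
Total PE 20478
Free PE / Size 20478 / 79.99 GiB
[root@server ~]# vgs -o +vg_extent_count,vg_free_count vg_storage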
Creating Logical Volumes
[root@server ~]# lvcreate -n lv_data -L 20G vg_storage          # absolute size
Logical volume "lv_data" created.
[root@server ~]# lvcreate -n lv_logs -l 50%FREE vg_storage      # half of the remaining free space
Logical volume "lv_logs" created.
[root@server ~]# lvcreate -n lv_backup -l 100%FREE vg_storage   # all remaining free space
[root@server ~]# lvcreate -n lv_small -l 100 vg_storage         # 100 extents (400MB at 4MB PEs)
[root@server ~]# lvs
LV VG Attr LSize
lv_data vg_storage -wi-a----- 20.00g
lv_logs vg_storage -wi-a----- 29.99g
[root@server ~]# ls -l /dev/vg_storage/lv_data
lrwxrwxrwx. 1 root root 7 Jan 20 10:00 /dev/vg_storage/lv_data -> ../dm-2
Size options: -L for absolute size (20G, 500M), -l for extents or percentage (100, 50%FREE, 100%VG).
Logical volumes are the usable storage units. lvcreate carves LVs from volume groups. The -n option names the logical volume. Use descriptive names like lv_data, lv_mysql, lv_home. Convention is lv_purpose. Size can be specified two ways. -L (uppercase) takes absolute sizes with units: -L 20G for 20 gigabytes, -L 500M for 500 megabytes. -l (lowercase) takes extent counts or percentages: -l 100 for 100 physical extents, -l 50%FREE for half of remaining free space, -l 100%VG for the entire VG. The percentage options are very useful. 100%FREE uses all remaining space - great for the last LV in a VG. 50%FREE leaves room for expansion. After creation, the LV appears as a block device. It's ready to be formatted with a filesystem. The device paths are /dev/vgname/lvname and /dev/mapper/vgname-lvname. Note: the LV doesn't yet have a filesystem - it's just allocated space. The next step is mkfs.
Filesystems on LVM
[root@server ~]# mkfs.xfs /dev/vg_storage/lv_data
meta-data=/dev/vg_storage/lv_data isize=512 agcount=4, agsize=1310720 blks
data = bsize=4096 blocks=5242880, imaxpct=25
...
[root@server ~]# mkfs.ext4 /dev/vg_storage/lv_logs
[root@server ~]# mkdir /data
[root@server ~]# mount /dev/vg_storage/lv_data /data
[root@server ~]# df -h /data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_storage-lv_data 20G 33M 20G 1% /data
[root@server ~]# blkid /dev/vg_storage/lv_data
/dev/vg_storage/lv_data: UUID="abc123-..." TYPE="xfs"
# /etc/fstab - either of these forms works:
/dev/mapper/vg_storage-lv_data /data xfs defaults 0 0
UUID=abc123-... /data xfs defaults 0 0
With the logical volume created, you format it with a filesystem just like any block device. mkfs.xfs creates an XFS filesystem - RHEL's default and recommendation. mkfs.ext4 creates ext4 if you need shrink capability. After formatting, create a mount point and mount the filesystem. The df command verifies the mount and shows available space. For persistence across reboots, add to /etc/fstab. You have three options for the device field: /dev/vgname/lvname path, /dev/mapper/vgname-lvname path, or UUID. All work correctly. UUID is most reliable because it's unique and never changes. The mapper path is also reliable for LVM because the VG and LV names persist. Either is acceptable for LVM volumes. The key advantage of LVM becomes apparent when you need more space - you can extend the LV and grow the filesystem without changing fstab entries or mount points.
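A short end-to-end sketch of making the mount persistent and verifying it (the echo line simply appends the fstab entry shown above; adjust paths as needed):
[root@server ~]# echo '/dev/mapper/vg_storage-lv_data /data xfs defaults 0 0' >> /etc/fstab
[root@server ~]# mount -a      # should return silently with no errors
[root@server ~]# findmnt /data
TARGET SOURCE                         FSTYPE OPTIONS
/data  /dev/mapper/vg_storage-lv_data xfs    rw,relatime,...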
Swap on LVM
[root@server ~]# lvcreate -n lv_swap -L 4G vg_storage
Logical volume "lv_swap" created.
[root@server ~]# mkswap /dev/vg_storage/lv_swap
Setting up swapspace version 1, size = 4 GiB (4294963200 bytes)
no label, UUID=swap-uuid-1234-5678-90ab-cdef
[root@server ~]# swapon /dev/vg_storage/lv_swap
[root@server ~]# swapon --show
NAME TYPE SIZE USED PRIO
/dev/dm-1 partition 4G 0B -2
/dev/mapper/vg_storage-lv_swap partition 4G 0B -3
# /etc/fstab entry for the swap LV:
/dev/mapper/vg_storage-lv_swap none swap defaults 0 0
[root@server ~]# swapoff /dev/vg_storage/lv_swap    # test the fstab entry:
[root@server ~]# swapon -a                          # activates all swap listed in fstab
[root@server ~]# swapon --show
LVM swap advantage: Need more swap? Extend the LV, run mkswap again (after swapoff), and swapon. Or create additional swap LVs.
Swap on LVM works just like swap on regular partitions, but with LVM's flexibility benefits. Create a logical volume sized for your swap needs. Common sizes are 2-8GB for servers, or equal to RAM if you need hibernation support. mkswap formats the LV for swap use, and swapon activates it. Add to /etc/fstab for persistence using the mapper path or UUID. The mount point field is "none" since swap isn't mounted to a directory. The LVM advantage: if you need more swap, you can extend the LV (after deactivating swap), run mkswap again, and reactivate. Or just create another swap LV - Linux supports multiple swap areas. RHEL's default installation creates swap on LVM for exactly this flexibility. The swap logical volume can be resized along with other storage needs.
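A sketch of growing an existing swap LV (the 8G target is illustrative), following the sequence described above - deactivate, extend, reformat, reactivate:
[root@server ~]# swapoff /dev/vg_storage/lv_swap
[root@server ~]# lvextend -L 8G /dev/vg_storage/lv_swap
[root@server ~]# mkswap /dev/vg_storage/lv_swap
[root@server ~]# swapon /dev/vg_storage/lv_swap
[root@server ~]# swapon --show    # verify the new size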
Extending Volume Groups
[root@server ~]# vgs vg_storage
VG #PV #LV #SN Attr VSize VFree
vg_storage 1 3 0 wz--n- 49.99g 5.99g
[root@server ~]# pvcreate /dev/sdd
Physical volume "/dev/sdd" successfully created.
[root@server ~]# vgextend vg_storage /dev/sdd
Volume group "vg_storage" successfully extended
[root@server ~]# vgs vg_storage
VG #PV #LV #SN Attr VSize VFree
vg_storage 2 3 0 wz--n- 99.99g 55.99g
[root@server ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdc vg_storage lvm2 a-- 49.99g 5.99g
/dev/sdd vg_storage lvm2 a-- 50.00g 50.00g
No downtime! vgextend adds space while the VG is in use. Logical volumes and filesystems continue running.
Extending volume groups is where LVM really shines. When you need more space, add a disk and extend the VG - no downtime required. First, prepare the new disk as a physical volume with pvcreate. Then vgextend adds it to an existing volume group. The VG's free space increases immediately. This works while the VG is in use. Mounted filesystems on LVs in this VG continue operating. There's no need to unmount, backup, or restart anything. In the example, vg_storage grew from 50GB to 100GB by adding a 50GB disk. The new free space (50GB) is available for extending existing LVs or creating new ones. This is the operational pattern for growing storage: identify a new disk, pvcreate it, vgextend to add it to the pool, then lvextend to allocate the space to specific logical volumes.
Extending Logical Volumes
[root@server ~]# lvs
LV VG Attr LSize
lv_data vg_storage -wi-ao---- 20.00g
[root@server ~]# vgs vg_storage
VG VSize VFree
vg_storage 99.99g 55.99g
[root@server ~]# lvextend -L +10G /dev/vg_storage/lv_data        # add 10GB to the current size
Size of logical volume vg_storage/lv_data changed from 20.00 GiB to 30.00 GiB.
Logical volume vg_storage/lv_data successfully resized.
[root@server ~]# lvextend -L 50G /dev/vg_storage/lv_data         # or set an absolute size
[root@server ~]# lvextend -l +100%FREE /dev/vg_storage/lv_data   # or use all free space in the VG
[root@server ~]# lvextend -l +50%LV /dev/vg_storage/lv_data      # or grow by 50% of the LV's size
[root@server ~]# lvs /dev/vg_storage/lv_data
LV VG Attr LSize
lv_data vg_storage -wi-ao---- 30.00g
Not done yet! lvextend only grows the LV. The filesystem still needs to be resized to use the new space.
lvextend grows a logical volume using free space from its volume group. Multiple sizing options exist. -L +SIZE adds that amount to the current size. +10G adds 10 gigabytes. -L SIZE (without +) sets the absolute size. -L 50G makes the LV exactly 50GB. -l +100%FREE uses all free space in the VG. Great when you want to allocate everything remaining. -l +50%LV adds 50% of the current LV size (useful for proportional growth). The LV can be extended while mounted and in use - no downtime required. However, extending the LV doesn't automatically extend the filesystem! The filesystem still sees the original size. You must also resize the filesystem to use the new space. This is a common mistake - extending the LV and wondering why df still shows the old size. The next slide covers resizing filesystems.
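To see the gotcha for yourself, compare lvs and df immediately after an lvextend without -r (output abbreviated from the examples above): the LV already reports the new size while the filesystem still reports the old one.
[root@server ~]# lvs vg_storage/lv_data
LV      VG         Attr       LSize
lv_data vg_storage -wi-ao---- 30.00g
[root@server ~]# df -h /data
Filesystem                     Size Used Avail Use% Mounted on
/dev/mapper/vg_storage-lv_data  20G  33M   20G   1% /data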
Resizing Filesystems
[root@server ~]# xfs_growfs /data                    # XFS: pass the mount point
meta-data=/dev/mapper/vg_storage-lv_data isize=512 agcount=4
data blocks changed from 5242880 to 7864320
[root@server ~]# resize2fs /dev/vg_storage/lv_logs   # ext4: pass the device
Resizing the filesystem on /dev/vg_storage/lv_logs to 7864320 (4k) blocks.
[root@server ~]# df -h /data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_storage-lv_data 30G 33M 30G 1% /data
[root@server ~]# lvextend -r -L +10G /dev/vg_storage/lv_data    # -r extends LV and filesystem together
Size of logical volume vg_storage/lv_data changed from 30.00 GiB to 40.00 GiB.
Logical volume vg_storage/lv_data successfully resized.
meta-data=/dev/mapper/vg_storage-lv_data ...
data blocks changed from 7864320 to 10485760
Use -r option! lvextend -r automatically resizes the filesystem. One command does both steps.
After extending the LV, you must resize the filesystem to use the new space. XFS and ext4 have different commands. For XFS: use xfs_growfs with the mount point (not the device). XFS can only grow, never shrink. It can grow while mounted and in use - no downtime. For ext4: use resize2fs with the device path. ext4 can both grow and shrink. Growing works online; shrinking requires unmounting first. The -r option to lvextend is the recommended approach. It automatically runs the appropriate resize command after extending the LV. One command, both operations, no chance of forgetting the filesystem resize. In the shortcut example, lvextend -r handles everything. It detects the filesystem type and runs xfs_growfs or resize2fs automatically. This is the best practice for extending LVM storage. The entire operation - extend VG if needed, extend LV, resize filesystem - can happen while the filesystem is mounted and in use. This is LVM's killer feature for production systems.
Reducing Logical Volumes
⚠ Danger Zone: Shrinking is risky and not always possible. XFS cannot be shrunk. ext4 requires unmounting. Always backup first!
[root@server ~]# df -Th /logs
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vg_storage-lv_logs ext4 30G 5G 23G 18% /logs
[root@server ~]# umount /logs                          # ext4 cannot shrink while mounted
[root@server ~]# e2fsck -f /dev/vg_storage/lv_logs     # force a filesystem check first
[root@server ~]# lvreduce -r -L 15G /dev/vg_storage/lv_logs
fsck from util-linux 2.37.4
...
resize2fs ...
Size of logical volume vg_storage/lv_logs changed from 30.00 GiB to 15.00 GiB.
[root@server ~]# mount /dev/vg_storage/lv_logs /logs
XFS cannot shrink! If you need to shrink XFS, you must backup, delete the LV, create smaller LV, create new XFS, restore data.
Shrinking is the opposite of extending and is much more dangerous. Not all filesystems support it, and data loss is possible if done incorrectly. XFS cannot be shrunk at all. It's a grow-only filesystem. If you need less space on XFS, your only option is backup, delete, recreate smaller, restore. ext4 can shrink, but requires careful handling. The filesystem must be unmounted first - no online shrink. The -r option to lvreduce handles both filesystem and LV together, which is safer than doing them separately. If you shrink the LV before the filesystem, you'll corrupt data! The command runs e2fsck, resize2fs, then lvreduce in the correct order. Even with -r, always backup before shrinking. A power failure during the operation, a bug, or running out of space for the shrunk filesystem's data could cause data loss. Generally, avoid shrinking if possible. It's safer to leave space allocated or move data to a new, properly-sized volume.
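For reference, a rough sketch of the XFS backup/recreate/restore workaround described above, using xfsdump and xfsrestore from the xfsdump package (labels, paths, and the 15G target size are illustrative; the dump file must live on a different filesystem):
[root@server ~]# xfsdump -L data -M dump0 -f /backup/data.dump /data    # back up the XFS filesystem
[root@server ~]# umount /data
[root@server ~]# lvremove /dev/vg_storage/lv_data
[root@server ~]# lvcreate -n lv_data -L 15G vg_storage                  # recreate at the smaller size
[root@server ~]# mkfs.xfs /dev/vg_storage/lv_data
[root@server ~]# mount /dev/vg_storage/lv_data /data
[root@server ~]# xfsrestore -f /backup/data.dump /data                  # restore the data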
Removing LVM Components
[root@server ~]# umount /data
[root@server ~]# lvremove /dev/vg_storage/lv_data
Do you really want to remove active logical volume vg_storage/lv_data? [y/n]: y
Logical volume "lv_data" successfully removed
[root@server ~]# vgremove vg_storage
Volume group "vg_storage" successfully removed
[root@server ~]# pvremove /dev/sdb /dev/sdc
Labels on physical volume "/dev/sdb" successfully wiped.
Labels on physical volume "/dev/sdc" successfully wiped.
[root@server ~]# pvmove /dev/sdb                  # migrate data off /dev/sdb to other PVs in the VG
[root@server ~]# vgreduce vg_storage /dev/sdb     # remove the now-empty PV from the VG
[root@server ~]# pvremove /dev/sdb                # wipe the LVM metadata
Order matters! Remove LVs before VG, VG before PVs. Removing components with data destroys that data.
Removing LVM components is the reverse of creation: first remove logical volumes, then the volume group, then physical volumes. Order matters! Before removing an LV, unmount any filesystem on it. lvremove deletes the logical volume and all its data permanently. vgremove removes a volume group. The VG must be empty - all LVs must be removed first. pvremove clears LVM metadata from physical volumes, returning them to raw disk state. Sometimes you need to remove a disk from a VG without destroying everything. pvmove migrates data off a PV to other PVs in the same VG. Then vgreduce removes that PV from the VG. Finally pvremove cleans up the PV metadata. This is useful when replacing a failing disk or removing a disk you need elsewhere. Always double-check before removing. lvremove destroys data immediately with no undo. Update /etc/fstab to remove entries for removed filesystems, or the system may fail to boot.
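A small sketch of the cleanup checks worth doing around a removal (the grep pattern assumes the names used above):
[root@server ~]# grep lv_data /etc/fstab    # remove or comment out any matching line
[root@server ~]# umount /data
[root@server ~]# lvremove /dev/vg_storage/lv_data
[root@server ~]# mount -a                   # confirm the remaining fstab entries still mount cleanly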
LVM Workflow
1
Create Physical Volumes: pvcreate /dev/sdb /dev/sdc
2
Create Volume Group: vgcreate vg_data /dev/sdb /dev/sdc
3
Create Logical Volume: lvcreate -n lv_files -L 50G vg_data
4
Create Filesystem: mkfs.xfs /dev/vg_data/lv_files
5
Mount: mkdir /files && mount /dev/vg_data/lv_files /files
6
Persist in fstab: /dev/mapper/vg_data-lv_files /files xfs defaults 0 0
Extend Workflow: pvcreate → vgextend → lvextend -r → Done!
This workflow summarizes the complete process for creating LVM storage. Step 1: Initialize disks as physical volumes with pvcreate. This marks them for LVM use. Step 2: Create a volume group to pool the physical volumes. Choose a descriptive name. Step 3: Create logical volumes from the VG. Use -L for absolute size or -l for percentages. Step 4: Create a filesystem on the LV. Use mkfs.xfs for RHEL default or mkfs.ext4 if you need shrink capability. Step 5: Create mount point and mount the filesystem. Step 6: Add to /etc/fstab for persistence. Use the mapper path or UUID. The extend workflow is simpler: prepare new disk as PV, add to VG with vgextend, grow LV and filesystem with lvextend -r. No unmounting, no downtime, no backup/restore cycle. This flexibility is why LVM is the standard for Linux server storage.
Troubleshooting
[root@server ~]# vgscan                              # LV not visible: scan for volume groups
[root@server ~]# vgchange -ay                        # activate all volume groups
[root@server ~]# vgs -o +lv_active                   # check which LVs are active
[root@server ~]# dmsetup ls                          # list device-mapper devices
[root@server ~]# udevadm trigger                     # regenerate /dev device nodes
[root@server ~]# dmesg | grep -i lvm                 # kernel messages about LVM
[root@server ~]# journalctl -b | grep -i lvm         # journal messages from this boot
[root@server ~]# xfs_growfs /mountpoint              # df shows old size: grow XFS...
[root@server ~]# resize2fs /dev/vg/lv                # ...or grow ext4
[root@server ~]# vgs                                 # no free space: check the VFree column
[root@server ~]# vgchange -ay --partial vg_name      # missing PV: emergency partial activation
Most common issue: Extended LV but forgot to resize filesystem. Always use lvextend -r to do both!
Common LVM issues and solutions. LV not visible: VGs might not be activated. vgscan searches for VGs, vgchange -ay activates them. This can happen after system recovery or when moving disks between systems. Device mapper issues: dmsetup ls shows device mapper devices. udevadm trigger regenerates device nodes. Sometimes a restart of lvm2 services helps. Filesystem size wrong: The most common issue! df shows old size after lvextend because the filesystem wasn't resized. Run xfs_growfs or resize2fs. Better: always use lvextend -r. No free space: lvextend fails if VG has no free extents. Check vgs output for VFree column. Solution is to add more physical volumes with vgextend. Missing PV: If a physical volume is missing (failed disk), the VG may not activate. vgchange -ay --partial activates with missing PVs, but data on the missing PV is inaccessible. This is for emergency recovery, not normal operation.
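Activation can also be done per-LV rather than for a whole VG; a short sketch (VG/LV names follow the earlier examples). In lvs output, an 'a' in the fifth position of the Attr field means the LV is active:
[root@server ~]# lvs -o lv_name,vg_name,lv_attr
[root@server ~]# lvchange -ay vg_storage/lv_data    # activate a single LV
[root@server ~]# lvchange -an vg_storage/lv_data    # deactivate it again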
Best Practices
✓ Do
Use descriptive VG and LV names
Leave some free space in VGs for growth
Use lvextend -r to resize both
Use mapper paths or UUIDs in fstab
Document your LVM layout
Test fstab changes with mount -a
Backup before any shrink operations
Use XFS for grow-only workloads
✗ Don't
Use all VG space immediately
Forget to resize filesystem after lvextend
Try to shrink XFS (impossible)
Shrink LV before shrinking filesystem
Remove PVs with data without pvmove
Ignore fstab after LVM changes
Use /dev/dm-X paths (not stable)
Skip verification after changes
Golden rule: Leave 10-20% free space in volume groups. It costs nothing and saves emergency late-night resizing.
LVM best practices from real-world experience. Use descriptive names. vg_database and lv_mysql_data are much better than vg0 and lv0. You'll thank yourself when troubleshooting at 2 AM. Leave free space in VGs. Don't allocate 100% immediately. That 10-20% reserve lets you extend critical filesystems quickly when needed. It costs nothing to keep space available. Always use lvextend -r. It handles both LV and filesystem in one safe operation. Forgetting the filesystem resize is the most common LVM mistake. For fstab, use /dev/mapper/vgname-lvname or UUID. Don't use /dev/dm-X - those numbers can change. Document your layout. A simple text file listing VGs, LVs, sizes, and purposes helps future you (or your replacement). Never shrink LV before filesystem - you'll corrupt data. Use pvmove before removing PVs to relocate data safely. XFS can't shrink - plan accordingly or use ext4 if shrinking might be needed.
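For the documentation point, one simple approach (the output path is arbitrary): capture the current layout to a dated text file, and let vgcfgbackup save a metadata copy (by default under /etc/lvm/backup):
[root@server ~]# { date; pvs; vgs; lvs -o +devices; } > /root/lvm-layout-$(date +%F).txt
[root@server ~]# vgcfgbackup vg_storage    # back up the VG metadata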
Key Takeaways
1
Architecture: Physical Volumes → Volume Groups → Logical Volumes → Filesystems. Layers provide flexibility.
2
Create: pvcreate, vgcreate, lvcreate. Then mkfs and mount. Add to fstab for persistence.
3
Extend: pvcreate new disk → vgextend → lvextend -r. No downtime, filesystem grows online.
4
Key commands: pvs/vgs/lvs for status. pvdisplay/vgdisplay/lvdisplay for details. lvextend -r for safe resize.
LAB EXERCISES
Create physical volumes from available disks
Create a volume group and verify with vgdisplay
Create a logical volume, format with XFS, mount and add to fstab
Extend the VG by adding another physical volume
Extend the LV and filesystem with lvextend -r
Create a swap LV and configure in fstab
Next: Managing Layered Storage with Stratis
Let's summarize the key LVM concepts and skills. The architecture is layered: physical volumes are pooled into volume groups, which are divided into logical volumes. This abstraction provides the flexibility that makes LVM valuable. For creating LVM storage, follow the workflow: pvcreate to initialize disks, vgcreate to build the pool, lvcreate to allocate space, mkfs to create filesystem, mount to use it, fstab to persist. For extending storage - LVM's killer feature - the workflow is: pvcreate new disk, vgextend to add it, lvextend -r to grow LV and filesystem together. This all works online without downtime. The command pattern is consistent: pv* for physical volumes, vg* for volume groups, lv* for logical volumes. Short commands (pvs, vgs, lvs) for summary, display commands for details. Your lab exercises give you hands-on practice with the complete lifecycle. LVM mastery is essential for RHCSA and for managing real-world Linux storage.