A step-by-step walkthrough: attaching a disk in vSphere Client, creating a Physical Volume, Volume Group, and Logical Volume, formatting, mounting, and making it survive every reboot.
One of the most practical skills a Linux admin needs is managing storage — and LVM (Logical Volume Manager) is the backbone of production disk management. This weekend I walked my mentees through the entire flow, from clicking "Add Hard Disk" in vSphere all the way to a persistent mount. Here it is, documented end to end.
Whether you are running VMs in vSphere, Azure, or on bare metal — the Linux side of this workflow is identical. The only thing that changes is how the disk gets attached. Once the OS sees the block device, LVM takes over and the rest is the same everywhere.
In vSphere Client, navigate to your VM → Actions → Edit Settings → Add New Device → Hard Disk. Set your disk size, choose Thick Provision Eager Zeroed for production (best performance, no lazy zeroing surprises), and click OK. No reboot needed — vSphere hot-adds the disk.
After adding the disk in vSphere, tell the Linux kernel to scan for new devices without rebooting. This is essential — the OS won't see the new disk until it rescans the SCSI bus.
# Trigger SCSI rescan — run on the VM after adding disk in vSphere
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done

# Confirm the new disk appeared
lsblk
# You should now see /dev/sdb (or sdc, sdd, etc.)
Before LVM touches the disk, clear any existing partition table or filesystem signatures. vSphere and Azure sometimes pre-stamp disks. Skipping this step causes "device is partitioned" errors from pvcreate.
# Wipe any existing partition table or filesystem signature
wipefs -a /dev/sdb

# Confirm it's clean — should show no output
wipefs /dev/sdb
Never run wipefs on a disk that has data you want to keep. This operation is destructive and immediate. Always confirm the device name with lsblk first.
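To make that confirmation harder to skip, the wipe can be gated behind a quick mount check. A minimal sketch, assuming a Linux /proc/mounts; /dev/sdz is a placeholder device name, substitute whatever lsblk actually showed you:

```shell
# Refuse to wipe a disk that has anything mounted from it.
# /dev/sdz is a placeholder; replace with the device lsblk showed.
disk=/dev/sdz

if grep -q "^$disk" /proc/mounts; then
    echo "Refusing to wipe: $disk (or a partition on it) is mounted" >&2
    exit 1
fi
echo "No mounts found on $disk, safe to proceed with wipefs"
```

The `^$disk` anchor also catches mounted partitions like /dev/sdz1, not just the whole-disk device.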
# Install the LVM userspace tools (apt on Debian/Ubuntu; dnf install lvm2 on RHEL)
apt install -y lvm2

# Verify pvcreate is available
pvcreate --version
This is the first LVM step. pvcreate stamps LVM metadata directly onto the raw block device. No partition table needed — since this disk is dedicated entirely to LVM, we write directly to /dev/sdb, not /dev/sdb1.
# Initialise the disk as an LVM Physical Volume
pvcreate /dev/sdb

# Confirm — should show /dev/sdb with size and free space
pvs
# or for more detail:
pvdisplay /dev/sdb
A Volume Group is a pool of storage built from one or more Physical Volumes. Think of it as the raw storage container that LVM manages. You can add more PVs to a VG later to grow it online.
# Create a Volume Group named vg_data from /dev/sdb
vgcreate vg_data /dev/sdb

# Confirm — shows VG name, size, free space
vgs
# or for more detail:
vgdisplay vg_data
A Logical Volume is carved from the Volume Group. This is the unit you actually format and mount. Using -l 100%FREE allocates every available extent in the VG to this LV — no wasted space.
# Create LV using 100% of available VG space
lvcreate -l 100%FREE -n lv_data vg_data

# Confirm — shows LV name, size, VG it belongs to
lvs
# or for more detail:
lvdisplay /dev/vg_data/lv_data
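What 100%FREE resolves to is just extent arithmetic: LVM divides the VG into fixed-size physical extents (4 MiB by default), and lowercase -l counts in extents where uppercase -L counts in bytes. A sketch with assumed figures (a 50 GiB VG at the default extent size; check your real numbers with vgdisplay):

```shell
# Assumed figures for illustration; confirm yours with: vgdisplay vg_data
vg_size_mib=$((50 * 1024))   # 50 GiB VG
pe_size_mib=4                # default physical extent size
total_extents=$((vg_size_mib / pe_size_mib))

echo "Total extents in VG: $total_extents"
# -l 100%FREE allocates all of them; -l 50%FREE would allocate half:
echo "Extents at 50%FREE:  $((total_extents / 2))"
```

This is why 100%FREE never wastes space: it allocates every remaining extent exactly, with no rounding loss.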
Name your VGs and LVs descriptively — vg_webcontent, lv_webcontent tells you immediately what lives there. In production with 10+ volumes, names like vg1 and lv1 will cause confusion at 2am.
Now put a filesystem on the LV. The filesystem goes on the Logical Volume — never on the raw disk or PV. ext4 is the safe default for most workloads on Debian/RHEL systems.
# Format the LV with ext4 filesystem
mkfs.ext4 /dev/vg_data/lv_data

# You will see output like:
# mke2fs 1.47.0 (5-Feb-2023)
# Creating filesystem with ... blocks and ... inodes
# Writing superblocks and filesystem accounting information: done
A mount point is simply an empty directory where the filesystem will be attached. It must exist before you can mount anything to it. The -p flag creates parent directories silently if they don't exist and doesn't error if the directory already exists.
# Create the directory that will serve as the mount point
mkdir -p /mnt/data

# Confirm it exists
ls -la /mnt/
# Mount the LV to the mount point
mount /dev/vg_data/lv_data /mnt/data

# Confirm it is mounted and shows correct size
df -h /mnt/data
A manual mount disappears on the next reboot. To make it permanent, add it to /etc/fstab. The nofail option is critical in virtualised environments — it tells the system to continue booting even if this disk is unavailable, instead of dropping into emergency mode.
# Append the entry to /etc/fstab
echo '/dev/vg_data/lv_data /mnt/data ext4 defaults,nofail 0 2' >> /etc/fstab

# Verify the entry was written correctly
tail -1 /etc/fstab
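For reference, here are the six whitespace-separated fields of that entry, pulled apart with awk so each one is labelled:

```shell
entry='/dev/vg_data/lv_data /mnt/data ext4 defaults,nofail 0 2'

# Field meanings: 1=device 2=mount point 3=fs type 4=mount options
# 5=dump flag (0 = skip backups) 6=fsck order (2 = check after the root fs)
echo "$entry" | awk '{
    print "device:     " $1
    print "mountpoint: " $2
    print "fstype:     " $3
    print "options:    " $4
    print "dump:       " $5
    print "fsck pass:  " $6
}'
```

The fsck pass of 2 is deliberate: only the root filesystem should use 1, and data volumes get checked after it.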
This is the step most tutorials skip — and it is arguably the most important. A bad fstab entry can make a VM completely unreachable after a reboot. Always test with umount + mount -a before trusting the entry.
# Unmount what we just manually mounted
umount /mnt/data

# Mount everything in fstab that isn't already mounted
# This is exactly what happens at boot
mount -a

# Confirm /mnt/data came back correctly
df -h /mnt/data
# /dev/mapper/vg_data-lv_data  ...G  ...M  ...G  1% /mnt/data ✓
If mount -a returns an error, fix the fstab entry immediately. In Azure or vSphere, a VM that hangs at boot due to a bad fstab entry requires recovery via serial console or disk detach. The nofail option is your safety net — but testing first is your real protection.
The Complete Flow at a Glance
1. lsblk confirms /dev/sdb
2. wipefs -a /dev/sdb — clears any pre-existing partition table or filesystem signature
3. pvcreate /dev/sdb — stamps LVM metadata onto the raw disk. No partition needed.
4. vgcreate vg_data /dev/sdb — creates the storage pool. Can span multiple PVs later.
5. lvcreate -l 100%FREE -n lv_data vg_data — carves the usable volume from the VG.
6. mkfs.ext4 /dev/vg_data/lv_data then mkdir -p /mnt/data — filesystem on the LV, directory to attach it to.
7. mount → write /etc/fstab → umount → mount -a → df -h confirms it survives reboot.

LVM gives you something raw partitions never could — flexibility after the fact. Need more space? Extend the VG by adding another disk, then extend the LV and resize the filesystem online. No downtime, no data migration, no drama. That's the power of the abstraction layer LVM provides between your physical disks and your filesystem.
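That online-growth flow, sketched as a dry run. /dev/sdc is a hypothetical second disk, and the run helper only echoes each command; drop it to execute for real on a live system:

```shell
# Dry-run sketch of growing vg_data online. /dev/sdc is hypothetical.
run() { echo "+ $*"; }   # prints instead of executing; remove on a live system

run pvcreate /dev/sdc                                 # new disk becomes a PV
run vgextend vg_data /dev/sdc                         # VG grows by the new PV
run lvextend -l +100%FREE -r /dev/vg_data/lv_data     # -r also resizes the ext4 fs
run df -h /mnt/data                                   # confirm the extra space
```

The -r (--resizefs) flag on lvextend is what makes this a single-step, no-downtime grow: it runs the filesystem resize for you after extending the LV.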
If you are preparing for a Linux admin role or a DevOps position — master LVM. It comes up in every vSphere, Azure, and bare metal environment you will ever work in.
This weekend session was part of our ongoing Linux Club mentorship programme. If you want to be part of the next cohort, drop a comment or send me a message. We cover real production scenarios — not just theory.