Linux Club · Storage & LVM Series

From vSphere Disk to
Permanent Linux Mount
— The Full LVM Workflow

A step-by-step walkthrough: attaching a disk in vSphere Client, creating a Physical Volume, Volume Group, and Logical Volume, formatting, mounting, and making it survive every reboot.

One of the most practical skills a Linux admin needs is managing storage — and LVM (Logical Volume Manager) is the backbone of production disk management. This weekend I walked my mentees through the entire flow, from clicking "Add Hard Disk" in vSphere all the way to a persistent mount. Here it is, documented end to end.

Whether you run VMs in vSphere or Azure, or on bare metal — the Linux side of this workflow is identical. The only thing that changes is how the disk gets attached. Once the OS sees the block device, LVM takes over and the rest is the same everywhere.

STEP 01 Add the Disk in vSphere Client

In vSphere Client, navigate to your VM → Actions → Edit Settings → Add New Device → Hard Disk. Set your disk size, choose Thick Provision Eager Zeroed for production (best performance, no lazy zeroing surprises), and click OK. No reboot needed — vSphere hot-adds the disk.

💡 Pro Tip

After adding the disk in vSphere, tell the Linux kernel to scan for new devices without rebooting. This is essential — the OS won't see the new disk until it rescans the SCSI bus.

bash — hot-add disk detection
# Trigger SCSI rescan — run on the VM after adding disk in vSphere
for host in /sys/class/scsi_host/host*; do
  echo "- - -" > "$host/scan"
done

# Confirm the new disk appeared
lsblk
# You should now see /dev/sdb (or sdc, sdd etc.)
STEP 02 Wipe Any Existing Signatures

Before LVM touches the disk, clear any existing partition table or filesystem signatures. vSphere and Azure sometimes pre-stamp disks. Skipping this step causes "device is partitioned" errors from pvcreate.

bash — wipe disk signatures
# Wipe any existing partition table or filesystem signature
wipefs -a /dev/sdb

# Confirm it's clean — should show no output
wipefs /dev/sdb
⚠ Important

Never run wipefs on a disk that has data you want to keep. This operation is destructive and immediate. Always confirm the device name with lsblk first.
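
Before wiping, it is worth cross-checking that /dev/sdb really is the new, empty disk. A quick hedged check (the device name is the one assumed throughout this walkthrough):

bash — confirm disk identity
```shell
# Show size, type, serial number and any mountpoint for the target disk.
# A freshly attached, empty disk should list no partitions and no mountpoint.
lsblk -o NAME,SIZE,TYPE,SERIAL,MOUNTPOINT /dev/sdb

# The size shown should match what you configured in vSphere
```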

STEP 03 Install LVM Tools
bash — Debian / Ubuntu
apt install -y lvm2    # RHEL-family: dnf install -y lvm2

# Verify pvcreate is available
pvcreate --version
STEP 04 Create the Physical Volume (PV)

This is the first LVM step. pvcreate stamps LVM metadata directly onto the raw block device. No partition table needed — since this disk is dedicated entirely to LVM, we write directly to /dev/sdb, not /dev/sdb1.

bash — physical volume
# Initialise the disk as an LVM Physical Volume
pvcreate /dev/sdb

# Confirm — should show /dev/sdb with size and free space
pvs
# or for more detail:
pvdisplay /dev/sdb
STEP 05 Create the Volume Group (VG)

A Volume Group is a pool of storage built from one or more Physical Volumes. Think of it as the raw storage container that LVM manages. You can add more PVs to a VG later to grow it online.

bash — volume group
# Create a Volume Group named vg_data from /dev/sdb
vgcreate vg_data /dev/sdb

# Confirm — shows VG name, size, free space
vgs
# or for more detail:
vgdisplay vg_data
STEP 06 Create the Logical Volume (LV)

A Logical Volume is carved from the Volume Group. This is the unit you actually format and mount. Using -l 100%FREE allocates every available extent in the VG to this LV — no wasted space.

bash — logical volume
# Create LV using 100% of available VG space
lvcreate -l 100%FREE -n lv_data vg_data

# Confirm — shows LV name, size, VG it belongs to
lvs
# or for more detail:
lvdisplay /dev/vg_data/lv_data
💡 Naming Convention

Name your VGs and LVs descriptively — vg_webcontent, lv_webcontent tells you immediately what lives there. In production with 10+ volumes, names like vg1 and lv1 will cause confusion at 2am.
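
If you inherit generically named volumes, LVM can rename them in place; a hedged sketch with illustrative names:

bash — rename VG and LV
```shell
# Rename a Volume Group, then a Logical Volume inside it.
# Mounted filesystems keep working (device-mapper keeps the mapping live),
# but update /etc/fstab to the new names before the next reboot.
vgrename vg1 vg_webcontent
lvrename vg_webcontent lv1 lv_webcontent
```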

STEP 07 Format the Logical Volume

Now put a filesystem on the LV. The filesystem goes on the Logical Volume — never on the raw disk or PV. ext4 is the safe default for most workloads on Debian-family systems; RHEL defaults to XFS, which works equally well here.

bash — format
# Format the LV with ext4 filesystem
mkfs.ext4 /dev/vg_data/lv_data

# You will see output like:
# mke2fs 1.47.0 (5-Feb-2023)
# Creating filesystem with ... blocks and ... inodes
# Writing superblocks and filesystem accounting information: done
STEP 08 Create the Mount Point

A mount point is simply an empty directory where the filesystem will be attached. It must exist before you can mount anything to it. The -p flag creates any missing parent directories and doesn't error if the directory already exists.

bash — mount point
# Create the directory that will serve as the mount point
mkdir -p /mnt/data

# Confirm it exists
ls -la /mnt/
STEP 09 Mount the Logical Volume
bash — mount
# Mount the LV to the mount point
mount /dev/vg_data/lv_data /mnt/data

# Confirm it is mounted and shows correct size
df -h /mnt/data
STEP 10 Make It Permanent with /etc/fstab

A manual mount disappears on the next reboot. To make it permanent, add it to /etc/fstab. The nofail option is critical in virtualised environments — it tells the system to continue booting even if this disk is unavailable, instead of dropping into emergency mode.

bash — persist in fstab
# Append the entry to /etc/fstab
echo '/dev/vg_data/lv_data  /mnt/data  ext4  defaults,nofail  0 2' >> /etc/fstab

# Verify the entry was written correctly
tail -1 /etc/fstab
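
The device-path entry above works because /dev/vg_data/lv_data is a stable name managed by LVM. If you prefer referencing the filesystem by UUID instead, a hedged alternative:

bash — fstab by UUID
```shell
# Print the UUID of the ext4 filesystem on the LV
blkid /dev/vg_data/lv_data

# Then use that UUID in fstab instead of the device path, e.g.:
# UUID=<uuid-from-blkid>  /mnt/data  ext4  defaults,nofail  0 2
```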
STEP 11 Test fstab Before Rebooting

This is the step most tutorials skip — and it is arguably the most important. A bad fstab entry can make a VM completely unreachable after a reboot. Always test with umount + mount -a before trusting the entry.

bash — verify fstab
# Unmount what we just manually mounted
umount /mnt/data

# Mount everything in fstab that isn't already mounted
# This is exactly what happens at boot
mount -a

# Confirm /mnt/data came back correctly
df -h /mnt/data
# /dev/mapper/vg_data-lv_data  ...G  ...M  ...G  1%  /mnt/data  ✓
⚠ Why This Matters

If mount -a returns an error, fix the fstab entry immediately. In Azure or vSphere, a VM that hangs at boot due to a bad fstab entry requires recovery via serial console or disk detach. The nofail option is your safety net — but testing first is your real protection.
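
On systems with util-linux 2.31 or newer, findmnt can lint the entire fstab without mounting anything; a hedged extra check alongside mount -a:

bash — lint fstab
```shell
# Parse and sanity-check every /etc/fstab entry without mounting anything
findmnt --verify

# Add --verbose for a per-entry breakdown of what was checked
findmnt --verify --verbose
```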


The Complete Flow at a Glance

vSphere Client
Add Hard Disk → hot-add to running VM → rescan SCSI bus → lsblk confirms /dev/sdb
Wipe
wipefs -a /dev/sdb — clears any pre-existing partition table or filesystem signature
Physical Volume
pvcreate /dev/sdb — stamps LVM metadata onto the raw disk. No partition needed.
Volume Group
vgcreate vg_data /dev/sdb — creates the storage pool. Can span multiple PVs later.
Logical Volume
lvcreate -l 100%FREE -n lv_data vg_data — carves the usable volume from the VG.
Format + Mount Point
mkfs.ext4 /dev/vg_data/lv_data then mkdir -p /mnt/data — filesystem on the LV, directory to attach it to.
Mount + Persist + Test
mount → write /etc/fstab → umount → mount -a → df -h confirms it survives reboot.

LVM gives you something raw partitions never could — flexibility after the fact. Need more space? Extend the VG by adding another disk, then extend the LV and resize the filesystem online. No downtime, no data migration, no drama. That's the power of the abstraction layer LVM puts between your physical disks and your filesystem.
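
That growth path looks like this in practice. A hedged sketch, assuming the new disk shows up as /dev/sdc (rescan the SCSI bus first, as in Step 01):

bash — grow the volume online
```shell
# Initialise the new disk and add it to the existing pool
pvcreate /dev/sdc
vgextend vg_data /dev/sdc

# Grow the LV into the new free extents and resize ext4 in one step
# (-r runs the filesystem resize for you; /mnt/data can stay mounted)
lvextend -r -l +100%FREE /dev/vg_data/lv_data

# Confirm the extra space is live
df -h /mnt/data
```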

If you are preparing for a Linux admin role or a DevOps position — master LVM. It comes up in every vSphere, Azure, and bare metal environment you will ever work in.

This weekend session was part of our ongoing Linux Club mentorship programme. If you want to be part of the next cohort, drop a comment or send me a message. We cover real production scenarios — not just theory.

#Linux #LVM #vSphere #DevOps #LinuxAdmin #SysAdmin #StorageManagement #LinuxClub #Mentorship #Debian #CloudInfrastructure #OpenSource