When you install Proxmox VE 9 with the default settings, the installer automatically partitions the disk and creates two primary storage areas:
- local – a directory at /var/lib/vz used for ISOs, templates, backups, and snippets.
- local-lvm – an LVM-thin pool called data inside the pve volume group, used for VM and container disks.
By default, Proxmox assigns most of the disk space to local-lvm. This makes sense if your primary use case is running virtual machines and containers. However, many administrators prefer to reduce the size of local-lvm in order to:
- Allocate more space to local for storing ISOs, backups, and templates.
- Repartition the system for ZFS pools, Ceph OSDs, or other custom storage.
- Keep VM disks on a different backend and use the freed-up space for OS and application data.
In this blog, we’ll go through safe methods to reduce local-lvm size on a Proxmox VE 9 default installation.
Step 1: Understand the Default Layout
After a fresh Proxmox VE 9 installation, your LVM setup typically looks like this:
lsblk
Output (example):
sda
├─sda1 EFI
├─sda2 boot
└─sda3 LVM (pve)
├─pve-root /
├─pve-swap swap
└─pve-data thin pool (local-lvm)
The pve-data thin pool is the local-lvm storage. By default, it takes up most of the free space in the volume group.
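You can inspect the pool and its size directly with lvs (the -a flag also lists the pool's hidden internal volumes):
# List logical volumes in the pve volume group
lvs -a pve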
Step 2: Backup Before Making Changes
Warning: Reducing LVM storage is destructive if not done properly. Always:
- Back up your Proxmox configuration and VMs.
- Move or delete any existing VMs/containers from local-lvm (see the example below).
- Move templates/ISOs/backups to external storage.
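As an example of the second point, a VM disk can be moved to another storage from the command line (the VM ID, disk name, and target storage below are placeholders; for containers, use pct move-volume instead):
# Move VM 100's scsi0 disk to storage "other-store" and delete the source copy
qm move-disk 100 scsi0 other-store --delete 1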
You can back up all VMs to another storage target with:
vzdump <VMID> --storage <backup-target>
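For example, to back up every guest on the node in one run (the storage name is a placeholder for your own backup target):
# Back up all guests on this node with snapshot mode and zstd compression
vzdump --all --storage backup-nfs --mode snapshot --compress zstd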
Step 3: Remove the local-lvm Storage Entry
- From the Proxmox GUI:
  - Go to Datacenter → Storage.
  - Select local-lvm.
  - Click Remove.
- Alternatively, edit the config file:
nano /etc/pve/storage.cfg
Remove the section:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
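If you prefer the CLI, the same entry can be removed with pvesm; this deletes only the storage definition, not the underlying logical volumes:
# Remove the storage definition (the thin pool itself is untouched)
pvesm remove local-lvm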
Step 4: Delete the Thin Pool
Once the storage entry is removed, you can safely delete the thin pool:
lvremove pve/data
Every thin pool has hidden data_tmeta and data_tdata sub-volumes; normally they are removed together with the pool. If leftovers remain visible after the removal (a rare failure case), delete them explicitly:
lvremove pve/data_tmeta
lvremove pve/data_tdata
Step 5: Reclaim the Free Space
Now, the space previously used by local-lvm is free inside the pve volume group. You can:
Option A: Grow the root Logical Volume
If you want more space for / and /var/lib/vz:
lvextend -l +100%FREE pve/root
resize2fs /dev/pve/root
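The resize2fs step assumes the default ext4 root filesystem; on an XFS root you would use xfs_growfs / instead. Either way, you can confirm the result:
# The root filesystem should now report the larger size
df -h /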
Option B: Create a New Volume for ISOs/Backups
You can create a dedicated LV and use it as additional directory storage:
lvcreate -L 200G -n vz pve
mkfs.ext4 /dev/pve/vz
mkdir /vz
mount /dev/pve/vz /vz
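Note that this mount does not survive a reboot. A minimal /etc/fstab entry, assuming the device and mount point created above:
# Make the mount persistent across reboots
echo '/dev/pve/vz /vz ext4 defaults 0 2' >> /etc/fstab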
Then add it to Proxmox storage config:
nano /etc/pve/storage.cfg
Add:
dir: local-ext
        path /vz
        content iso,backup,vztmpl
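Equivalently, the storage can be defined from the command line instead of editing storage.cfg:
# Register /vz as directory storage named local-ext
pvesm add dir local-ext --path /vz --content iso,backup,vztmpl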
Option C: Use the Freed Space for ZFS/Ceph
Instead of allocating to root, you can leave the space free for creating ZFS pools, Ceph OSDs, or another storage backend later.
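You can check how much unallocated space remains in the volume group at any time:
# VFree shows the space still unallocated in the pve volume group
vgs pve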
Step 6: Verify Changes
Check the updated layout:
lvs
vgs
Confirm that pve/data no longer exists and that the freed space has been reassigned.
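You can also confirm that Proxmox itself no longer lists the storage:
# local-lvm should no longer appear in the output
pvesm status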
Best Practices
- Plan storage before installation: Use the advanced installer options to customize partition sizes and avoid post-install changes.
- Avoid mixing VM disks and backups on the same volume — separate storage ensures performance and reliability.
- Use ZFS or Ceph for production workloads instead of relying solely on local-lvm.
- Document changes so future administrators know why local-lvm was removed.
Conclusion
On a Proxmox VE 9 default install, local-lvm consumes most of the available disk space. While convenient for running VMs quickly, many administrators prefer to shrink or remove local-lvm in favor of more flexible storage layouts.
By carefully removing the thin pool and reallocating space, you can better align Proxmox with your long-term infrastructure goals — whether that means larger ISO/backup storage, dedicated ZFS pools, or preparing for a Ceph cluster.