When setting up Proxmox VE 9, one of the first things administrators encounter is storage configuration. Storage is at the heart of virtualization: it determines where your VM disks, ISO images, templates, and backups reside. Proxmox provides a flexible storage model that allows you to integrate local disks, network-based storage, and advanced storage backends. To help new users get started quickly, Proxmox VE comes with a default storage setup after installation.

In this article, we’ll explore how default storage works in Proxmox VE 9, what types of storage are preconfigured, and how you can extend or optimize them for production workloads.


The Proxmox VE 9 Storage Model

Proxmox VE uses a unified storage framework that can manage different types of storage through a storage configuration file (/etc/pve/storage.cfg). Storage in Proxmox is not tied to a single hypervisor node; instead, it’s cluster-aware, allowing you to manage shared and local storage across nodes.

Each storage entry defines:

  • ID – a unique name for the storage.
  • Backend type – local directory, ZFS, LVM, Ceph, NFS, iSCSI, etc.
  • Content types – VM disks, container volumes, ISO images, templates, backups, etc.
  • Node assignment – which nodes in the cluster can access this storage.
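Put together, these four pieces form one entry in /etc/pve/storage.cfg. As a hypothetical illustration, an NFS storage restricted to two cluster nodes might look like this (the ID, server address, export path, and node names are all placeholders):

```
nfs: shared-iso
        server 192.168.1.50
        export /srv/proxmox
        content iso,vztmpl
        nodes pve1,pve2
```

The first token is the backend type, the name after the colon is the storage ID, and the indented lines are the backend-specific properties.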

Default Storage in Proxmox VE 9

After a fresh installation of Proxmox VE 9, you’ll typically see two default storage entries already configured:
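On a default installation, these two entries appear in /etc/pve/storage.cfg roughly as follows (the exact content list can vary with your installation choices):

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup,snippets

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
```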

1. local – Directory-Based Storage

  • Backend type: dir
  • Path: /var/lib/vz
  • Content types: ISO images, container templates, backups, snippets.
  • Scope: Local to the node only.

This is the general-purpose storage that Proxmox creates automatically. It is useful for:

  • Uploading ISOs for OS installations.
  • Storing LXC container templates.
  • Keeping backup files (VZDump).
  • Holding snippets (small scripts/configurations).

Note: It’s not intended for VM disk images by default, although you can enable it for that. Typically, VM disks should be placed on more robust storage like LVM, ZFS, or network-attached storage.
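If you do decide to allow VM disk images on local, you can extend its content types with pvesm, the Proxmox storage manager CLI. Note that --content replaces the existing list, so include the types you want to keep:

```
pvesm set local --content iso,vztmpl,backup,snippets,images
```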


2. local-lvm – LVM-Thin Storage

  • Backend type: lvmthin
  • Backing store: the data thin pool in the pve volume group, created during installation (there is no filesystem path; volumes are raw logical volumes).
  • Content types: Virtual machine disks and container root volumes.
  • Scope: Local to the node.

This storage is optimized for:

  • Hosting VM virtual disks (as raw logical volumes; the .qcow2 file format is only available on file-based storage such as dir).
  • Hosting LXC root filesystems.
  • Supporting thin provisioning (efficient disk space usage).

Proxmox creates a thin pool named data inside the pve volume group. This is where all VM and container volumes go by default. Thin provisioning means you can create disks larger than your physical storage, and space will only be used as needed.
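You can inspect the volume group and thin pool with standard LVM tools. The Data% column of lvs shows how much of the pool is actually allocated, which is worth monitoring: an exhausted thin pool leads to I/O errors in the guests using it.

```
# Show the pve volume group and its remaining free space
vgs pve

# Show the data thin pool and the VM/container volumes it backs
lvs pve
```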


Why These Defaults?

Proxmox VE sets up local and local-lvm because they:

  • Allow immediate deployment of VMs and containers without additional configuration.
  • Separate general storage (ISO, templates, backups) from virtual disks.
  • Provide a lightweight, flexible setup suitable for testing, labs, or small deployments.


Limitations of Default Storage

While the defaults are convenient, they may not fit production requirements:

  • No redundancy – both local and local-lvm are single-node, non-redundant storage backends. If the node fails, VMs on that storage are unavailable.
  • Scalability issues – local storage doesn’t scale across nodes in a cluster, and migrating guests between nodes means copying their disks unless shared storage is available.
  • Backups consume local disk – storing backups on the same disk as VM volumes can lead to space conflicts.

Best Practices for Using Default Storage

  1. Keep ISOs and templates on local – since these are small and non-critical.
  2. Use local-lvm only for non-critical workloads – avoid running production VMs if you don’t have redundancy.
  3. Offload backups – configure additional storage (e.g., NFS or CIFS share) dedicated to backups.
  4. Expand storage when needed – add ZFS, Ceph, or shared storage for high-availability clusters.
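As an example of point 3, a dedicated NFS backup storage can be added in one pvesm command; the storage ID, server address, and export path below are placeholders for your own NAS:

```
pvesm add nfs backup-nfs \
    --server 192.168.1.60 \
    --export /export/pve-backups \
    --content backup
```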

Extending Beyond the Defaults

Once you’re comfortable with Proxmox VE 9, you’ll likely want to expand storage:

  • Add a second disk for backups and configure it as dir storage.
  • Use ZFS for built-in redundancy and snapshots.
  • Deploy Ceph for distributed, highly available storage in a cluster.
  • Integrate NFS/SMB shares for central ISO repositories and backup locations.
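For the first option, adding a second disk for backups, a minimal sketch might look like this (assuming the new disk appears as /dev/sdb and you format it with ext4; adjust the device name and filesystem to your hardware):

```
# Format the disk, mount it persistently
mkfs.ext4 /dev/sdb
mkdir -p /mnt/backup
echo '/dev/sdb /mnt/backup ext4 defaults 0 2' >> /etc/fstab
mount /mnt/backup

# Register the mount point as directory storage for backups only;
# is_mountpoint makes Proxmox treat the storage as offline if unmounted
pvesm add dir backup-disk --path /mnt/backup --content backup --is_mountpoint 1
```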

Conclusion

The default storage configuration in Proxmox VE 9 – local (directory) and local-lvm (LVM-thin) – provides a quick and practical way to start deploying virtual machines and containers immediately after installation. While these defaults are convenient for labs and small environments, production workloads typically require additional storage backends with redundancy and scalability.

By understanding how the default storage works, you can make informed decisions on when to rely on it and when to expand to more advanced storage solutions.