Proxmox VE 9 continues to redefine open-source virtualization by integrating ZFS 2.3.3, the latest and most powerful version of the Zettabyte File System. Whether you’re building a small testing environment or an enterprise virtualization cluster, ZFS provides the foundation for reliable, redundant, and high-performance storage.
In this post, we’ll explore all ZFS RAID configurations available in Proxmox VE 9, including the newly supported dRAID, dRAID2, and dRAID3 layouts.
We’ll cover how they work, where to use them, and include practical examples and recommendations for production use.
Why Choose ZFS in Proxmox VE 9
ZFS isn’t just a file system — it’s a complete storage platform that includes:
Software RAID management
Snapshots and clones
Compression and deduplication
End-to-end checksumming
Data replication
Self-healing capabilities
Because ZFS is integrated directly into Proxmox VE, you can configure it during installation or through the GUI — no extra tools or plugins required.
All ZFS RAID Levels Available in Proxmox VE 9
Proxmox VE 9 (based on Debian 13 with OpenZFS 2.3.3) supports the following RAID configurations:
1. RAID0 (Striping)
Description:
Distributes data evenly across all drives with no redundancy. Highest speed, lowest reliability.
Command Example:
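A minimal sketch of creating a striped pool (the pool name tank and the device names are placeholders; in production, prefer stable /dev/disk/by-id paths):

```shell
# Stripe data across two disks: full capacity, but losing either disk loses the pool
zpool create tank /dev/sda /dev/sdb
```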
Use Case:
Benchmarking, testing, temporary workloads
Not for production
2. Mirror (Equivalent to RAID1)
Description:
Duplicates data across two or more disks. Survives one disk failure per pair.
Command Example:
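For example (placeholder pool and device names):

```shell
# Two-way mirror: every block is written to both disks
zpool create tank mirror /dev/sda /dev/sdb
```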
Use Case:
High IOPS workloads, virtual machines, databases
Fastest resilver/rebuild
50% usable space
3. RAID10 (Striped Mirrors)
Description:
Combines mirrors and striping for speed and redundancy.
Requires at least 4 disks (two mirrors).
Command Example:
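For example, striping two mirrored pairs (placeholder device names):

```shell
# RAID10 layout: two mirror vdevs striped together
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
```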
Use Case:
Excellent for production VM clusters
Balances redundancy and performance
4. RAIDZ1 (Single Parity)
Description:
The ZFS analogue of RAID 5: one disk's worth of parity, distributed across the vdev, so the pool survives a single disk failure.
Command Example:
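A minimal example with three disks (placeholder names):

```shell
# RAIDZ1: one disk's worth of parity, survives a single disk failure
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
```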
Use Case:
Backup or low-IO storage
Not ideal for large disks (>10 TB), where long resilver times increase the risk of a second failure during rebuild
5. RAIDZ2 (Double Parity)
Description:
The ZFS analogue of RAID 6: two disks' worth of parity, surviving two simultaneous disk failures.
Command Example:
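For example, with four disks (placeholder names):

```shell
# RAIDZ2: two disks' worth of parity, survives any two simultaneous failures
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```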
Use Case:
Recommended for Proxmox production clusters
Excellent balance of redundancy and efficiency
6. RAIDZ3 (Triple Parity)
Description:
Three disks' worth of parity, protecting against three simultaneous disk failures.
Command Example:
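For example, with five disks (placeholder names):

```shell
# RAIDZ3: three disks' worth of parity, survives any three simultaneous failures
zpool create tank raidz3 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
```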
Use Case:
Mission-critical archival or backup storage
Very large disk arrays
7. dRAID, dRAID2, and dRAID3 (Declustered RAID)
Description:
Introduced in OpenZFS 2.1 and available in Proxmox VE 9 via OpenZFS 2.3.3.
Declustered RAID distributes both data and parity across all disks in a way that accelerates rebuilds (resilvering) and improves space efficiency.
Variants:
dRAID (Single Parity) — like RAIDZ1
dRAID2 (Double Parity) — like RAIDZ2
dRAID3 (Triple Parity) — like RAIDZ3
Example Command:
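A sketch of a dRAID2 pool with 8 data disks per redundancy group and 1 distributed spare, built from 11 disks (placeholder device names):

```shell
# draid2:8d:1s = double parity, 8 data disks per group, 1 distributed spare
# 8 data + 2 parity + 1 spare = 11 child disks in the vdev
zpool create tank draid2:8d:1s /dev/sda /dev/sdb /dev/sdc /dev/sdd \
    /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk
```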
Where (for a layout such as draid2:8d:1s):
draid2 → double parity (like RAIDZ2)
8d → eight data disks per redundancy group
1s → one distributed hot spare
An optional c suffix (for example 11c) pins the total number of child disks in the vdev.
Use Case:
Large pools with many disks (≥12)
Environments needing faster rebuilds and better balancing
Enterprise backup and archive nodes
Pros:
Faster resilvering after disk replacement
Uniform I/O distribution
Great for modern high-density storage
Cons:
More complex to design and tune
Limited GUI support (CLI setup recommended)
ZFS RAID Comparison Table (Including dRAID)
RAID Type | Parity / Redundancy | Disk Failure Tolerance | Storage Efficiency | Performance | Recommended Use |
---|---|---|---|---|---|
RAID0 (Stripe) | None | 0 | 100% | Fastest | Testing only |
Mirror (RAID1) | Copy | 1 per pair | 50% | Excellent | VMs, databases |
RAID10 (Striped Mirrors) | Copy + Stripe | 1 per pair | 50% | Excellent | Mixed VM workloads |
RAIDZ1 | Single parity | 1 | ~80% | Moderate | Backup or light workloads |
RAIDZ2 | Double parity | 2 | ~66% | Good | Production and storage nodes |
RAIDZ3 | Triple parity | 3 | ~60% | Moderate | Archival / critical storage |
dRAID (Declustered Single Parity) | Single parity | 1 | ~80–85% | Good | Large arrays, faster rebuild |
dRAID2 (Declustered Double Parity) | Double parity | 2 | ~66–75% | Good | Enterprise clusters |
dRAID3 (Declustered Triple Parity) | Triple parity | 3 | ~60–70% | Moderate | Mission-critical, large nodes |
Which RAID Type Should You Choose for Proxmox VE 9?
Use Case | Recommended ZFS RAID |
---|---|
High-performance VM workloads | Mirror or RAID10 |
Balanced redundancy and capacity | RAIDZ2 |
Mission-critical data or backups | RAIDZ3 or dRAID3 |
Large storage arrays (12+ disks) | dRAID2 or dRAID3 |
Backup nodes or archive servers | RAIDZ2 / RAIDZ3 |
Temporary workloads / testing | RAID0 |
Best Practices for ZFS on Proxmox VE 9
Always use ECC RAM for maximum data protection.
Prefer HBA/JBOD mode (no hardware RAID controller).
Run regular scrubs:
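For example (pool name tank is a placeholder; on Debian-based systems the zfsutils-linux package also ships a monthly scrub cron job by default):

```shell
# Verify all checksums and repair any silent corruption from redundant copies
zpool scrub tank
# Check scrub progress and results
zpool status tank
```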
Enable LZ4 compression by default:
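For example:

```shell
# LZ4 is fast enough to leave enabled everywhere; child datasets inherit it
zfs set compression=lz4 tank
```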
Add SLOG (ZIL) and L2ARC devices for performance gains:
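For example, assuming two spare NVMe devices (names are placeholders; note that a SLOG only accelerates synchronous writes and should ideally be mirrored):

```shell
# Add a dedicated SLOG device to lower synchronous write latency
zpool add tank log /dev/nvme0n1
# Add an L2ARC device as a second-level read cache
zpool add tank cache /dev/nvme1n1
```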
New in Proxmox VE 9 — RAIDZ Expansion
Proxmox VE 9 with ZFS 2.3.3 introduces RAIDZ expansion, allowing administrators to add disks to existing RAIDZ vdevs. This long-awaited feature simplifies capacity scaling — especially useful for production clusters that can’t afford downtime for pool rebuilds.
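As a sketch, expanding an existing RAIDZ vdev uses zpool attach (the vdev name raidz1-0 and the device name are placeholders; check zpool status for the actual vdev name in your pool):

```shell
# Add one disk to an existing RAIDZ vdev; data is reflowed online
zpool attach tank raidz1-0 /dev/sdl
# Watch the expansion progress
zpool status tank
```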
Conclusion
ZFS in Proxmox VE 9 gives you the freedom to build enterprise-grade storage on your own terms.
Whether you prefer the simplicity of mirrors, the resilience of RAIDZ, or the scalability of the new dRAID models, Proxmox VE 9 supports it all — with no vendor lock-in.
Recommended defaults for most environments:
RAIDZ2 — best for general-purpose production clusters
Mirror or RAID10 — best for high-IOPS VM workloads
dRAID2/dRAID3 — best for large-scale, enterprise storage arrays
With Proxmox VE 9 and OpenZFS 2.3, you get a future-ready virtualization stack that rivals — and often exceeds — commercial hyperconverged systems in performance, reliability, and flexibility.