When tuning virtual machines in Proxmox VE for performance, one setting has a surprisingly large impact but is often misunderstood: CPU Type.

Inside every VM configuration, Proxmox lets you choose a virtual CPU model — such as:

  • host

  • x86-64-v2-AES

  • kvm64

  • qemu64

  • and others

At first glance, these look like compatibility options. In reality, this choice directly affects:

  • VM performance

  • instruction set availability

  • crypto acceleration

  • database speed

  • AI and analytics workloads

  • live migration compatibility

  • cluster design strategy

In this article, we’ll go deep into the two most commonly debated options:

CPU Type = host
vs
CPU Type = x86-64-v2-AES

and explain exactly when to use each.


What “CPU Type” Means in Proxmox

A virtual machine does not automatically see the physical CPU model. Instead, the hypervisor presents a virtual CPU profile to the guest OS.

This profile defines:

  • which CPU instructions are available

  • which extensions are exposed

  • what optimizations are allowed

  • how portable the VM is across hosts

Think of it like a CPU compatibility contract between Proxmox and the guest OS.

Different profiles trade off:

  • performance

  • compatibility

  • migration safety
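
You can check which profile a given VM is currently using straight from the node's shell; the VM ID 100 below is purely illustrative:

qm config 100 | grep '^cpu:'

If no cpu line appears, the VM falls back to the built-in default model.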


CPU Type = host (CPU Passthrough Mode)

What It Does

When you select:

CPU Type: host

Proxmox exposes the actual physical CPU model and instruction set to the VM.

The guest sees nearly the same CPU capabilities as the bare metal host.

Example:

Physical CPU: AMD EPYC Genoa
VM sees:      AMD EPYC Genoa
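
In the VM configuration this is a single line, and it can also be set from the CLI; a minimal sketch, with VM ID 100 as a placeholder (the change applies the next time the VM is started):

# /etc/pve/qemu-server/100.conf
cpu: host

# equivalent CLI command
qm set 100 --cpu host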

What the VM Gets

With host mode, the VM receives:

  • full instruction set

  • AVX / AVX2 / AVX-512 (if available)

  • AES acceleration

  • SHA extensions

  • vector instructions

  • vendor-specific optimizations

  • cache and topology hints

  • microarchitecture tuning

Virtually nothing is artificially masked.
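
With a Linux guest you can verify that these features actually came through by looking for a few representative flags in /proc/cpuinfo; the flag list below is just a sample, not an exhaustive check:

grep -m1 -o -E 'aes|avx2|avx512f|sha_ni' /proc/cpuinfo | sort -u

Each flag printed is visible to software running inside the VM.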


Performance Impact

This is the highest-performance CPU mode available in Proxmox.

Workloads that benefit significantly:

  • databases

  • Ceph storage nodes

  • compression and encryption

  • monitoring systems

  • analytics engines

  • AI tooling

  • indexing workloads

  • automation agents

  • crypto operations

Performance gains vs generic CPU models commonly range:

5% → 25% faster

depending on workload type.

Vector-heavy and crypto-heavy tasks benefit the most.
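
One informal way to see the crypto gap for yourself is OpenSSL's built-in benchmark, run inside two otherwise identical guests (one with host, one with x86-64-v2-AES). Treat the numbers as rough indicators rather than a rigorous benchmark; the size of the gap depends heavily on which vector, SHA and carry-less-multiply extensions the guest can see:

openssl speed -evp aes-256-gcm
openssl speed -evp sha256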


Downsides of Host Mode

The tradeoff is portability.

Live migration requires the destination node to support all exposed CPU features.

If your cluster contains mixed CPU generations, migrations may fail.

Example failure scenario:

  • Node A = EPYC Genoa

  • Node B = EPYC Milan

  • VM created with host mode on Genoa

  • Migration to Milan fails due to missing flags

This is the primary risk of host CPU mode.
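
In practice the problem surfaces when the live migration is attempted; the VM ID and node name below are placeholders. With CPU type host and mismatched nodes, expect the command to abort with a CPU-feature error instead of completing:

qm migrate 100 nodeB --online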


CPU Type = x86-64-v2-AES (Standardized Baseline Mode)

What It Does

This mode exposes a standardized CPU feature baseline instead of the real hardware model.

It follows the published x86-64 microarchitecture feature level definitions (v1 through v4) used by modern Linux distributions and compilers.

x86-64-v2 includes:

  • SSE4.2

  • POPCNT

  • CMPXCHG16B

  • the rest of the modern v2 baseline (SSE3, SSSE3, SSE4.1, LAHF/SAHF)

  • AES-NI acceleration (added by the -AES suffix; plain x86-64-v2 does not include it)

But excludes newer instruction sets.
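
If you want to confirm that a node (or guest) satisfies this baseline, the corresponding flags can be read directly from /proc/cpuinfo; the pattern below covers the headline features plus the aes flag added by the -AES suffix, omitting the older SSE3/SSSE3 entries for brevity:

grep -m1 -o -E 'sse4_2|popcnt|cx16|aes' /proc/cpuinfo | sort -u

Roughly speaking, Intel Nehalem, AMD Bulldozer and anything newer meets the v2 baseline.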


What It Hides

Typically not exposed:

  • AVX2

  • AVX-512

  • newer vector instructions

  • some vendor optimizations

  • newer crypto extensions

  • some microarchitecture tuning

This makes it more portable — but slightly less powerful.


Performance Impact

Performance is still good — but not maximum.

Typical difference vs host mode:

~5% to ~20% slower

depending on workload.

Impact is larger for:

  • crypto workloads

  • compression

  • vector math

  • analytics

  • AI inference

  • indexing engines

Smaller for:

  • light application servers

  • web servers

  • simple services


Major Advantage: Migration Safety

This is where x86-64-v2-AES shines.

Because it exposes a portable CPU feature set, VMs can migrate across:

  • different CPU generations

  • mixed hardware clusters

  • rolling hardware upgrades

This makes it ideal for:

  • HA clusters

  • mixed-node environments

  • migration-heavy operations

  • cloud-style VM mobility


Head-to-Head Comparison

Factor            host                    x86-64-v2-AES
Performance       Highest                 Good
Instruction set   Full                    Baseline
AVX2              Yes                     Usually no
AES accel         Yes                     Yes
Crypto speed      Maximum                 Medium
Live migration    Risk across CPU gens    Safe
Mixed cluster     Risky                   Safe
Best for          Performance             Portability

Cluster Design Strategy

Use Host Mode When

  • cluster nodes use identical CPUs

  • performance matters most

  • workloads are compute-heavy

  • migrations are rare or controlled

  • storage/DB/AI workloads dominate

Ideal for:

  • Ceph nodes

  • databases

  • analytics

  • monitoring systems

  • AI tools

  • automation platforms


Use x86-64-v2-AES When

  • CPUs differ across nodes

  • frequent live migration is required

  • HA across hardware generations

  • rolling hardware upgrades expected

  • portability is more important than peak speed

Ideal for:

  • general app servers

  • web tiers

  • utility VMs

  • HA clusters with mixed hardware


A Practical Rule of Thumb

If your Proxmox cluster hardware is:

Homogeneous → choose host
Mixed → choose x86-64-v2-AES

Performance clusters → host
Mobility clusters → v2 baseline


How to Check If Host Mode Is Safe in Your Cluster

Run on each node:

lscpu | grep "Model name"

If models match exactly → host mode is generally safe cluster-wide.

If not → use standardized CPU type.
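
Identical model names are a good first check, but BIOS options and microcode levels can still hide individual flags. A stricter comparison is to hash the full flag list on every node and compare the results; nodes on the same kernel version with matching hardware and firmware should produce identical hashes:

grep -m1 '^flags' /proc/cpuinfo | md5sum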


Bonus: Why Not Use kvm64 or qemu64?

These are legacy compatibility models.

They:

  • hide modern CPU features

  • reduce performance

  • limit instruction sets

  • should be avoided for modern workloads

Use them only when compatibility with very old guest operating systems requires it.


Final Recommendation

For performance-focused Proxmox environments:

  • Prefer CPU Type = host

  • Disable ballooning for critical VMs

  • Avoid CPU limits

  • Use VirtIO everywhere

  • Enable NUMA where applicable
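
Pulled together, those settings map onto a handful of lines in the VM configuration. A minimal illustrative excerpt follows; the VM ID, core count, storage name and MAC address are placeholders, not a complete or recommended config:

# /etc/pve/qemu-server/100.conf (excerpt)
cpu: host
numa: 1
balloon: 0
cores: 8
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-0,iothread=1
net0: virtio=BC:24:11:AA:BB:CC,bridge=vmbr0
# no cpulimit entry, per "Avoid CPU limits" above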

For migration-heavy clusters:

  • Use x86-64-v2-AES

  • Standardize across nodes

  • Trade a bit of performance for flexibility


Choosing the right CPU type is one of the simplest changes that can yield real performance gains — or prevent painful migration failures later.

Design for your cluster’s reality, not defaults.