Vladimir Chavkov

Proxmox VE: Complete Open-Source Virtualization Platform Guide

Proxmox Virtual Environment (Proxmox VE) is an open-source server virtualization platform that combines two virtualization technologies: KVM (Kernel-based Virtual Machine) for full virtualization and LXC (Linux Containers) for lightweight container-based virtualization. This comprehensive guide covers everything you need to build production-grade virtualization infrastructure with Proxmox.

What is Proxmox VE?

Proxmox VE is a complete, open-source virtualization management solution that provides:

Key Features

  1. Dual Virtualization: KVM for VMs + LXC for containers
  2. Web-Based Management: Intuitive GUI for all operations
  3. High Availability: Built-in HA with automatic failover
  4. Clustering: Multi-node clusters with live migration
  5. Software-Defined Storage: Ceph, ZFS, and distributed storage
  6. Backup & Restore: Integrated backup solution
  7. No Vendor Lock-In: Based on standard Linux technologies

Proxmox vs. Other Virtualization Platforms

| Feature | Proxmox VE | VMware ESXi | Hyper-V | XCP-ng |
| --- | --- | --- | --- | --- |
| Cost | Free/Open Source | Commercial | Included with Windows | Free/Open Source |
| Web UI | ✅ Included | ⚠️ Requires vCenter | ⚠️ Separate | ✅ XO Lite |
| Containers | ✅ LXC Native | ❌ No | ❌ No | ❌ No |
| Ceph Integration | ✅ Native | ❌ No | ❌ No | ❌ No |
| Clustering | ✅ Built-in | ⚠️ Requires vCenter | ⚠️ Failover Cluster | ✅ Pools |
| License | AGPLv3 | Commercial | Microsoft | GPLv2 |

Architecture

Proxmox VE Stack

```
┌────────────────────────────────────────────────────────────┐
│                       Web Interface                        │
│                (https://proxmox-host:8006)                 │
└────────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────────┐
│                Proxmox VE Management Layer                 │
│                                                            │
│  • Cluster Manager (pmxcfs)                                │
│  • HA Manager                                              │
│  • Resource Scheduler                                      │
│  • Backup Manager                                          │
└────────────────────────────────────────────────────────────┘
          ┌─────────────────┴─────────────────┐
          ▼                                   ▼
┌───────────────────┐               ┌───────────────────┐
│     KVM/QEMU      │               │        LXC        │
│    (Full VMs)     │               │   (Containers)    │
│                   │               │                   │
│  • Windows        │               │  • Linux only     │
│  • Linux          │               │  • Lightweight    │
│  • Any OS         │               │  • Fast startup   │
└───────────────────┘               └───────────────────┘
┌────────────────────────────────────────────────────────────┐
│                       Storage Layer                        │
│                                                            │
│  • Local Storage (LVM, ZFS, Directory)                     │
│  • Network Storage (NFS, iSCSI, Ceph, GlusterFS)           │
│  • Software-Defined Storage (Ceph, ZFS replication)        │
└────────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────────┐
│                     Networking Layer                       │
│                                                            │
│  • Linux Bridge                                            │
│  • Open vSwitch                                            │
│  • VLAN support                                            │
│  • SDN (Software-Defined Networking)                       │
└────────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────────┐
│                Debian Linux Base (Bookworm)                │
└────────────────────────────────────────────────────────────┘
```

Installation

System Requirements

Minimum:

- 64-bit CPU with hardware virtualization (Intel VT-x / AMD-V)
- 2 GB RAM for the host itself, plus memory for guests
- Hard disk for the installation
- One network interface

Recommended Production:

- Multi-core CPU with VT-d/AMD-Vi (for PCI passthrough)
- ECC RAM sized for all guests, plus roughly 1 GB per TB of storage when using ZFS or Ceph
- Enterprise SSDs/NVMe with redundancy (ZFS mirror or hardware RAID with BBU)
- Redundant Gbit NICs, with 10 GbE for storage, cluster, and migration traffic

Installation Steps

  1. Download ISO: Get the latest installer ISO from proxmox.com

  2. Boot from ISO: Use USB/DVD or mount via IPMI

  3. Follow Installer:

    - Select target disk
    - Set root password
    - Configure network (static IP recommended)
    - Set hostname (FQDN format: pve1.example.com)
    - Select timezone
  4. Post-Installation Configuration:

```shell
# Update system
apt update && apt full-upgrade -y

# Remove enterprise repository (if not subscribed)
rm /etc/apt/sources.list.d/pve-enterprise.list

# Add no-subscription repository
cat > /etc/apt/sources.list.d/pve-no-subscription.list << 'EOF'
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
EOF

# Update again
apt update && apt dist-upgrade -y

# Install useful tools
apt install -y \
  vim \
  tmux \
  htop \
  iotop \
  iftop \
  ncdu \
  ethtool \
  smartmontools

# Reboot if kernel updated
reboot
```

Creating Virtual Machines

Using Web Interface

  1. Navigate to Datacenter → Node → Create VM
  2. Configure:
    • VM ID (unique number)
    • Name
    • OS type
    • ISO image
    • CPU, RAM, Disk
    • Network

Using Command Line

```shell
# Create VM with qm command
qm create 100 \
  --name web-server-01 \
  --memory 4096 \
  --cores 2 \
  --sockets 1 \
  --cpu host \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/debian-12.iso,media=cdrom \
  --boot order=scsi0 \
  --ostype l26 \
  --agent 1

# Start VM
qm start 100

# View status
qm status 100

# Access console
qm terminal 100

# Stop VM
qm stop 100

# Delete VM
qm destroy 100
```
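
The qm flags above compose well in scripts. As a sketch (the names, bridge, and storage below are assumptions carried over from the example), a loop that bulk-creates VMs with sequential IDs — dry-run by default, so it only prints the commands it would run:

```shell
# Sketch: bulk-create VMs with sequential IDs. DRY_RUN=1 (the default)
# prints the qm commands instead of executing them; set DRY_RUN=0 on a
# real Proxmox host. Names, bridge, and storage are assumptions.
DRY_RUN=${DRY_RUN:-1}

create_vm() {
  local id=$1 name=$2
  local cmd="qm create $id --name $name --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci --scsi0 local-lvm:32"
  if [ "$DRY_RUN" = 1 ]; then
    echo "$cmd"
  else
    $cmd
  fi
}

for i in 1 2 3; do
  create_vm $((100 + i)) "app-$(printf '%02d' "$i")"
done
```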

Cloud-Init Template

```shell
# Download cloud image
cd /var/lib/vz/template/iso
wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2

# Create template VM
qm create 9000 \
  --name debian-12-template \
  --memory 2048 \
  --cores 2 \
  --net0 virtio,bridge=vmbr0

# Import disk
qm importdisk 9000 debian-12-generic-amd64.qcow2 local-lvm

# Configure VM
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --boot order=scsi0
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --serial0 socket --vga serial0
qm set 9000 --agent enabled=1

# Configure cloud-init
qm set 9000 --ipconfig0 ip=dhcp
qm set 9000 --sshkeys ~/.ssh/id_rsa.pub
qm set 9000 --ciuser admin

# Convert to template
qm template 9000

# Clone from template
qm clone 9000 200 \
  --name web-server-01 \
  --full \
  --storage local-lvm

# Customize clone
qm set 200 --ipconfig0 ip=10.0.1.100/24,gw=10.0.1.1
qm set 200 --nameserver 8.8.8.8

# Start cloned VM
qm start 200
```
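
Each clone needs a unique --ipconfig0. One convenient convention (an assumption of this guide's example network, not a Proxmox feature) is to derive the address from the VMID:

```shell
# Sketch: map a VMID to a static IP so clones get predictable addresses.
# Assumes the 10.0.1.0/24 network used above and VMIDs in the 200-254 range.
vmid_to_ipconfig() {
  local vmid=$1
  echo "ip=10.0.1.$(( vmid - 100 ))/24,gw=10.0.1.1"
}

vmid_to_ipconfig 200   # ip=10.0.1.100/24,gw=10.0.1.1

# Hypothetical usage on a Proxmox host:
#   qm clone 9000 201 --name web-02 --full --storage local-lvm
#   qm set 201 --ipconfig0 "$(vmid_to_ipconfig 201)"
```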

Linux Containers (LXC)

Create Container

```shell
# Refresh and list available templates
pveam update
pveam available

# Download template
pveam download local debian-12-standard_12.0-1_amd64.tar.zst

# Create container
pct create 101 \
  local:vztmpl/debian-12-standard_12.0-1_amd64.tar.zst \
  --hostname web-container \
  --memory 2048 \
  --cores 2 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --storage local-lvm \
  --rootfs local-lvm:8 \
  --password 'SecurePassword123!' \
  --unprivileged 1 \
  --features nesting=1

# Start container
pct start 101

# Enter container
pct enter 101

# Container info
pct status 101
pct config 101

# Stop container
pct stop 101
```

Privileged vs Unprivileged Containers

```shell
# Unprivileged (recommended for security)
pct create 102 local:vztmpl/debian-12-standard_12.0-1_amd64.tar.zst \
  --unprivileged 1

# Privileged (more compatibility, less secure)
pct create 103 local:vztmpl/debian-12-standard_12.0-1_amd64.tar.zst \
  --unprivileged 0

# Enable nesting for Docker
pct set 102 --features nesting=1

# Mount host directory
pct set 102 --mp0 /mnt/data,mp=/data
```

Clustering

Create Cluster

```shell
# On first node
pvecm create production-cluster

# Check status
pvecm status

# Show cluster config
cat /etc/pve/corosync.conf
```

Join Cluster

```shell
# On the second node (ensure a unique hostname and IP, and no guests yet)
pvecm add 10.0.1.10   # IP of the first node

# Repeat on each subsequent node
pvecm add 10.0.1.10
```

Cluster Configuration

```shell
# View cluster nodes
pvecm nodes

# Expected votes and quorum
pvecm expected 3

# Remove node from cluster (run from a remaining node)
pvecm delnode node-name

# Regenerate and distribute cluster certificates
pvecm updatecerts
```

Quorum and Fencing

```shell
# Check quorum
pvecm status

# Set expected votes (for maintenance)
pvecm expected 2

# View quorum information
pvecm status | grep -i quorum
```
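
The quorum threshold behind these commands is simple majority arithmetic: a cluster of N voting nodes stays quorate only while floor(N/2)+1 votes are reachable, which is why two-node clusters need a QDevice or adjusted expected votes.

```shell
# Majority quorum: floor(N/2) + 1 votes required.
votes_needed() { echo $(( $1 / 2 + 1 )); }

votes_needed 2   # 2  (both nodes — hence a QDevice for 2-node clusters)
votes_needed 3   # 2  (tolerates one node failure)
votes_needed 5   # 3  (tolerates two node failures)
```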

High Availability

Configure HA

```shell
# Create HA group
ha-manager groupadd production \
  --nodes pve1,pve2,pve3 \
  --restricted 0 \
  --nofailback 0

# Add VM to HA
ha-manager add vm:100 \
  --group production \
  --max_restart 3 \
  --max_relocate 3 \
  --state started

# Add container to HA
ha-manager add ct:101 \
  --group production \
  --state started

# View HA status
ha-manager status

# Remove from HA
ha-manager remove vm:100
```

Fencing Configuration

```shell
# Configure watchdog (required for HA; Proxmox self-fences via watchdog)
echo "softdog" >> /etc/modules
modprobe softdog

# Verify watchdog
lsmod | grep dog

# Optional hardware fence device (IPMI example)
cat > /etc/pve/ha/fence.cfg << 'EOF'
device: ipmi-pve1
action stonith
type ipmi
host 10.0.1.100
user admin
password secret
EOF
```

Storage Configuration

Local Storage Types

ZFS

```shell
# Create ZFS pool (two mirrored vdevs)
zpool create -f tank \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde

# Enable compression
zfs set compression=lz4 tank

# Create dataset for VMs
zfs create tank/vm-storage

# Add to Proxmox
pvesm add zfspool local-zfs \
  --pool tank/vm-storage \
  --content images,rootdir

# ZFS snapshots
zfs snapshot tank/vm-storage@backup-$(date +%Y%m%d)
zfs list -t snapshot
```
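
Dated snapshots accumulate quickly. A small prune helper can keep only the newest few — this is a sketch that only echoes the destroy commands (remove the echo to apply); feed it creation-ordered names, e.g. from `zfs list -H -t snapshot -o name -s creation tank/vm-storage`:

```shell
# Keep the newest KEEP snapshots from an oldest-first list on stdin;
# echoes (rather than runs) the zfs destroy commands for safety.
prune_snapshots() {
  local keep=$1
  head -n -"$keep" | while read -r snap; do
    echo zfs destroy "$snap"
  done
}

# Demo on a fake oldest-first list:
printf '%s\n' tank@d1 tank@d2 tank@d3 tank@d4 | prune_snapshots 2
```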

LVM-Thin

```shell
# Create volume group
vgcreate vg-storage /dev/sdb /dev/sdc

# Create thin pool (data LV plus a separate metadata LV)
lvcreate -L 1.9T -n data vg-storage
lvcreate -L 100G -n metadata vg-storage
lvconvert --type thin-pool \
  --poolmetadata vg-storage/metadata \
  vg-storage/data

# Add to Proxmox
pvesm add lvmthin local-lvm-thin \
  --vgname vg-storage \
  --thinpool data \
  --content images,rootdir
```

Network Storage

NFS

```shell
# Add NFS storage
pvesm add nfs nfs-storage \
  --server 10.0.1.50 \
  --export /export/proxmox \
  --content images,iso,backup,vztmpl \
  --options vers=4.1

# Mount options for performance
pvesm set nfs-storage --options "vers=4.1,hard,intr,rsize=32768,wsize=32768"
```

iSCSI

```shell
# Add iSCSI target
pvesm add iscsi iscsi-storage \
  --portal 10.0.1.60 \
  --target iqn.2024-01.com.example:storage \
  --content images

# Or use iSCSI LUNs directly (bypasses the LVM layer)
pvesm add iscsidirect iscsi-direct \
  --portal 10.0.1.60 \
  --target iqn.2024-01.com.example:storage \
  --content images
```

Ceph

```shell
# Install Ceph on all nodes
pveceph install

# Initialize Ceph on first node
pveceph init --network 10.0.2.0/24

# Create monitors on each node
pveceph mon create

# Create OSDs (one per disk)
pveceph osd create /dev/sdc
pveceph osd create /dev/sdd
pveceph osd create /dev/sde

# Create CephFS metadata servers
pveceph mds create

# Create pools
pveceph pool create vm-storage --size 3 --min_size 2
pveceph pool create cephfs-data --size 3
pveceph pool create cephfs-metadata --size 3

# Create CephFS
pveceph fs create --pg_num 128 --add-storage

# Add Ceph storage to Proxmox
pvesm add rbd ceph-storage \
  --pool vm-storage \
  --content images \
  --krbd 1

# Check Ceph status
pveceph status
ceph -s
```
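
With size=3 replication, every object is stored three times, so usable capacity is roughly the raw capacity divided by the replica count (before Ceph's own overhead and the recommended fill headroom):

```shell
# Rough usable capacity for a replicated Ceph pool, in the same unit
# as the input (overhead and fill limits not accounted for).
usable_capacity() { echo $(( $1 / $2 )); }

usable_capacity 12000 3   # 4000  (12 TB raw at size=3 -> ~4 TB usable)
```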

Networking

Linux Bridge Configuration

/etc/network/interfaces

```
auto lo
iface lo inet loopback

# Management interface
auto eno1
iface eno1 inet static
    address 10.0.1.10/24
    gateway 10.0.1.1

# Bridge for VMs
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0

# VLAN-aware bridge
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno3
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 10-100
```

Apply the changes without a reboot:

```shell
ifreload -a
```

Bond Configuration

/etc/network/interfaces (bonding examples)

```
# Active-Backup
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup

auto vmbr0
iface vmbr0 inet static
    address 10.0.1.10/24
    gateway 10.0.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

# LACP (802.3ad)
auto bond1
iface bond1 inet manual
    bond-slaves eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
```
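
For a tagged management VLAN on top of the LACP bond, a fragment like the following can be appended (VLAN ID 10 and the address here are assumptions — adjust to your network):

```
# /etc/network/interfaces fragment — management IP on VLAN 10 over bond1
auto bond1.10
iface bond1.10 inet static
    address 10.0.10.10/24
```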

Software-Defined Networking (SDN)

```shell
# Create VXLAN zone
pvesh create /cluster/sdn/zones \
  --zone vxlan-zone \
  --type vxlan \
  --peers 10.0.1.11,10.0.1.12,10.0.1.13

# Create VNet
pvesh create /cluster/sdn/vnets \
  --vnet vnet100 \
  --zone vxlan-zone \
  --tag 100

# Apply SDN configuration
pvesh set /cluster/sdn
```

Backup and Restore

Backup Configuration

```shell
# Create backup via CLI
vzdump 100 \
  --storage backup-nfs \
  --mode snapshot \
  --compress zstd \
  --notes-template "Daily backup"

# Backup all VMs
vzdump --all \
  --storage backup-nfs \
  --mode snapshot \
  --compress zstd

# Backup specific VMs
vzdump 100,101,102 \
  --storage backup-nfs \
  --mode snapshot
```

Scheduled Backups

Configure via Web UI: Datacenter → Backup

Or via CLI:

/etc/pve/vzdump.cron

```
# Daily backup at 2 AM
0 2 * * * root vzdump --all --mode snapshot --storage backup-nfs --compress zstd --mailto admin@example.com
```
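
vzdump can prune old archives automatically via the storage's backup-retention settings; where that isn't available, a manual sketch (hypothetical directory layout, relying on the timestamp embedded in vzdump filenames) looks like this — demoed against a throwaway directory so it runs anywhere:

```shell
# Keep only the newest KEEP vzdump archives in a dump directory.
# vzdump filenames embed the timestamp, so reverse name sort = newest first.
prune_dumps() {
  local dir=$1 keep=$2
  ls -1 "$dir"/vzdump-*.vma.zst 2>/dev/null | sort -r \
    | tail -n +$(( keep + 1 )) | xargs -r rm --
}

# Demo with fake archives in a temp directory:
d=$(mktemp -d)
for i in 1 2 3 4 5; do
  touch "$d/vzdump-qemu-100-2026_02_0${i}-02_00_00.vma.zst"
done
prune_dumps "$d" 3
ls "$d" | wc -l   # 3
```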

Restore

```shell
# List backups
ls -lh /mnt/pve/backup-nfs/dump/

# Restore VM
qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-2026_02_10-02_00_00.vma.zst 100 \
  --storage local-lvm

# Restore to different VM ID
qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-2026_02_10-02_00_00.vma.zst 200

# Restore container
pct restore 101 /mnt/pve/backup-nfs/dump/vzdump-lxc-101-2026_02_10-02_00_00.tar.zst \
  --storage local-lvm
```

Live Migration

Prerequisites

- All nodes joined to the same cluster
- Shared storage for VM disks (or local-disk migration, which copies disks over the network)
- Matching or compatible CPU types across nodes (or a common CPU model such as kvm64)
- Ideally a dedicated, fast migration network

Migrate VM

```shell
# Online migration (the VM keeps running)
qm migrate 100 pve2 --online

# Offline migration
qm migrate 100 pve2

# With specific network
qm migrate 100 pve2 --online --migration_network 10.0.2.0/24

# Migrate container (containers are restarted during migration)
pct migrate 101 pve2 --restart
```

Monitoring and Management

Command-Line Monitoring

```shell
# Node status and resource usage
pvesh get /nodes/pve1/status

# VM list
qm list

# Container list
pct list

# Storage usage
pvesh get /storage

# Network statistics
pvesh get /nodes/pve1/network

# Real-time monitoring
watch -n 1 'qm list; echo ""; pct list'
```
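
The output of qm list is column-aligned text, so it pipes cleanly into awk. A sketch that counts running guests — demoed on a captured sample so it runs anywhere; on a host, pipe live `qm list` output in instead:

```shell
# Count guests whose STATUS column reads "running".
count_running() { awk 'NR > 1 && $3 == "running"' | wc -l; }

# Sample qm-list-style output (captured, not live):
sample='      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 web-server-01        running       4096           32.00 1234
       101 db-server-01         stopped       8192           64.00 0'

printf '%s\n' "$sample" | count_running   # 1
```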

Prometheus Exporter

```shell
# Install Proxmox VE exporter
wget https://github.com/prometheus-pve/prometheus-pve-exporter/releases/download/v3.3.4/prometheus-pve-exporter_3.3.4_all.deb
dpkg -i prometheus-pve-exporter_3.3.4_all.deb

# Configure
cat > /etc/prometheus-pve-exporter.yml << 'EOF'
default:
  user: monitoring@pve
  password: secret_password
  verify_ssl: false
EOF

# Start service
systemctl enable --now prometheus-pve-exporter

# Exporter runs on port 9221
curl http://localhost:9221/pve?target=pve1
```

Security Best Practices

Firewall Configuration

```shell
# Enable the datacenter firewall
pvesh set /cluster/firewall/options --enable 1

# Datacenter firewall: define a management security group
cat > /etc/pve/firewall/cluster.fw << 'EOF'
[OPTIONS]
enable: 1

[group management]
IN ACCEPT -source 10.0.1.0/24 -p tcp -dport 8006
IN ACCEPT -source 10.0.1.0/24 -p tcp -dport 22
EOF

# Node-specific firewall: apply the group, then drop other web UI access
cat > /etc/pve/nodes/pve1/host.fw << 'EOF'
[OPTIONS]
enable: 1

[RULES]
GROUP management
IN DROP -p tcp -dport 8006
EOF
```

Two-Factor Authentication

```shell
# Optional: TOTP for SSH/console logins on the node itself
apt install libpam-google-authenticator
google-authenticator

# For the Proxmox web UI, enable TOTP under:
# Datacenter → Permissions → Two Factor
```

SSL Certificate

```shell
# Using Let's Encrypt with the Cloudflare DNS plugin
apt install python3-certbot-dns-cloudflare

# Configure credentials
cat > /root/.cloudflare.ini << 'EOF'
dns_cloudflare_api_token = your-api-token
EOF
chmod 600 /root/.cloudflare.ini

# Get certificate
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.cloudflare.ini \
  -d pve1.example.com

# Install certificate
pvenode cert set \
  /etc/letsencrypt/live/pve1.example.com/fullchain.pem \
  /etc/letsencrypt/live/pve1.example.com/privkey.pem

# Restart proxy
systemctl restart pveproxy
```

Performance Tuning

CPU Configuration

```shell
# Set CPU governor to performance
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Make persistent
apt install cpufrequtils
echo 'GOVERNOR="performance"' > /etc/default/cpufrequtils
systemctl restart cpufrequtils

# Disable CPU C-states for lower latency
# (add to the kernel command line: processor.max_cstate=1 intel_idle.max_cstate=0)
```

Memory Configuration

```shell
# Enable KSM (Kernel Same-page Merging)
systemctl enable --now ksmtuned

# Adjust swappiness
echo "vm.swappiness=10" >> /etc/sysctl.conf
sysctl -p
```
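
KSM reports its effect in 4 KiB pages under /sys/kernel/mm/ksm/; converting pages_sharing to MiB shows the memory actually being saved:

```shell
# Convert a KSM page count (4 KiB pages) to MiB.
pages_to_mib() { echo $(( $1 * 4 / 1024 )); }

pages_to_mib 262144   # 1024  (262144 shared pages = 1 GiB saved)

# On a host: pages_to_mib "$(cat /sys/kernel/mm/ksm/pages_sharing)"
```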

I/O Scheduler

```shell
# Set to none for NVMe
echo none > /sys/block/nvme0n1/queue/scheduler

# Set to mq-deadline for SSDs
echo mq-deadline > /sys/block/sda/queue/scheduler

# Make persistent with a udev rule
cat > /etc/udev/rules.d/60-scheduler.rules << 'EOF'
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
EOF
```
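
The active scheduler is the bracketed entry in each queue/scheduler file; a small helper extracts it (demoed on a sample string here — point it at /sys/block/*/queue/scheduler on a host):

```shell
# Extract the bracketed (active) scheduler from a queue/scheduler line.
active_sched() { grep -o '\[[^]]*\]' | tr -d '[]'; }

echo '[none] mq-deadline kyber bfq' | active_sched   # none

# On a host:
#   for f in /sys/block/*/queue/scheduler; do
#     printf '%s: %s\n' "$f" "$(active_sched < "$f")"
#   done
```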

Production Checklist

Infrastructure

- Three or more nodes so the cluster keeps quorum during a node failure
- Redundant power and bonded network uplinks
- Separate networks for corosync, storage, and migration traffic

Configuration

- Static IPs and FQDN hostnames on every node
- Correct APT repository (enterprise or no-subscription) and fully updated hosts
- Watchdog fencing and HA groups configured for critical guests

Storage

- Redundant VM storage (ZFS mirror/RAIDZ, Ceph size=3/min_size=2, or equivalent)
- Backup target separate from the primary VM storage
- Scheduled backups with periodically tested restores

Monitoring

- Metrics exported from every node (e.g. the Prometheus PVE exporter)
- Alerts on quorum loss, storage capacity, Ceph health, and backup failures

Security

- Datacenter and node firewalls enabled with a management security group
- Two-factor authentication for administrative users
- Valid TLS certificates on the web UI (port 8006)

Conclusion

Proxmox VE provides an enterprise-grade virtualization platform with the benefits of open-source software. Its combination of KVM and LXC, integrated clustering, HA capabilities, and comprehensive storage options makes it an excellent choice for organizations seeking a powerful, cost-effective alternative to proprietary virtualization platforms.

The platform’s mature ecosystem, active community, and professional support options ensure that Proxmox VE can meet the demands of production environments while maintaining the flexibility and transparency of open-source software.


Master virtualization technologies including Proxmox VE with our infrastructure training programs. Contact us for customized training designed for your team’s needs.

