Proxmox VE: Complete Open-Source Virtualization Platform Guide
Proxmox Virtual Environment (Proxmox VE) is an open-source server virtualization platform that combines two virtualization technologies: KVM (Kernel-based Virtual Machine) for full virtualization and LXC (Linux Containers) for lightweight container-based virtualization. This comprehensive guide covers everything you need to build production-grade virtualization infrastructure with Proxmox.
What is Proxmox VE?
Proxmox VE is a complete, open-source virtualization management solution that provides:
Key Features
- Dual Virtualization: KVM for VMs + LXC for containers
- Web-Based Management: Intuitive GUI for all operations
- High Availability: Built-in HA with automatic failover
- Clustering: Multi-node clusters with live migration
- Software-Defined Storage: Ceph, ZFS, and distributed storage
- Backup & Restore: Integrated backup solution
- No Vendor Lock-In: Based on standard Linux technologies
Proxmox vs. Other Virtualization Platforms
| Feature | Proxmox VE | VMware ESXi | Hyper-V | XCP-ng |
|---|---|---|---|---|
| Cost | Free/Open Source | Commercial | Included with Windows | Free/Open Source |
| Web UI | ✅ Included | ⚠️ Host client (full features need vCenter) | ⚠️ Separate | ✅ XO Lite |
| Containers | ✅ LXC Native | ❌ No | ❌ No | ❌ No |
| Ceph Integration | ✅ Native | ❌ No | ❌ No | ❌ No |
| Clustering | ✅ Built-in | ⚠️ Requires vCenter | ⚠️ Failover Cluster | ✅ Pools |
| License | AGPLv3 | Commercial | Microsoft | GPLv2 |
Architecture
Proxmox VE Stack
```
┌────────────────────────────────────────────┐
│               Web Interface                │
│        (https://proxmox-host:8006)         │
└────────────────────────────────────────────┘
                      │
┌────────────────────────────────────────────┐
│        Proxmox VE Management Layer         │
│                                            │
│  • Cluster Manager (pmxcfs)                │
│  • HA Manager                              │
│  • Resource Scheduler                      │
│  • Backup Manager                          │
└────────────────────────────────────────────┘
                      │
           ┌──────────┴──────────┐
           ▼                     ▼
┌──────────────────┐   ┌──────────────────┐
│     KVM/QEMU     │   │       LXC        │
│    (Full VMs)    │   │   (Containers)   │
│                  │   │                  │
│  • Windows       │   │  • Linux only    │
│  • Linux         │   │  • Lightweight   │
│  • Any OS        │   │  • Fast startup  │
└──────────────────┘   └──────────────────┘
                      │
┌────────────────────────────────────────────┐
│               Storage Layer                │
│                                            │
│  • Local storage (LVM, ZFS, directory)     │
│  • Network storage (NFS, iSCSI, Ceph,      │
│    GlusterFS)                              │
│  • Software-defined storage (Ceph, ZFS     │
│    replication)                            │
└────────────────────────────────────────────┘
                      │
┌────────────────────────────────────────────┐
│              Networking Layer              │
│                                            │
│  • Linux bridge                            │
│  • Open vSwitch                            │
│  • VLAN support                            │
│  • SDN (Software-Defined Networking)       │
└────────────────────────────────────────────┘
                      │
┌────────────────────────────────────────────┐
│       Debian Linux Base (Bookworm)         │
└────────────────────────────────────────────┘
```

Installation
System Requirements
Minimum:
- 64-bit CPU with virtualization support (Intel VT/AMD-V)
- 2 GB RAM
- 32 GB disk space
- 1 Gbps network
Recommended Production:
- Modern multi-core CPU (Intel Xeon/AMD EPYC)
- 64+ GB RAM
- Enterprise SSDs (NVMe preferred)
- 10 Gbps network (bonded)
- Redundant power supplies
- IPMI/iLO for remote management
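A quick pre-flight check of the virtualization requirement can save a wasted install. The sketch below inspects the CPU flags Linux exposes (the `has_virt` helper name is ours, not a Proxmox tool); note that firmware can still have VT-x/AMD-V disabled even when the flag is present, so confirm the BIOS/UEFI setting as well.

```shell
# has_virt: succeed if the CPU advertises hardware virtualization
# extensions (vmx = Intel VT-x, svm = AMD-V). Reads Linux /proc.
has_virt() {
  grep -Eqw 'vmx|svm' /proc/cpuinfo
}

if has_virt; then
  echo "CPU supports KVM full virtualization"
else
  echo "no VT-x/AMD-V detected - KVM VMs will not run"
fi
```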
Installation Steps
1. Download ISO: Get the latest installer ISO from proxmox.com
2. Boot from ISO: Use USB/DVD or mount it via IPMI
3. Follow the Installer:
   - Select target disk
   - Set root password
   - Configure network (static IP recommended)
   - Set hostname (FQDN format: pve1.example.com)
   - Select timezone
4. Post-Installation Configuration:
```bash
# Update system
apt update && apt full-upgrade -y

# Remove enterprise repository (if not subscribed)
rm /etc/apt/sources.list.d/pve-enterprise.list

# Add no-subscription repository
cat > /etc/apt/sources.list.d/pve-no-subscription.list << 'EOF'
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
EOF

# Update again
apt update && apt dist-upgrade -y

# Install useful tools
apt install -y \
  vim \
  tmux \
  htop \
  iotop \
  iftop \
  ncdu \
  ethtool \
  smartmontools

# Reboot if the kernel was updated
reboot
```

Creating Virtual Machines
Using Web Interface
- Navigate to Datacenter → Node → Create VM
- Configure:
  - VM ID (unique number)
  - Name
  - OS type
  - ISO image
  - CPU, RAM, disk
  - Network
Using Command Line
```bash
# Create VM with qm command
qm create 100 \
  --name web-server-01 \
  --memory 4096 \
  --cores 2 \
  --sockets 1 \
  --cpu host \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/debian-12.iso,media=cdrom \
  --boot order=scsi0 \
  --ostype l26 \
  --agent 1

# Start VM
qm start 100

# View status
qm status 100

# Access serial console
qm terminal 100

# Stop VM
qm stop 100

# Delete VM
qm destroy 100
```

Cloud-Init Template
```bash
# Download cloud image
cd /var/lib/vz/template/iso
wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2

# Create template VM
qm create 9000 \
  --name debian-12-template \
  --memory 2048 \
  --cores 2 \
  --net0 virtio,bridge=vmbr0

# Import disk
qm importdisk 9000 debian-12-generic-amd64.qcow2 local-lvm

# Configure VM
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --boot order=scsi0
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --serial0 socket --vga serial0
qm set 9000 --agent enabled=1

# Configure cloud-init
qm set 9000 --ipconfig0 ip=dhcp
qm set 9000 --sshkeys ~/.ssh/id_rsa.pub
qm set 9000 --ciuser admin

# Convert to template
qm template 9000

# Clone from template
qm clone 9000 200 \
  --name web-server-01 \
  --full \
  --storage local-lvm

# Customize clone
qm set 200 --ipconfig0 ip=10.0.1.100/24,gw=10.0.1.1
qm set 200 --nameserver 8.8.8.8

# Start cloned VM
qm start 200
```

Linux Containers (LXC)
Create Container
```bash
# List available templates
pveam available

# Download template
pveam download local debian-12-standard_12.0-1_amd64.tar.zst

# Create container
pct create 101 \
  local:vztmpl/debian-12-standard_12.0-1_amd64.tar.zst \
  --hostname web-container \
  --memory 2048 \
  --cores 2 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --storage local-lvm \
  --rootfs local-lvm:8 \
  --password 'SecurePassword123!' \
  --unprivileged 1 \
  --features nesting=1

# Start container
pct start 101

# Enter container
pct enter 101

# Container info
pct status 101
pct config 101

# Stop container
pct stop 101
```

Privileged vs Unprivileged Containers
```bash
# Unprivileged (recommended for security)
pct create 102 local:vztmpl/debian-12-standard_12.0-1_amd64.tar.zst \
  --unprivileged 1

# Privileged (more compatibility, less secure)
pct create 103 local:vztmpl/debian-12-standard_12.0-1_amd64.tar.zst \
  --unprivileged 0

# Enable nesting for Docker
pct set 102 --features nesting=1
```
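Unprivileged containers get their isolation from user-namespace ID mapping: container UIDs and GIDs are shifted by an offset on the host (100000 by default, taken from /etc/subuid), so container root is just an ordinary unprivileged host user. A small sketch of the mapping arithmetic (the `host_uid` helper is illustrative, not a Proxmox tool):

```shell
# host_uid CONTAINER_UID [OFFSET] - host-side UID for a container UID
# under the default unprivileged mapping (offset 100000).
host_uid() {
  offset="${2:-100000}"
  echo $(( offset + $1 ))
}

host_uid 0    # container root maps to host UID 100000
host_uid 33   # a container daemon user maps to host UID 100033
```

This is why files a container writes to a bind-mounted host directory appear owned by UIDs like 100000 from the host's point of view.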
```bash
# Mount host directory
pct set 102 --mp0 /mnt/data,mp=/data
```

Clustering
Create Cluster
```bash
# On first node
pvecm create production-cluster

# Check status
pvecm status

# Show cluster config
cat /etc/pve/corosync.conf
```

Join Cluster
```bash
# On second node (ensure a unique hostname and IP before joining)
pvecm add 10.0.1.10   # IP of the first node

# Repeat on each subsequent node
pvecm add 10.0.1.10
```

Cluster Configuration
```bash
# View cluster nodes
pvecm nodes

# Set expected votes (quorum)
pvecm expected 3

# Remove node from cluster (run from another node)
pvecm delnode node-name

# Regenerate cluster certificates
pvecm updatecerts
```

Quorum and Fencing
```bash
# Check quorum
pvecm status

# Lower expected votes temporarily (e.g. during maintenance)
pvecm expected 2
```
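`pvecm expected` works because quorum is simply a strict majority of votes: with one vote per node, floor(N/2) + 1 votes are required. That arithmetic is also why three nodes are the practical minimum for HA — a two-node cluster loses quorum when either node fails. A sketch of the calculation:

```shell
# Votes required for quorum in an N-node cluster (one vote per node):
# a strict majority, i.e. floor(N/2) + 1.
quorum_votes() {
  echo $(( $1 / 2 + 1 ))
}

quorum_votes 2   # prints 2 - either node failing breaks quorum
quorum_votes 3   # prints 2 - survives one node failure
quorum_votes 5   # prints 3 - survives two node failures
```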
```bash
# View quorum information
pvecm status | grep -i quorum
```

High Availability
Configure HA
```bash
# Create HA group
ha-manager groupadd production \
  --nodes pve1,pve2,pve3 \
  --restricted 0 \
  --nofailback 0

# Add VM to HA
ha-manager add vm:100 \
  --group production \
  --max_restart 3 \
  --max_relocate 3 \
  --state started

# Add container to HA
ha-manager add ct:101 \
  --group production \
  --state started

# View HA status
ha-manager status

# Remove from HA
ha-manager remove vm:100
```

Fencing Configuration
```bash
# Configure watchdog (Proxmox HA fences via watchdog by default;
# softdog is the software fallback when no hardware watchdog exists)
echo "softdog" >> /etc/modules
modprobe softdog

# Verify watchdog
lsmod | grep dog

# Optional hardware fence device (IPMI example)
cat > /etc/pve/ha/fence.cfg << 'EOF'
device: ipmi-pve1
  action stonith
  type ipmi
  host 10.0.1.100
  user admin
  password secret
EOF
```

Storage Configuration
Local Storage Types
ZFS
```bash
# Create ZFS pool (two mirrored vdevs)
zpool create -f tank \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde

# Enable compression
zfs set compression=lz4 tank

# Create dataset for VMs
zfs create tank/vm-storage

# Add to Proxmox
pvesm add zfspool local-zfs \
  --pool tank/vm-storage \
  --content images,rootdir

# ZFS snapshots
zfs snapshot tank/vm-storage@backup-$(date +%Y%m%d)
zfs list -t snapshot
```

LVM-Thin
```bash
# Create volume group
vgcreate vg-storage /dev/sdb /dev/sdc

# Create thin pool
lvcreate -L 1.9T -n data vg-storage
lvcreate -L 100G -n metadata vg-storage
lvconvert --type thin-pool \
  --poolmetadata vg-storage/metadata \
  vg-storage/data
```
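Thin pools allocate blocks on demand, so the combined virtual size of thin volumes can exceed the pool — and a pool that fills up takes every volume in it offline. On a real host the fill level comes from `lvs -o lv_name,data_percent`; the helper below is a hypothetical monitoring sketch around that number:

```shell
# Warn when thin-pool data usage crosses a threshold. Feed it the
# data_percent value reported by `lvs` (decimals are truncated).
thin_pool_warn() {   # usage: thin_pool_warn USED_PCT THRESHOLD_PCT
  used="${1%%.*}"    # strip decimal part, e.g. "92.15" -> "92"
  if [ "$used" -ge "$2" ]; then
    echo "WARNING: thin pool ${used}% full"
  fi
}

thin_pool_warn 92.15 90   # prints a warning
thin_pool_warn 40.00 90   # silent
```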
```bash
# Add to Proxmox
pvesm add lvmthin local-lvm-thin \
  --vgname vg-storage \
  --thinpool data \
  --content images,rootdir
```

Network Storage
NFS
```bash
# Add NFS storage
pvesm add nfs nfs-storage \
  --server 10.0.1.50 \
  --export /export/proxmox \
  --content images,iso,backup,vztmpl \
  --options vers=4.1

# Mount options for performance ("intr" is obsolete on modern kernels)
pvesm set nfs-storage --options "vers=4.1,hard,rsize=32768,wsize=32768"
```

iSCSI
```bash
# Add iSCSI target
pvesm add iscsi iscsi-storage \
  --portal 10.0.1.60 \
  --target iqn.2024-01.com.example:storage \
  --content images

# Direct iSCSI LUN access
pvesm add iscsidirect iscsi-lvm \
  --portal 10.0.1.60 \
  --target iqn.2024-01.com.example:storage \
  --content images
```

Ceph
```bash
# Install Ceph on all nodes
pveceph install

# Initialize Ceph on first node
pveceph init --network 10.0.2.0/24

# Create a monitor on each node
pveceph mon create

# Create OSDs (one per disk)
pveceph osd create /dev/sdc
pveceph osd create /dev/sdd
pveceph osd create /dev/sde

# Create CephFS metadata server
pveceph mds create

# Create pools
pveceph pool create vm-storage --size 3 --min_size 2
pveceph pool create cephfs-data --size 3
pveceph pool create cephfs-metadata --size 3
```
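With `--size 3 --min_size 2`, every object is stored on three OSDs and I/O continues as long as at least two replicas are available. The capacity cost is straightforward: usable space is roughly raw space divided by the replica count. A quick sketch of that arithmetic:

```shell
# Usable capacity under N-way replication: roughly raw capacity / N
# (before Ceph's own overhead and recommended free-space headroom).
ceph_usable_tb() {   # usage: ceph_usable_tb RAW_TB REPLICAS
  echo $(( $1 / $2 ))
}

ceph_usable_tb 90 3   # 90 TB raw with size=3 -> ~30 TB usable
```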
```bash
# Create CephFS
pveceph fs create --pg_num 128 --add-storage

# Add Ceph RBD storage to Proxmox
pvesm add rbd ceph-storage \
  --pool vm-storage \
  --content images \
  --krbd 1

# Check Ceph status
pveceph status
ceph -s
```

Networking
Linux Bridge Configuration
```
# /etc/network/interfaces
auto lo
iface lo inet loopback

# Management interface
auto eno1
iface eno1 inet static
    address 10.0.1.10/24
    gateway 10.0.1.1

# Bridge for VMs
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0

# VLAN-aware bridge
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno3
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 10-100
```

Apply the changes without a reboot:

```bash
ifreload -a
```

Bond Configuration
```
# /etc/network/interfaces - Active-Backup
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup

auto vmbr0
iface vmbr0 inet static
    address 10.0.1.10/24
    gateway 10.0.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

# LACP (802.3ad)
auto bond1
iface bond1 inet manual
    bond-slaves eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
```

Software-Defined Networking (SDN)
```bash
# Create VXLAN zone
pvesh create /cluster/sdn/zones \
  --zone vxlan-zone \
  --type vxlan \
  --peers 10.0.1.11,10.0.1.12,10.0.1.13

# Create VNet
pvesh create /cluster/sdn/vnets \
  --vnet vnet100 \
  --zone vxlan-zone \
  --tag 100

# Apply SDN configuration
pvesh set /cluster/sdn
```

Backup and Restore
Backup Configuration
```bash
# Create backup via CLI
vzdump 100 \
  --storage backup-nfs \
  --mode snapshot \
  --compress zstd \
  --notes "Daily backup"

# Backup all VMs
vzdump --all \
  --storage backup-nfs \
  --mode snapshot \
  --compress zstd
```
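Backups accumulate, so pair vzdump with a retention policy. Recent Proxmox versions support prune settings on the storage or backup job (e.g. keep-daily/keep-weekly counts); as a storage-agnostic fallback, here is a sketch that deletes vzdump archives older than a cutoff — the path and retention below are assumptions, and the helper name is ours:

```shell
# prune_backups DIR DAYS - delete vzdump archives in DIR whose mtime is
# older than DAYS days. Sketch only; double-check DIR before using -delete.
prune_backups() {
  find "$1" -maxdepth 1 -type f -name 'vzdump-*' -mtime +"$2" -delete
}
```

For example, `prune_backups /mnt/pve/backup-nfs/dump 14` from a daily cron job keeps roughly two weeks of archives.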
```bash
# Backup specific VMs
vzdump 100,101,102 \
  --storage backup-nfs \
  --mode snapshot
```

Scheduled Backups
Configure via Web UI: Datacenter → Backup
Or via CLI:
```bash
# /etc/cron.d entry: daily backup at 2 AM
0 2 * * * root vzdump --all --mode snapshot --storage backup-nfs --compress zstd --mailto admin@example.com
```

Restore
```bash
# List backups
ls -lh /mnt/pve/backup-nfs/dump/

# Restore VM
qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-2026_02_10-02_00_00.vma.zst 100 \
  --storage local-lvm

# Restore to a different VM ID
qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-2026_02_10-02_00_00.vma.zst 200

# Restore container
pct restore 101 /mnt/pve/backup-nfs/dump/vzdump-lxc-101-2026_02_10-02_00_00.tar.zst \
  --storage local-lvm
```

Live Migration
Prerequisites
- Shared storage or replicated storage
- Same CPU family (or CPU type set to generic)
- Network connectivity between nodes
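With `--cpu host` the guest sees the physical CPU's feature flags, so live migration is only safe between nodes whose flag sets match. The flags can be collected on each node with `grep -m1 '^flags' /proc/cpuinfo`; the comparison helper below is an illustrative sketch (the function name is ours):

```shell
# flags_match "FLAGS_A" "FLAGS_B" - succeed if two space-separated CPU
# flag lists contain the same set of flags (order-insensitive).
flags_match() {
  a=$(printf '%s\n' $1 | sort -u)   # unquoted $1: one flag per line
  b=$(printf '%s\n' $2 | sort -u)
  [ "$a" = "$b" ]
}
```

If the sets differ, configure a generic CPU type on the VM instead of `host` so both nodes can honour it.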
Migrate VM
```bash
# Online migration (VM keeps running)
qm migrate 100 pve2 --online

# Offline migration
qm migrate 100 pve2

# Use a dedicated migration network
qm migrate 100 pve2 --online --migration_network 10.0.2.0/24

# Migrate container (containers are restarted during migration)
pct migrate 101 pve2 --restart
```

Monitoring and Management
Command-Line Monitoring
```bash
# Node status and resource usage
pvesh get /nodes/pve1/status

# VM list
qm list

# Container list
pct list

# Storage usage
pvesh get /storage

# Network configuration
pvesh get /nodes/pve1/network

# Real-time monitoring
watch -n 1 'qm list; echo ""; pct list'
```

Prometheus Exporter
```bash
# Install Proxmox VE exporter
wget https://github.com/prometheus-pve/prometheus-pve-exporter/releases/download/v3.3.4/prometheus-pve-exporter_3.3.4_all.deb
dpkg -i prometheus-pve-exporter_3.3.4_all.deb

# Configure
cat > /etc/prometheus-pve-exporter.yml << 'EOF'
default:
  user: monitoring@pve
  password: secret_password
  verify_ssl: false
EOF

# Start service
systemctl enable --now prometheus-pve-exporter
```
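On the Prometheus side, the exporter follows the multi-target pattern (as blackbox_exporter does): the Proxmox node to query goes into a `target` URL parameter while the scrape itself hits the exporter. A sketch of the corresponding `prometheus.yml` job — hostnames and the exporter address are assumptions:

```yaml
scrape_configs:
  - job_name: 'pve'
    static_configs:
      - targets: ['pve1.example.com']   # Proxmox node(s) to query
    metrics_path: /pve
    params:
      module: [default]
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target    # node name -> ?target=...
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: localhost:9221     # where the exporter listens
```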
```bash
# The exporter listens on port 9221
curl 'http://localhost:9221/pve?target=pve1'
```

Security Best Practices
Firewall Configuration
```bash
# Enable the cluster firewall
pvesh set /cluster/firewall/options --enable 1

# Configure datacenter firewall
cat > /etc/pve/firewall/cluster.fw << 'EOF'
[OPTIONS]
enable: 1

[group management]
IN ACCEPT -source 10.0.1.0/24 -p tcp -dport 8006
IN ACCEPT -source 10.0.1.0/24 -p tcp -dport 22

[RULES]
GROUP management
EOF

# Node-specific firewall
cat > /etc/pve/nodes/pve1/firewall/node.fw << 'EOF'
[OPTIONS]
enable: 1

[RULES]
GROUP management
IN DROP -p tcp -dport 8006
EOF
```

Two-Factor Authentication
```bash
# TOTP for the web UI is built into Proxmox VE - enable it per user:
# Web UI: Datacenter → Permissions → Two Factor

# For SSH logins, PAM-based TOTP can be added separately:
apt install libpam-google-authenticator
google-authenticator   # run as the target user
```

SSL Certificate
```bash
# Using Let's Encrypt with the Cloudflare DNS plugin
apt install python3-certbot-dns-cloudflare

# Configure credentials
cat > /root/.cloudflare.ini << 'EOF'
dns_cloudflare_api_token = your-api-token
EOF
chmod 600 /root/.cloudflare.ini

# Get certificate
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.cloudflare.ini \
  -d pve1.example.com

# Install certificate
pvenode cert set \
  /etc/letsencrypt/live/pve1.example.com/fullchain.pem \
  /etc/letsencrypt/live/pve1.example.com/privkey.pem

# Restart proxy
systemctl restart pveproxy
```

Performance Tuning
CPU Configuration
```bash
# Set CPU governor to performance
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Make persistent
apt install cpufrequtils
echo 'GOVERNOR="performance"' > /etc/default/cpufrequtils
systemctl restart cpufrequtils

# Disable deep CPU C-states for lower latency: add
# "processor.max_cstate=1 intel_idle.max_cstate=0" to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub
```

Memory Configuration
```bash
# Enable KSM (Kernel Same-page Merging)
systemctl enable --now ksmtuned
```
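KSM deduplicates identical memory pages across VMs, which helps most when many guests run the same OS. The current savings can be estimated from the kernel's KSM counters; a sketch using the standard Linux sysfs path (the helper name is ours, and it reports 0 where KSM is unavailable):

```shell
# Approximate memory saved by KSM: pages_sharing * page size.
ksm_saved_mib() {
  pages=$(cat /sys/kernel/mm/ksm/pages_sharing 2>/dev/null || echo 0)
  psize=$(getconf PAGESIZE)
  echo $(( pages * psize / 1048576 ))
}

echo "KSM is currently saving ~$(ksm_saved_mib) MiB"
```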
```bash
# Reduce swapping pressure
echo "vm.swappiness=10" >> /etc/sysctl.conf
sysctl -p
```

I/O Scheduler
```bash
# Set to none for NVMe
echo none > /sys/block/nvme0n1/queue/scheduler

# Set to mq-deadline for SATA SSDs
echo mq-deadline > /sys/block/sda/queue/scheduler

# Make persistent with a udev rule
cat > /etc/udev/rules.d/60-scheduler.rules << 'EOF'
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
EOF
```

Production Checklist
Infrastructure
- Minimum 3 nodes for HA quorum
- Redundant network (bonding/LACP)
- Shared or replicated storage
- UPS and redundant power
- IPMI/iLO configured
Configuration
- Cluster configured and tested
- HA enabled for critical VMs
- Automated backups configured
- Firewall rules configured
- SSL certificates installed
- 2FA enabled for admin accounts
Storage
- Storage redundancy (RAID/Ceph)
- Backup storage separate from production
- Backup retention policy defined
- Restore tested regularly
Monitoring
- Monitoring solution deployed
- Alert thresholds configured
- Log aggregation setup
- Documentation updated
Security
- Network segmentation
- Firewall enabled
- Regular security updates
- Access controls audited
Conclusion
Proxmox VE provides an enterprise-grade virtualization platform with the benefits of open-source software. Its combination of KVM and LXC, integrated clustering, HA capabilities, and comprehensive storage options make it an excellent choice for organizations seeking a powerful, cost-effective alternative to proprietary virtualization platforms.
The platform’s mature ecosystem, active community, and professional support options ensure that Proxmox VE can meet the demands of production environments while maintaining the flexibility and transparency of open-source software.
Master virtualization technologies including Proxmox VE with our infrastructure training programs. Contact us for customized training designed for your team’s needs.