Harvester: Modern Hyperconverged Infrastructure on Kubernetes
Harvester is an open-source hyperconverged infrastructure (HCI) solution built on Kubernetes and positioned as a modern alternative to traditional virtualization platforms such as VMware vSphere and Proxmox. Created by Rancher Labs (now part of SUSE), Harvester combines compute, storage, and networking into a unified platform suited to edge, data center, and cloud deployments.
What is Harvester?
Harvester is a Kubernetes-native HCI platform that provides enterprise-grade VM management with the flexibility and scalability of cloud-native technologies.
Key Features
- VM Management: Full lifecycle management via KubeVirt
- Built-in Storage: Longhorn distributed block storage
- Software-Defined Networking: Integrated networking and VLAN support
- Rancher Integration: Seamless multi-cluster Kubernetes management
- Web UI: Modern, intuitive management interface
- Live Migration: Zero-downtime VM migration
- Backup/Restore: VM snapshots and disaster recovery
- Cloud-init: Automated VM configuration
- PCI Passthrough: GPU and device passthrough
- ARM64 Support: Native ARM support for edge deployments
Architecture
```
┌─────────────────────────────────────────────────────────────┐
│                Harvester Management Layer                   │
│                                                             │
│  ┌────────────┐  ┌────────────┐  ┌────────────────────┐     │
│  │  Web UI    │  │  Rancher   │  │    Monitoring      │     │
│  │  Dashboard │  │ Integration│  │   (Prometheus)     │     │
│  └────────────┘  └────────────┘  └────────────────────┘     │
└─────────────────────────────────────────────────────────────┘
                              │
┌─────────────────────────────────────────────────────────────┐
│            Kubernetes Control Plane (RKE2/K3s)              │
│                                                             │
│  ┌──────────────────────────────────────────────────────┐   │
│  │  KubeVirt - VM Orchestration                         │   │
│  │  • virt-api, virt-controller, virt-handler           │   │
│  │  • VM lifecycle management                           │   │
│  └──────────────────────────────────────────────────────┘   │
│                                                             │
│  ┌──────────────────────────────────────────────────────┐   │
│  │  Longhorn - Distributed Storage                      │   │
│  │  • Block storage with replication                    │   │
│  │  • Snapshots and backups                             │   │
│  └──────────────────────────────────────────────────────┘   │
│                                                             │
│  ┌──────────────────────────────────────────────────────┐   │
│  │  Multus CNI + Harvester Network Controller           │   │
│  │  • Management network                                │   │
│  │  • VLAN networks for VMs                             │   │
│  │  • Network policies                                  │   │
│  └──────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────┘
                              │
┌─────────────────────────────────────────────────────────────┐
│                     Bare Metal Nodes                        │
│                                                             │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          │
│  │   Node 1    │  │   Node 2    │  │   Node 3    │          │
│  │             │  │             │  │             │          │
│  │  QEMU/KVM   │  │  QEMU/KVM   │  │  QEMU/KVM   │          │
│  │  Containers │  │  Containers │  │  Containers │          │
│  │             │  │             │  │             │          │
│  │  VM1  VM2   │  │  VM3  VM4   │  │  VM5  VM6   │          │
│  │             │  │             │  │             │          │
│  │ Local Disks │  │ Local Disks │  │ Local Disks │          │
│  │ (Longhorn)  │  │ (Longhorn)  │  │ (Longhorn)  │          │
│  └─────────────┘  └─────────────┘  └─────────────┘          │
└─────────────────────────────────────────────────────────────┘
```
Installation
Prerequisites
- Hardware: x86-64 or ARM64 CPU with virtualization support
- RAM: 32 GB minimum (64 GB+ recommended for production)
- Storage: 250 GB+ SSD/NVMe per node
- Network:
  - Management network (1 Gbps minimum)
  - Dedicated storage network (10 Gbps recommended)
  - VLAN support for VM networks
- BIOS: VT-x/AMD-V and VT-d/AMD-Vi enabled
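A quick pre-flight sketch for checking a candidate node against the minimums above. This is illustrative only (assumes a Linux host; the threshold variable is our own, not part of Harvester):

```shell
#!/bin/sh
# Hypothetical pre-flight check; the 32 GB threshold mirrors the prerequisites above.
MIN_RAM_GB=32

meets_minimum() {  # usage: meets_minimum <actual> <required>
  [ "$1" -ge "$2" ]
}

ram_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
if meets_minimum "$ram_gb" "$MIN_RAM_GB"; then
  echo "RAM: ${ram_gb} GB - OK"
else
  echo "RAM: ${ram_gb} GB - below ${MIN_RAM_GB} GB minimum"
fi

# VT-x/AMD-V show up as "vmx"/"svm" in the CPU flags
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "virtualization: supported"
else
  echo "virtualization: not detected (enable in BIOS)"
fi
```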
ISO Installation
```bash
# Download the Harvester ISO
wget https://releases.rancher.com/harvester/v1.3.0/harvester-v1.3.0-amd64.iso

# Create a bootable USB (Linux)
sudo dd if=harvester-v1.3.0-amd64.iso of=/dev/sdX bs=4M status=progress

# Or use Rufus/Etcher on Windows/macOS

# Boot from the ISO and follow the installer:
# 1. Select "Install Harvester"
# 2. Choose the installation disk
# 3. Configure the management network
# 4. Set the cluster token (for HA setup)
# 5. Set the VIP for cluster access
# 6. Set the admin password
```
Network Configuration During Install
```yaml
# Management network configuration
scheme: dhcp  # or static

# For a static IP:
scheme: static
bond_mode: active-backup
bond_miimon: 100
bond_slaves:
  - enp1s0
  - enp2s0
address: 192.168.1.100
netmask: 255.255.255.0
gateway: 192.168.1.1
dns:
  - 8.8.8.8
  - 1.1.1.1
```
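The installer takes a dotted netmask, while many Kubernetes-side resources expect CIDR prefix notation. A small conversion helper, purely illustrative:

```shell
# Convert a dotted netmask to a CIDR prefix length by counting set bits.
mask_to_prefix() {  # usage: mask_to_prefix 255.255.255.0
  local IFS=. n=0 octet
  for octet in $1; do
    while [ "$octet" -gt 0 ]; do
      n=$((n + (octet & 1)))
      octet=$((octet >> 1))
    done
  done
  echo "$n"
}

mask_to_prefix 255.255.255.0   # → 24
```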
```yaml
# Cluster VIP (required for HA)
vip: 192.168.1.99
vip_mode: static  # or dhcp
```
High Availability Cluster
```bash
# First node (creates the cluster). During installation:
# - Create a new cluster
# - Set the cluster token: "mysecrettoken"
# - Set the VIP: 192.168.1.99
# - Configure networking

# Additional nodes (join the cluster). During installation:
# - Join an existing cluster
# - Enter the cluster VIP: 192.168.1.99
# - Enter the cluster token: "mysecrettoken"
# - Configure networking

# A minimum of 3 nodes is recommended for HA
# Access the UI at: https://192.168.1.99
```
PXE Installation
```bash
# Set up a PXE server (example with dnsmasq)
sudo apt install dnsmasq pxelinux syslinux-common

# Download the kernel, initrd, and root filesystem
mkdir -p /var/lib/tftpboot/harvester
wget -O /var/lib/tftpboot/harvester/initrd https://releases.rancher.com/harvester/v1.3.0/initrd
wget -O /var/lib/tftpboot/harvester/vmlinuz https://releases.rancher.com/harvester/v1.3.0/vmlinuz
wget -O /var/lib/tftpboot/harvester/rootfs.squashfs https://releases.rancher.com/harvester/v1.3.0/rootfs.squashfs

# Configure dnsmasq
cat > /etc/dnsmasq.d/pxe.conf <<EOF
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/var/lib/tftpboot
EOF

# PXE boot configuration (the rootfs is fetched over HTTP)
cat > /var/lib/tftpboot/pxelinux.cfg/default <<EOF
DEFAULT harvester
LABEL harvester
  KERNEL harvester/vmlinuz
  INITRD harvester/initrd
  APPEND root=live:http://192.168.1.1/harvester/rootfs.squashfs harvester.install.automatic=true harvester.install.config_url=http://192.168.1.1/config.yaml
EOF
```
Automated Installation Config
```yaml
# config.yaml for automated installation
scheme_version: 1
token: mysecrettoken
os:
  hostname: harvester-node1
  ssh_authorized_keys:
    - ssh-rsa AAAAB3...
install:
  mode: create  # or "join"
  management_interface:
    interfaces:
      - name: enp1s0
      - name: enp2s0
    bond_options:
      mode: active-backup
      miimon: 100
    method: static
    ip: 192.168.1.101
    subnet_mask: 255.255.255.0
    gateway: 192.168.1.1
    dns_nameservers:
      - 8.8.8.8
  vip: 192.168.1.99
  vip_mode: static
  device: /dev/sda
  iso_url: https://releases.rancher.com/harvester/v1.3.0/harvester-v1.3.0-amd64.iso
system_settings:
  cluster-registration-url: https://rancher.example.com
```
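The installer fetches this file over HTTP from the `harvester.install.config_url` address given on the kernel command line. A minimal sketch for staging and serving it — the paths and the truncated config below are placeholders, not a complete install config:

```shell
# Stage a (placeholder) install config where an HTTP server can hand it to the installer.
mkdir -p /tmp/harvester-pxe
cat > /tmp/harvester-pxe/config.yaml <<'EOF'
scheme_version: 1
token: mysecrettoken
install:
  mode: create
  device: /dev/sda
EOF

# Serve it (run this on the host referenced by config_url):
# python3 -m http.server 80 --directory /tmp/harvester-pxe

grep -c 'mode: create' /tmp/harvester-pxe/config.yaml   # → 1
```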
```yaml
# For joining nodes:
# install:
#   mode: join
#   management_interface: ...
#   server_url: https://192.168.1.99:443
```
Virtual Machine Management
Create VM via UI
1. Navigate to Virtual Machines
2. Click "Create"
3. Configure:
   - Basic: Name, CPU, Memory
   - Volumes: Add a root disk (image or blank)
   - Networks: Attach to a VLAN
   - Advanced: cloud-init, SSH keys
4. Click "Create"

Create VM via YAML
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntu-vm
  namespace: default
  labels:
    app: ubuntu
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: ubuntu-vm
    spec:
      domain:
        cpu:
          cores: 4
          sockets: 1
          threads: 1
        resources:
          requests:
            memory: 8Gi
          limits:
            memory: 8Gi
        devices:
          disks:
            - name: root
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}
            - name: vlan100
              bridge: {}
      networks:
        - name: default
          pod: {}
        - name: vlan100
          multus:
            networkName: vlan100
      volumes:
        - name: root
          persistentVolumeClaim:
            claimName: ubuntu-vm-root
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              password: password
              chpasswd: { expire: False }
              ssh_pwauth: True
              ssh_authorized_keys:
                - ssh-rsa AAAAB3...
              package_update: true
              packages:
                - qemu-guest-agent
              runcmd:
                - systemctl enable --now qemu-guest-agent
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ubuntu-vm-root
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: longhorn
  volumeMode: Block
```
VM Images
```bash
# Access Harvester via kubectl
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

# Import an image via URL
kubectl apply -f - <<EOF
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: ubuntu-22.04
  namespace: default
spec:
  displayName: Ubuntu 22.04 LTS
  sourceType: download
  url: https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
EOF

# Check import status
kubectl get virtualmachineimage -n default
```
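Image imports can take a while. A hedged wait-loop sketch — the `.status.progress` field name is an assumption, so check the CRD on your Harvester version before relying on it:

```shell
# Decide readiness from a progress percentage.
import_done() {  # usage: import_done <progress>
  [ "${1:-0}" -ge 100 ]
}

# Poll the image until the import finishes (requires a live cluster):
# until import_done "$(kubectl get virtualmachineimage ubuntu-22.04 \
#   -n default -o jsonpath='{.status.progress}')"; do
#   sleep 5
# done

import_done 100 && echo "done"   # → done
```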
```bash
# Import from a file
kubectl apply -f - <<EOF
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: custom-image
  namespace: default
spec:
  displayName: Custom Image
  sourceType: upload
  pvcName: custom-image-pvc
  pvcNamespace: default
EOF

# Upload via UI: Images → Create → Upload
```
VM Templates
```yaml
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineTemplate
metadata:
  name: ubuntu-template
  namespace: default
spec:
  description: Ubuntu 22.04 Template
  defaultVersionId: v1
---
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineTemplateVersion
metadata:
  name: ubuntu-template-v1
  namespace: default
spec:
  templateId: default/ubuntu-template
  vm:
    metadata:
      labels:
        app: ubuntu
    spec:
      running: false
      template:
        spec:
          domain:
            cpu:
              cores: 2
            resources:
              requests:
                memory: 4Gi
            devices:
              disks:
                - name: root
                  disk:
                    bus: virtio
                - name: cloudinitdisk
                  disk:
                    bus: virtio
              interfaces:
                - name: default
                  bridge: {}
          networks:
            - name: default
              multus:
                networkName: vlan100
          volumes:
            - name: root
              dataVolume:
                name: ubuntu-vm-root
            - name: cloudinitdisk
              cloudInitNoCloud:
                userData: |
                  #cloud-config
                  ssh_authorized_keys:
                    - ssh-rsa AAAAB3...
```
Networking
VLAN Configuration
```yaml
# Create a VLAN network
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100
  namespace: default
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "vlan100",
    "type": "bridge",
    "bridge": "mgmt-br",
    "vlan": 100,
    "ipam": {}
  }'

# Via the Harvester UI:
# Networks → Create
#   - Name: vlan100
#   - VLAN ID: 100
#   - Type: L2VlanNetwork
#   - Cluster Network: mgmt (management)
```
Bond Network Configuration
```yaml
# Configure network bonding
apiVersion: network.harvesterhci.io/v1beta1
kind: ClusterNetwork
metadata:
  name: bond0
spec:
  description: Bonded network for storage
  enable: true
---
apiVersion: network.harvesterhci.io/v1beta1
kind: NodeNetwork
metadata:
  name: harvester-node1-bond0
spec:
  nodeName: harvester-node1
  nic: bond0
  type: bond
  bondOptions:
    mode: 802.3ad
    miimon: 100
  slaves:
    - enp2s0
    - enp3s0
```
Network Policies
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: vm-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      kubevirt.io/domain: ubuntu-vm
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 22
        - protocol: TCP
          port: 80
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
```
Storage Management
Longhorn Configuration
```yaml
# Configure a storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-fast
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  fsType: "ext4"
  dataLocality: "disabled"  # or "best-effort", "strict-local"
```
VM Volume Management
```bash
# Expand a volume
kubectl patch pvc ubuntu-vm-root -n default \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
```
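After the patch, the new capacity appears on the PVC once Longhorn finishes the expansion. When comparing reported sizes, remember Kubernetes `Gi` quantities are powers of two (1 Gi = 1024³ bytes); a tiny helper:

```shell
# Convert a Kubernetes "<n>Gi" quantity to bytes.
gi_to_bytes() { echo $(( ${1%Gi} * 1024 * 1024 * 1024 )); }

gi_to_bytes 100Gi   # → 107374182400
```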
```bash
# Clone a volume
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ubuntu-vm-clone
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: longhorn
  dataSource:
    kind: PersistentVolumeClaim
    name: ubuntu-vm-root
EOF
```
Backup Configuration
```yaml
# Configure the backup target (S3)
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: backup-target
value: >
  {
    "type": "s3",
    "endpoint": "https://s3.amazonaws.com",
    "bucketName": "harvester-backups",
    "bucketRegion": "us-east-1",
    "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
    "secretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
  }

# Or NFS:
# value: "nfs://192.168.1.100:/export/backups"
```
VM Backup and Restore
```yaml
# Create a VM backup
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineBackup
metadata:
  name: ubuntu-vm-backup
  namespace: default
spec:
  source:
    name: ubuntu-vm
    kind: VirtualMachine
    apiGroup: kubevirt.io
  type: backup
---
# Restore the VM from the backup
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineRestore
metadata:
  name: ubuntu-vm-restore
  namespace: default
spec:
  target:
    name: ubuntu-vm-restored
    kind: VirtualMachine
    apiGroup: kubevirt.io
  virtualMachineBackupName: ubuntu-vm-backup
```
Live Migration
```yaml
# Migrate a VM to another node
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: ubuntu-vm-migration
  namespace: default
spec:
  vmiName: ubuntu-vm
```

```bash
# Or via virtctl
virtctl migrate ubuntu-vm -n default

# Check migration status
kubectl get vmim -n default
```
Migration Policies
```yaml
apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: live-migration-policy
spec:
  selectors:
    namespaceSelector:
      matchLabels:
        environment: production
    virtualMachineInstanceSelector:
      matchLabels:
        workload: database
  allowAutoConverge: true
  bandwidthPerMigration: 128Mi
  completionTimeoutPerGiB: 800
  allowPostCopy: false
```
Rancher Integration
Register Harvester in Rancher
```bash
# In the Harvester UI: Settings → cluster-registration-url
# Set to: https://rancher.example.com

# In Rancher: Virtualization Management → Import Existing
# Copy the registration URL

# In Harvester: apply the registration manifest
kubectl apply -f <registration-url>
```
Deploy Kubernetes on Harvester VMs
```yaml
# Create an RKE2 cluster on Harvester
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: guest-cluster
  namespace: fleet-default
spec:
  cloudCredentialSecretName: harvester-cloud-credential
  rkeConfig:
    machineGlobalConfig:
      cni: calico
    machinePools:
      - name: controlplane
        quantity: 3
        etcdRole: true
        controlPlaneRole: true
        workerRole: false
        machineConfigRef:
          kind: HarvesterConfig
          name: harvester-controlplane
      - name: worker
        quantity: 5
        workerRole: true
        machineConfigRef:
          kind: HarvesterConfig
          name: harvester-worker
---
# Harvester machine config
apiVersion: rke-machine-config.cattle.io/v1
kind: HarvesterConfig
metadata:
  name: harvester-controlplane
  namespace: fleet-default
cpu: 4
memory: 8
diskSize: 100
diskBus: virtio
imageName: default/ubuntu-22.04
networkName: default/vlan100
networkType: dhcp
sshUser: ubuntu
userData: |
  #cloud-config
  ...
```
GPU Passthrough
Enable PCI Passthrough
```bash
# Enable IOMMU in GRUB
sudo vi /etc/default/grub
# Add to GRUB_CMDLINE_LINUX:
#   Intel: intel_iommu=on iommu=pt
#   AMD:   amd_iommu=on iommu=pt

sudo update-grub
sudo reboot

# Verify IOMMU is active
dmesg | grep -i iommu

# Identify the GPU's PCI address
lspci | grep -i nvidia
# Output: 01:00.0 VGA compatible controller: NVIDIA Corporation ...
```
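Note that `lspci` prints the short bus address, while the PCIDevice resource expects the domain-qualified form. A trivial conversion, assuming the default PCI domain `0000` (true on almost all single-domain systems):

```shell
# Prefix the default PCI domain onto lspci's short-form address.
to_full_address() { sed 's/^/0000:/'; }

echo "01:00.0" | to_full_address   # → 0000:01:00.0
```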
```bash
# Configure the GPU for passthrough
kubectl apply -f - <<EOF
apiVersion: devices.harvesterhci.io/v1beta1
kind: PCIDevice
metadata:
  name: nvidia-gpu
spec:
  address: "0000:01:00.0"
  nodeName: harvester-node1
  enabled: true
EOF
```
Attach GPU to VM
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: gpu-vm
spec:
  template:
    spec:
      domain:
        devices:
          # Attach either as a named GPU resource...
          gpus:
            - name: gpu1
              deviceName: nvidia.com/TU104GL_Tesla_T4
          # ...or as a generic host device (use one of the two)
          hostDevices:
            - name: gpu1
              deviceName: 0000:01:00.0
```
Monitoring and Alerting
Access Monitoring
```bash
# Port-forward to Prometheus
kubectl port-forward -n cattle-monitoring-system \
  svc/rancher-monitoring-prometheus 9090:9090

# Port-forward to Grafana
kubectl port-forward -n cattle-monitoring-system \
  svc/rancher-monitoring-grafana 3000:80
```
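With the Prometheus port-forward running, instant queries are plain HTTP GETs against `/api/v1/query`. This helper only builds the URL (the query string is left unencoded for brevity; real queries with special characters need URL encoding):

```shell
# Build an instant-query URL against the forwarded Prometheus port.
promql_url() { echo "http://localhost:9090/api/v1/query?query=$1"; }

promql_url 'up'   # → http://localhost:9090/api/v1/query?query=up

# Then, with the port-forward active:
# curl -s "$(promql_url 'up')"
```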
```bash
# Access Grafana at: http://localhost:3000
# Default credentials: admin / prom-operator
```
Custom Alerts
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: harvester-alerts
  namespace: cattle-monitoring-system
spec:
  groups:
    - name: harvester
      rules:
        - alert: VMDown
          expr: kubevirt_vmi_status_phase{phase="Running"} == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "VM {{ $labels.name }} is down"
        - alert: HighStorageUsage
          expr: |
            (sum(kubevirt_vmi_filesystem_used_bytes) by (namespace, name)
             / sum(kubevirt_vmi_filesystem_capacity_bytes) by (namespace, name)) > 0.85
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "VM {{ $labels.name }} storage > 85%"
```
High Availability Best Practices
Cluster Design
Node configuration guidelines (3-node minimum for HA):
- 3+ nodes for quorum
- Even distribution of resources
- Separate storage/management networks
- UPS for power redundancy
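The 3-node minimum comes from etcd quorum arithmetic: a cluster of n voting members tolerates floor((n − 1) / 2) failures, so 3 nodes survive 1 failure and 5 survive 2:

```shell
# Failures an n-member etcd cluster can tolerate while keeping quorum.
tolerated_failures() { echo $(( ($1 - 1) / 2 )); }

tolerated_failures 3   # → 1
tolerated_failures 5   # → 2
```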
```yaml
# Anti-affinity for VMs
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: critical-vm
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: critical-app
              topologyKey: kubernetes.io/hostname
```
Resource Reservations
```yaml
# Reserve resources for the system
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu
handler: kata-qemu
scheduling:
  nodeSelector:
    workload: vms
  tolerations:
    - effect: NoSchedule
      key: workload
      operator: Equal
      value: vms
```

```yaml
# Node configuration (kubelet arguments)
kubelet-args:
  - "kube-reserved=cpu=1000m,memory=2Gi"
  - "system-reserved=cpu=1000m,memory=2Gi"
  - "eviction-hard=memory.available<1Gi,nodefs.available<10%"
# Resulting allocatable memory on a 64 Gi node:
# 64 - 2 (kube-reserved) - 2 (system-reserved) - 1 (eviction) = 59 Gi for workloads
```
Disaster Recovery
Backup Strategy
```bash
# Scheduled full-cluster VM backup
kubectl apply -f - <<EOF
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineBackup
metadata:
  name: cluster-backup
  namespace: default
spec:
  source:
    name: "*"
    kind: VirtualMachine
  type: backup
  schedule: "0 2 * * *"
  retain: 7
EOF

# Export cluster configuration
kubectl get all --all-namespaces -o yaml > cluster-backup.yaml
kubectl get crd -o yaml > crd-backup.yaml
kubectl get pv -o yaml > pv-backup.yaml
```
Restore Procedures
```bash
# Restore a single VM
kubectl apply -f - <<EOF
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineRestore
metadata:
  name: vm-restore
  namespace: default
spec:
  target:
    name: restored-vm
  virtualMachineBackupName: vm-backup-20260211
  newVM: true
EOF

# Restore the cluster (disaster recovery):
# 1. Reinstall Harvester on new hardware
# 2. Configure the same network settings
# 3. Restore the Longhorn backup
# 4. Apply the cluster configuration
kubectl apply -f cluster-backup.yaml
```
Upgrades
Upgrade via UI
1. Navigate to Settings → upgrade
2. Check for available versions
3. Download the upgrade image
4. Click "Upgrade"
5. Monitor progress in real time
6. VMs migrate automatically

Upgrade via CLI
```bash
# Check the current version
kubectl get settings.harvesterhci.io server-version -o yaml

# Create an upgrade
kubectl apply -f - <<EOF
apiVersion: harvesterhci.io/v1beta1
kind: Upgrade
metadata:
  name: upgrade-to-v1.3.0
  namespace: harvester-system
spec:
  version: v1.3.0
  image: rancher/harvester-upgrade:v1.3.0
EOF

# Monitor the upgrade
kubectl get upgrade -n harvester-system -w
```
Troubleshooting
```bash
# Check cluster health
kubectl get nodes
kubectl get pods -A | grep -v Running
```
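`grep -v Running` also drops the header line and hides the real state of `Completed` pods. An awk variant keeps only genuinely unhealthy pods; the sample text below stands in for live `kubectl get pods -A` output:

```shell
# Print namespace/pod for anything not Running or Completed.
unhealthy() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed" {print $1 "/" $2 ": " $4}'
}

# Sample output in `kubectl get pods -A` column order:
sample() {
cat <<'EOF'
NAMESPACE    NAME                 READY  STATUS            RESTARTS  AGE
kube-system  coredns-abc          1/1    Running           0         4d
default      virt-launcher-vm1    0/1    CrashLoopBackOff  12        2h
default      importer-ubuntu      0/1    Completed         0         1d
EOF
}

sample | unhealthy   # → default/virt-launcher-vm1: CrashLoopBackOff
```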
```bash
# Check VM status
kubectl get vm,vmi -A
kubectl describe vm <vm-name> -n <namespace>

# VM console
virtctl console <vm-name> -n <namespace>

# VNC access
virtctl vnc <vm-name> -n <namespace>

# Storage issues
kubectl get pv,pvc -A
kubectl -n longhorn-system get pods
kubectl -n longhorn-system logs <pod-name>

# Network issues
kubectl get network-attachment-definitions -A
kubectl describe network-attachment-definition <name>

# Logs
kubectl logs -n harvester-system -l app=harvester
kubectl logs -n kubevirt -l kubevirt.io=virt-controller

# Support bundle
# Via UI: Support → Generate Support Bundle
```
Performance Tuning
VM Performance
```yaml
# Optimized VM configuration
spec:
  domain:
    cpu:
      cores: 8
      dedicatedCpuPlacement: true
      isolateEmulatorThread: true
      model: host-passthrough
    resources:
      requests:
        memory: 16Gi
      limits:
        memory: 16Gi
      overcommitGuestOverhead: false
    features:
      acpi: {}
      apic: {}
      hyperv:
        relaxed: {}
        vapic: {}
        spinlocks:
          spinlocks: 8191
        vpindex: {}
        runtime: {}
        synic: {}
        stimer: {}
        reset: {}
        frequencies: {}
        reenlightenment: {}
        tlbflush: {}
        ipi: {}
    devices:
      disks:
        - name: root
          disk:
            bus: virtio
          cache: writeback
          io: threads
```
Conclusion
Harvester represents the future of hyperconverged infrastructure, combining the power of Kubernetes with traditional VM management. Its cloud-native architecture, seamless Rancher integration, and comprehensive feature set make it an excellent choice for modern data centers, edge deployments, and hybrid cloud environments.
Master Harvester HCI and Kubernetes virtualization through our training programs. Contact us for virtualization and cloud-native training.