Vladimir Chavkov

Harvester: Modern Hyperconverged Infrastructure on Kubernetes


Harvester is an open-source hyperconverged infrastructure (HCI) solution built on Kubernetes, designed as a modern alternative to traditional virtualization platforms such as VMware vSphere and Proxmox. Developed by Rancher Labs (now SUSE), it combines compute, storage, and networking into a unified platform suited to edge, data center, and cloud deployments.

What is Harvester?

Harvester is a Kubernetes-native HCI platform that provides enterprise-grade VM management with the flexibility and scalability of cloud-native technologies.

Key Features

  1. VM Management: Full lifecycle management via KubeVirt
  2. Built-in Storage: Longhorn distributed block storage
  3. Software-Defined Networking: Integrated networking and VLAN support
  4. Rancher Integration: Seamless multi-cluster Kubernetes management
  5. Web UI: Modern, intuitive management interface
  6. Live Migration: Zero-downtime VM migration
  7. Backup/Restore: VM snapshots and disaster recovery
  8. Cloud-init: Automated VM configuration
  9. PCI Passthrough: GPU and device passthrough
  10. ARM64 Support: Native ARM support for edge deployments

Architecture

┌─────────────────────────────────────────────────────────────┐
│ Harvester Management Layer │
│ │
│ ┌────────────┐ ┌────────────┐ ┌────────────────────┐ │
│ │ Web UI │ │ Rancher │ │ Monitoring │ │
│ │ Dashboard │ │ Integration│ │ (Prometheus) │ │
│ └────────────┘ └────────────┘ └────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Kubernetes Control Plane (RKE2/K3s) │
│ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ KubeVirt - VM Orchestration │ │
│ │ • virt-api, virt-controller, virt-handler │ │
│ │ • VM lifecycle management │ │
│ └──────────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Longhorn - Distributed Storage │ │
│ │ • Block storage with replication │ │
│ │ • Snapshots and backups │ │
│ └──────────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Multus CNI + Harvester Network Controller │ │
│ │ • Management network │ │
│ │ • VLAN networks for VMs │ │
│ │ • Network policies │ │
│ └──────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Bare Metal Nodes │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Node 1 │ │ Node 2 │ │ Node 3 │ │
│ │ │ │ │ │ │ │
│ │ QEMU/KVM │ │ QEMU/KVM │ │ QEMU/KVM │ │
│ │ Containers │ │ Containers │ │ Containers │ │
│ │ │ │ │ │ │ │
│ │ VM1 VM2 │ │ VM3 VM4 │ │ VM5 VM6 │ │
│ │ │ │ │ │ │ │
│ │ Local Disks │ │ Local Disks │ │ Local Disks │ │
│ │ (Longhorn) │ │ (Longhorn) │ │ (Longhorn) │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────┘

Installation

Prerequisites

Per node (rough guidance; check the Harvester docs for the exact minimums of your target version):

  - x86_64 CPU with virtualization extensions (Intel VT-x or AMD-V), 8+ cores
  - 32 GB+ RAM
  - 250 GB+ disk, SSD/NVMe strongly recommended for Longhorn
  - 1 Gbps+ NIC (10 Gbps recommended for storage traffic)

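These checks can be scripted. A rough pre-flight sketch, run on the candidate node; the thresholds are the commonly cited minimums, not authoritative:

```shell
#!/usr/bin/env bash
# Rough pre-install sanity check for a prospective Harvester node.
# Thresholds follow commonly cited minimums; consult the official
# docs for the exact requirements of your target version.
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)

echo "CPU cores: ${cores} (want >= 8)"
echo "Memory:    ${mem_gb} GiB (want >= 32)"

# VT-x / AMD-V appear as the vmx / svm CPU flags
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "Hardware virtualization: supported"
else
  echo "Hardware virtualization: NOT detected (enable it in firmware)"
fi
```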
ISO Installation

# Download Harvester ISO
wget https://releases.rancher.com/harvester/v1.3.0/harvester-v1.3.0-amd64.iso
# Create bootable USB (Linux)
sudo dd if=harvester-v1.3.0-amd64.iso of=/dev/sdX bs=4M status=progress
# Or use Rufus/Etcher on Windows/macOS
# Boot from ISO and follow installer:
# 1. Select "Install Harvester"
# 2. Choose disk for installation
# 3. Configure management network
# 4. Set cluster token (for HA setup)
# 5. Set VIP for cluster access
# 6. Set admin password

Network Configuration During Install

# Management network configuration
scheme: dhcp            # or "static"

# For a static IP:
scheme: static
bond_mode: active-backup
bond_miimon: 100
bond_slaves:
  - enp1s0
  - enp2s0
address: 192.168.1.100
netmask: 255.255.255.0
gateway: 192.168.1.1
dns:
  - 8.8.8.8
  - 1.1.1.1

# Cluster VIP (required for HA)
vip: 192.168.1.99
vip_mode: static        # or "dhcp"

High Availability Cluster

# First Node (creates cluster)
# During installation:
# - Create new cluster
# - Set cluster token: "mysecrettoken"
# - Set VIP: 192.168.1.99
# - Configure networking
# Additional Nodes (join cluster)
# During installation:
# - Join existing cluster
# - Enter cluster VIP: 192.168.1.99
# - Enter cluster token: "mysecrettoken"
# - Configure networking
# Minimum 3 nodes recommended for HA
# Access UI at: https://192.168.1.99
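Once the extra nodes have joined, quorum can be verified from any node. A small helper (hypothetical function; it relies only on standard `kubectl get nodes` output):

```shell
#!/usr/bin/env bash
# Count Ready nodes and warn if the cluster lacks HA quorum.
check_quorum() {
  local ready
  ready=$(kubectl get nodes --no-headers 2>/dev/null | awk '$2 == "Ready"' | wc -l)
  echo "Ready nodes: ${ready}"
  if [ "${ready}" -ge 3 ]; then
    echo "quorum: ok"
  else
    echo "quorum: at risk (need >= 3 Ready nodes)"
  fi
}
# check_quorum
```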

PXE Installation

# Set up a PXE server (example with dnsmasq)
sudo apt install dnsmasq pxelinux syslinux-common

# Download the Harvester kernel, initrd, and root filesystem
mkdir -p /var/lib/tftpboot/harvester
wget -O /var/lib/tftpboot/harvester/initrd https://releases.rancher.com/harvester/v1.3.0/initrd
wget -O /var/lib/tftpboot/harvester/vmlinuz https://releases.rancher.com/harvester/v1.3.0/vmlinuz
wget -O /var/lib/tftpboot/harvester/rootfs.squashfs https://releases.rancher.com/harvester/v1.3.0/rootfs.squashfs

# Configure dnsmasq
cat > /etc/dnsmasq.d/pxe.conf <<EOF
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/var/lib/tftpboot
EOF

# PXE boot configuration
cat > /var/lib/tftpboot/pxelinux.cfg/default <<EOF
DEFAULT harvester
LABEL harvester
  KERNEL harvester/vmlinuz
  INITRD harvester/initrd
  APPEND root=live:{{ rootfs.squashfs }} harvester.install.automatic=true harvester.install.config_url=http://192.168.1.1/config.yaml
EOF

Automated Installation Config

# config.yaml for automated installation
scheme_version: 1
token: mysecrettoken
os:
  hostname: harvester-node1
  ssh_authorized_keys:
    - ssh-rsa AAAAB3...
install:
  mode: create            # or "join"
  management_interface:
    interfaces:
      - name: enp1s0
      - name: enp2s0
    bond_options:
      mode: active-backup
      miimon: 100
    method: static
    ip: 192.168.1.101
    subnet_mask: 255.255.255.0
    gateway: 192.168.1.1
    dns_nameservers:
      - 8.8.8.8
  vip: 192.168.1.99
  vip_mode: static
  device: /dev/sda
  iso_url: https://releases.rancher.com/harvester/v1.3.0/harvester-v1.3.0-amd64.iso
system_settings:
  cluster-registration-url: https://rancher.example.com

# For joining nodes:
# install:
#   mode: join
#   management_interface: ...
#   server_url: https://192.168.1.99:443

Virtual Machine Management

Create VM via UI

1. Navigate to Virtual Machines
2. Click "Create"
3. Configure:
   - Basic: name, CPU, memory
   - Volumes: add a root disk (image or blank)
   - Networks: attach to a VLAN
   - Advanced: cloud-init, SSH keys
4. Click "Create"

Create VM via YAML

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntu-vm
  namespace: default
  labels:
    app: ubuntu
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: ubuntu-vm
    spec:
      domain:
        cpu:
          cores: 4
          sockets: 1
          threads: 1
        resources:
          requests:
            memory: 8Gi
          limits:
            memory: 8Gi
        devices:
          disks:
            - name: root
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}
            - name: vlan100
              bridge: {}
      networks:
        - name: default
          pod: {}
        - name: vlan100
          multus:
            networkName: vlan100
      volumes:
        - name: root
          persistentVolumeClaim:
            claimName: ubuntu-vm-root
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              password: password
              chpasswd: { expire: False }
              ssh_pwauth: True
              ssh_authorized_keys:
                - ssh-rsa AAAAB3...
              package_update: true
              packages:
                - qemu-guest-agent
              runcmd:
                - systemctl enable --now qemu-guest-agent
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ubuntu-vm-root
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: longhorn
  volumeMode: Block

VM Images

# Access Harvester via kubectl
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

# Import an image via URL
kubectl apply -f - <<EOF
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: ubuntu-22.04
  namespace: default
spec:
  displayName: Ubuntu 22.04 LTS
  sourceType: download
  url: https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
EOF

# Check import status
kubectl get virtualmachineimage -n default

# Import from an uploaded PVC
kubectl apply -f - <<EOF
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: custom-image
  namespace: default
spec:
  displayName: Custom Image
  sourceType: upload
  pvcName: custom-image-pvc
  pvcNamespace: default
EOF

# Upload via UI: Images → Create → Upload
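Large image downloads take time; for scripted provisioning it helps to wait for the import to finish. A sketch (hypothetical helper; assumes the `status.progress` field of `VirtualMachineImage` reports 0-100):

```shell
#!/usr/bin/env bash
# Poll a VirtualMachineImage until its download completes.
# Assumes .status.progress reports 0-100, as harvesterhci.io/v1beta1 does.
wait_for_image() {
  local name=$1 ns=${2:-default} progress=0
  while [ "${progress}" -lt 100 ]; do
    progress=$(kubectl get virtualmachineimage "${name}" -n "${ns}" \
      -o jsonpath='{.status.progress}' 2>/dev/null)
    progress=${progress:-0}
    echo "image ${ns}/${name}: ${progress}%"
    if [ "${progress}" -lt 100 ]; then sleep 5; fi
  done
}
# wait_for_image ubuntu-22.04 default
```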

VM Templates

apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineTemplate
metadata:
  name: ubuntu-template
  namespace: default
spec:
  description: Ubuntu 22.04 Template
  defaultVersionId: v1
---
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineTemplateVersion
metadata:
  name: ubuntu-template-v1
  namespace: default
spec:
  templateId: default/ubuntu-template
  vm:
    metadata:
      labels:
        app: ubuntu
    spec:
      running: false
      template:
        spec:
          domain:
            cpu:
              cores: 2
            resources:
              requests:
                memory: 4Gi
            devices:
              disks:
                - name: root
                  disk:
                    bus: virtio
                - name: cloudinitdisk
                  disk:
                    bus: virtio
              interfaces:
                - name: default
                  bridge: {}
          networks:
            - name: default
              multus:
                networkName: vlan100
          volumes:
            - name: root
              dataVolume:
                name: ubuntu-vm-root
            - name: cloudinitdisk
              cloudInitNoCloud:
                userData: |
                  #cloud-config
                  ssh_authorized_keys:
                    - ssh-rsa AAAAB3...

Networking

VLAN Configuration

# Create a VLAN network
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100
  namespace: default
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "vlan100",
    "type": "bridge",
    "bridge": "mgmt-br",
    "vlan": 100,
    "ipam": {}
  }'
# Via Harvester UI:
# Networks → Create
# - Name: vlan100
# - VLAN ID: 100
# - Type: L2VlanNetwork
# - Cluster Network: mgmt (management)

Bond Network Configuration

# Configure network bonding
apiVersion: network.harvesterhci.io/v1beta1
kind: ClusterNetwork
metadata:
  name: bond0
spec:
  description: Bonded network for storage
  enable: true
---
apiVersion: network.harvesterhci.io/v1beta1
kind: NodeNetwork
metadata:
  name: harvester-node1-bond0
spec:
  nodeName: harvester-node1
  nic: bond0
  type: bond
  bondOptions:
    mode: 802.3ad
    miimon: 100
  slaves:
    - enp2s0
    - enp3s0

Network Policies

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: vm-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      kubevirt.io/domain: ubuntu-vm
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 22
        - protocol: TCP
          port: 80
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432

Storage Management

Longhorn Configuration

# Custom Longhorn storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-fast
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  fsType: "ext4"
  dataLocality: "disabled"   # or "best-effort", "strict-local"

VM Volume Management

# Expand a volume
kubectl patch pvc ubuntu-vm-root -n default \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'

# Clone a volume
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ubuntu-vm-clone
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: longhorn
  dataSource:
    kind: PersistentVolumeClaim
    name: ubuntu-vm-root
EOF

Backup Configuration

# Configure the backup target (S3)
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: backup-target
value: >-
  {
    "type": "s3",
    "endpoint": "https://s3.amazonaws.com",
    "bucketName": "harvester-backups",
    "bucketRegion": "us-east-1",
    "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
    "secretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
  }

# Or NFS:
# value: "nfs://192.168.1.100:/export/backups"

VM Backup and Restore

# Create a VM backup
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineBackup
metadata:
  name: ubuntu-vm-backup
  namespace: default
spec:
  source:
    name: ubuntu-vm
    kind: VirtualMachine
    apiGroup: kubevirt.io
  type: backup
---
# Restore the VM from the backup
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineRestore
metadata:
  name: ubuntu-vm-restore
  namespace: default
spec:
  target:
    name: ubuntu-vm-restored
    kind: VirtualMachine
    apiGroup: kubevirt.io
  virtualMachineBackupName: ubuntu-vm-backup

Live Migration

# Migrate a VM to another node
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: ubuntu-vm-migration
  namespace: default
spec:
  vmiName: ubuntu-vm

# Or via virtctl
virtctl migrate ubuntu-vm -n default

# Check migration status
kubectl get vmim -n default
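The same polling pattern works for migrations. A sketch (hypothetical helper; `vmim` is the short name for `VirtualMachineInstanceMigration`, whose phases end in Succeeded or Failed):

```shell
#!/usr/bin/env bash
# Wait for a VirtualMachineInstanceMigration to reach a terminal phase.
wait_for_migration() {
  local name=$1 ns=${2:-default} phase=""
  until [ "${phase}" = "Succeeded" ] || [ "${phase}" = "Failed" ]; do
    phase=$(kubectl get vmim "${name}" -n "${ns}" \
      -o jsonpath='{.status.phase}' 2>/dev/null)
    echo "migration ${ns}/${name}: ${phase:-<pending>}"
    if [ "${phase}" != "Succeeded" ] && [ "${phase}" != "Failed" ]; then
      sleep 2
    fi
  done
}
# wait_for_migration ubuntu-vm-migration default
```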

Migration Policies

apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: live-migration-policy
spec:
  selectors:
    namespaceSelector:
      matchLabels:
        environment: production
    virtualMachineInstanceSelector:
      matchLabels:
        workload: database
  allowAutoConverge: true
  bandwidthPerMigration: 128Mi
  completionTimeoutPerGiB: 800
  allowPostCopy: false

Rancher Integration

Register Harvester in Rancher

# In Harvester UI: Settings → cluster-registration-url
# Set to: https://rancher.example.com
# In Rancher: Virtualization Management → Import Existing
# Copy registration URL
# In Harvester: Apply registration manifest
kubectl apply -f <registration-url>

Deploy Kubernetes on Harvester VMs

# Create an RKE2 cluster on Harvester
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: guest-cluster
  namespace: fleet-default
spec:
  cloudCredentialSecretName: harvester-cloud-credential
  rkeConfig:
    machineGlobalConfig:
      cni: calico
    machinePools:
      - name: controlplane
        quantity: 3
        etcdRole: true
        controlPlaneRole: true
        workerRole: false
        machineConfigRef:
          kind: HarvesterConfig
          name: harvester-controlplane
      - name: worker
        quantity: 5
        workerRole: true
        machineConfigRef:
          kind: HarvesterConfig
          name: harvester-worker
---
# Harvester machine config
apiVersion: rke-machine-config.cattle.io/v1
kind: HarvesterConfig
metadata:
  name: harvester-controlplane
  namespace: fleet-default
cpu: 4
memory: 8
diskSize: 100
diskBus: virtio
imageName: default/ubuntu-22.04
networkName: default/vlan100
networkType: dhcp
sshUser: ubuntu
userData: |
  #cloud-config
  ...

GPU Passthrough

Enable PCI Passthrough

# Enable IOMMU in GRUB
sudo vi /etc/default/grub
# Add to GRUB_CMDLINE_LINUX:
#   Intel: intel_iommu=on iommu=pt
#   AMD:   amd_iommu=on iommu=pt
sudo update-grub
sudo reboot

# Verify IOMMU
dmesg | grep -i iommu

# Identify the GPU PCI address
lspci | grep -i nvidia
# Output: 01:00.0 VGA compatible controller: NVIDIA Corporation ...

# Mark the GPU for passthrough
kubectl apply -f - <<EOF
apiVersion: devices.harvesterhci.io/v1beta1
kind: PCIDevice
metadata:
  name: nvidia-gpu
spec:
  address: "0000:01:00.0"
  nodeName: harvester-node1
  enabled: true
EOF

Attach GPU to VM

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: gpu-vm
spec:
  template:
    spec:
      domain:
        devices:
          # Use gpus: for devices exposed as a named resource,
          # or hostDevices: for raw passthrough - not both for the same device
          gpus:
            - name: gpu1
              deviceName: nvidia.com/TU104GL_Tesla_T4
          hostDevices:
            - name: gpu1
              deviceName: 0000:01:00.0

Monitoring and Alerting

Access Monitoring

# Port-forward to Prometheus
kubectl port-forward -n cattle-monitoring-system \
svc/rancher-monitoring-prometheus 9090:9090
# Port-forward to Grafana
kubectl port-forward -n cattle-monitoring-system \
svc/rancher-monitoring-grafana 3000:80
# Access Grafana at: http://localhost:3000
# Default credentials: admin / prom-operator
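With the port-forward up, Prometheus can also be queried directly over its HTTP API, which is handy for scripting. A sketch (the helper name is made up; the example query reuses the `kubevirt_vmi_status_phase` metric — browse the Prometheus /graph page to see what your stack actually exports):

```shell
#!/usr/bin/env bash
# Query the Prometheus HTTP API through the local port-forward.
prom_query() {
  local query=$1
  curl -s "http://localhost:9090/api/v1/query" --data-urlencode "query=${query}"
}
# Example: VMI count per phase
# prom_query 'sum(kubevirt_vmi_status_phase) by (phase)'
```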

Custom Alerts

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: harvester-alerts
  namespace: cattle-monitoring-system
spec:
  groups:
    - name: harvester
      rules:
        - alert: VMDown
          expr: kubevirt_vmi_status_phase{phase="Running"} == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "VM {{ $labels.name }} is down"
        - alert: HighStorageUsage
          expr: |
            (sum(kubevirt_vmi_storage_write_traffic_bytes_total) by (namespace, name)
              / sum(kubevirt_vmi_storage_capacity_bytes) by (namespace, name)) > 0.85
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "VM {{ $labels.name }} storage > 85%"

High Availability Best Practices

Cluster Design

# Three-node minimum for HA
Node configuration:
  - 3+ nodes for etcd quorum
  - Even distribution of resources
  - Separate storage and management networks
  - UPS for power redundancy

# Anti-affinity spreads replicas of an app across nodes
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: critical-vm
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: critical-app
              topologyKey: kubernetes.io/hostname

Resource Reservations

# Optional: a runtime class pinned to dedicated VM nodes
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu
handler: kata-qemu
scheduling:
  nodeSelector:
    workload: vms
  tolerations:
    - effect: NoSchedule
      key: workload
      operator: Equal
      value: vms

# Node configuration: reserve CPU/memory for system daemons
kubelet-args:
  - "kube-reserved=cpu=1000m,memory=2Gi"
  - "system-reserved=cpu=1000m,memory=2Gi"
  - "eviction-hard=memory.available<1Gi,nodefs.available<10%"

Disaster Recovery

Backup Strategy

# Scheduled VM backup (the wildcard source is illustrative;
# schedules can also be configured per VM in the UI)
kubectl apply -f - <<EOF
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineBackup
metadata:
  name: cluster-backup
  namespace: default
spec:
  source:
    name: "*"
    kind: VirtualMachine
  type: backup
  schedule: "0 2 * * *"
  retain: 7
EOF

# Export cluster configuration
kubectl get all --all-namespaces -o yaml > cluster-backup.yaml
kubectl get crd -o yaml > crd-backup.yaml
kubectl get pv -o yaml > pv-backup.yaml
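Before trusting a backup for disaster recovery, confirm it actually completed. A sketch (hypothetical helper; assumes the `status.readyToUse` boolean that `VirtualMachineBackup` exposes):

```shell
#!/usr/bin/env bash
# Check whether a VirtualMachineBackup completed and is usable.
backup_ready() {
  local name=$1 ns=${2:-default} ready
  ready=$(kubectl get virtualmachinebackup "${name}" -n "${ns}" \
    -o jsonpath='{.status.readyToUse}' 2>/dev/null)
  if [ "${ready}" = "true" ]; then
    echo "backup ${ns}/${name}: ready"
  else
    echo "backup ${ns}/${name}: not ready"
    return 1
  fi
}
# backup_ready ubuntu-vm-backup default
```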

Restore Procedures

# Restore a single VM
kubectl apply -f - <<EOF
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineRestore
metadata:
  name: vm-restore
  namespace: default
spec:
  target:
    name: restored-vm
  virtualMachineBackupName: vm-backup-20260211
  newVM: true
EOF

# Cluster disaster recovery:
# 1. Reinstall Harvester on new hardware
# 2. Configure the same network settings
# 3. Restore the Longhorn backup
# 4. Apply the saved cluster configuration
kubectl apply -f cluster-backup.yaml

Upgrades

Upgrade via UI

1. Navigate to Settings → upgrade
2. Check for available versions
3. Download upgrade image
4. Click "Upgrade"
5. Monitor progress in real-time
6. VMs migrate automatically

Upgrade via CLI

# Check the current version
kubectl get settings.harvesterhci.io server-version -o yaml

# Create an Upgrade resource
kubectl apply -f - <<EOF
apiVersion: harvesterhci.io/v1beta1
kind: Upgrade
metadata:
  name: upgrade-to-v1.3.0
  namespace: harvester-system
spec:
  version: v1.3.0
  image: rancher/harvester-upgrade:v1.3.0
EOF

# Monitor the upgrade
kubectl get upgrade -n harvester-system -w

Troubleshooting

# Check cluster health
kubectl get nodes
kubectl get pods -A | grep -v Running
# Check VM status
kubectl get vm,vmi -A
kubectl describe vm <vm-name> -n <namespace>
# Check VM console
virtctl console <vm-name> -n <namespace>
# Check VNC access
virtctl vnc <vm-name> -n <namespace>
# Storage issues
kubectl get pv,pvc -A
kubectl -n longhorn-system get pods
kubectl -n longhorn-system logs <pod-name>
# Network issues
kubectl get network-attachment-definitions -A
kubectl describe network-attachment-definition <name>
# Logs
kubectl logs -n harvester-system -l app=harvester
kubectl logs -n kubevirt -l kubevirt.io=virt-controller
# Support bundle
# Via UI: Support → Generate Support Bundle
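The individual checks above can be rolled into a single pass; a rough sketch:

```shell
#!/usr/bin/env bash
# One-shot cluster health summary combining the checks above.
health_summary() {
  echo "--- nodes not Ready ---"
  kubectl get nodes --no-headers | awk '$2 != "Ready"'
  echo "--- pods not Running/Completed ---"
  kubectl get pods -A --no-headers | awk '$4 != "Running" && $4 != "Completed"'
  echo "--- virtual machines ---"
  kubectl get vm -A --no-headers
}
# health_summary
```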

Performance Tuning

VM Performance

# Optimized VM configuration (fragment of a VirtualMachine spec)
spec:
  domain:
    cpu:
      cores: 8
      dedicatedCpuPlacement: true
      isolateEmulatorThread: true
      model: host-passthrough
    resources:
      requests:
        memory: 16Gi
      limits:
        memory: 16Gi
      overcommitGuestOverhead: false
    features:
      acpi: {}
      apic: {}
      hyperv:
        relaxed: {}
        vapic: {}
        spinlocks:
          spinlocks: 8191
        vpindex: {}
        runtime: {}
        synic: {}
        stimer: {}
        reset: {}
        frequencies: {}
        reenlightenment: {}
        tlbflush: {}
        ipi: {}
    devices:
      disks:
        - name: root
          disk:
            bus: virtio
          cache: writeback
          io: threads

Conclusion

Harvester brings hyperconverged infrastructure onto Kubernetes, pairing traditional VM management with cloud-native operations. Its Kubernetes-native architecture, tight Rancher integration, and broad feature set, from live migration to GPU passthrough, make it a strong choice for modern data centers, edge deployments, and hybrid cloud environments.


Master Harvester HCI and Kubernetes virtualization through our training programs. Contact us for virtualization and cloud-native training.

