Vladimir Chavkov

Proxmox Backup Server: Enterprise Backup and Recovery Guide


Data protection is not optional in production environments. Lost data means lost revenue, compliance violations, and reputational damage. Proxmox Backup Server (PBS) is an open-source enterprise backup solution designed specifically for virtual machines, containers, and physical hosts. It provides chunk-level deduplication, client-side encryption, and tight integration with Proxmox VE — all without per-socket licensing fees. This guide covers everything needed to deploy, configure, and operate PBS in production.

What Is Proxmox Backup Server?

Proxmox Backup Server is a purpose-built backup solution developed by Proxmox Server Solutions GmbH. Written primarily in Rust for performance and memory safety, PBS stores backups in a content-addressable datastore built from deduplicated chunks: fixed-size (4 MiB) chunks for VM disk images and variable-size chunks for file-level archives. This architecture enables efficient deduplication across all backups regardless of source, drastically reducing storage consumption.

Core Capabilities

- Chunk-level deduplication across all backups in a datastore
- Client-side AES-256-GCM encryption; the server never sees plaintext
- Per-chunk Zstandard compression
- Native integration with Proxmox VE for VMs (QEMU) and containers (LXC)
- Fast incremental backups using QEMU dirty bitmaps
- Remote sync to offsite PBS instances and native LTO tape support
- Built-in verification jobs and a complete REST API

Architecture Overview

+-----------------------------------------------------------+
| Proxmox VE Cluster |
| +----------+ +----------+ +----------+ +----------+ |
| | Node 1 | | Node 2 | | Node 3 | | Node 4 | |
| | VMs/CTs | | VMs/CTs | | VMs/CTs | | VMs/CTs | |
| +----+-----+ +----+-----+ +----+-----+ +----+-----+ |
| | | | | |
+-------+--------------+--------------+--------------+-------+
| | | |
v v v v
+-----------------------------------------------------------+
| Proxmox Backup Server (Primary) |
| +------------------+ +------------------+ |
| | Datastore: local | | Datastore: fast | |
| | (HDD - capacity) | | (SSD - speed) | |
| +------------------+ +------------------+ |
| | Chunk Store | | Chunk Store | |
| | Index Files | | Index Files | |
| | Manifests | | Manifests | |
| +------------------+ +------------------+ |
+----------------------------+------------------------------+
|
Remote Sync / Tape
|
+--------------+--------------+
v v
+-------------------------+ +-------------------------+
| PBS Remote (Offsite) | | Tape Library (LTO) |
| Datastore: offsite-sync | | Air-gapped archival |
+-------------------------+ +-------------------------+

PBS vs Other Backup Solutions

| Feature | PBS | Veeam B&R | Bacula | BorgBackup | Restic |
|---|---|---|---|---|---|
| License | AGPL v3 (Free) | Commercial ($$$) | AGPL / Commercial | BSD | BSD |
| Cost (100 VMs) | $0 (optional support) | $50,000-$200,000+ | $10,000-$50,000 | Free | Free |
| Deduplication | Chunk-level (variable) | Per-job / Global | Global (plugin) | Chunk-level | Chunk-level |
| Encryption | AES-256-GCM client-side | AES-256 server-side | TLS + Volume | AES-256 client-side | AES-256 client-side |
| PVE Integration | Native | Plugin required | Agent-based | Manual scripts | Manual scripts |
| VM Backup | Snapshot-based (QEMU) | VSS / Agent | Agent-based | File-level | File-level |
| Container Backup | LXC native | Limited | Agent-based | File-level | File-level |
| Tape Support | LTO (native) | Full | Full | None | None |
| Web UI | Yes | Yes (Windows) | BWeb (paid) | None | None |
| REST API | Yes | Yes | Limited | None | None |
| Remote Sync | Native | WAN Accelerator | Storage Daemon | Borg transfer | Rclone |
| Verification | Built-in jobs | SureBackup | Verify jobs | borg check | restic check |
| Written In | Rust | C# / .NET | C/C++ | Python / C | Go |
| Compression | Zstandard | Varies | GZIP/LZO | LZ4/ZSTD/LZMA | Zstandard |
| Incremental | Chunk-based (always) | CBT / synthetic | Level-based | Chunk-based | Chunk-based |

When to Choose PBS

- Your virtualization stack is Proxmox VE: the integration is native, with no agents or plugins to maintain
- License cost matters: AGPL v3 and $0 regardless of fleet size, versus five to six figures for commercial suites
- You want client-side encryption and chunk-level deduplication out of the box
- You need LTO tape archival without extra licensing

When PBS May Not Be the Best Fit

- Non-Proxmox hypervisor estates (VMware, Hyper-V): PBS has no native agents for them, while Veeam's VSS and agent ecosystem is mature there
- Windows-centric, application-aware backups (Exchange or SQL Server item-level restore) are outside PBS's scope
- Organizations that require a large commercial support ecosystem; Proxmox subscriptions exist, but the third-party ecosystem is smaller

Installation and Initial Setup

System Requirements

| Component | Minimum | Recommended (Production) |
|---|---|---|
| CPU | 2 cores | 8+ cores (Xeon/EPYC) |
| RAM | 2 GB | 16-32 GB |
| OS Disk | 32 GB SSD | 64 GB SSD (mirror) |
| Datastore | 500 GB HDD | Multi-TB RAID / ZFS |
| Network | 1 Gbps | 10 Gbps + dedicated backup VLAN |

Installation from ISO

Download the PBS ISO from the official Proxmox website and install to your server.

Terminal window
# After installation, access the web UI at:
# https://<pbs-ip>:8007
# Verify the service is running
systemctl status proxmox-backup-proxy
systemctl status proxmox-backup
# Check the installed version
proxmox-backup-manager versions

Installation on Existing Debian

If you prefer installing PBS on an existing Debian 12 (Bookworm) system:

Terminal window
# Add the Proxmox Backup Server repository
echo "deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription" \
> /etc/apt/sources.list.d/pbs.list
# Add the Proxmox repository key
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
-O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
# Update and install
apt update
apt install proxmox-backup-server -y
# Start and enable the services
systemctl enable --now proxmox-backup-proxy
systemctl enable --now proxmox-backup

Post-Installation Configuration

Terminal window
# Set the timezone
timedatectl set-timezone Europe/Berlin
# Configure NTP for consistent backup timestamps
apt install chrony -y
systemctl enable --now chrony
# Create an admin user (beyond root@pam)
proxmox-backup-manager user create backupadmin@pbs \
--comment "Backup Administrator" \
--email admin@example.com
# Set password
proxmox-backup-manager user update backupadmin@pbs --password <password>
# Assign admin role
proxmox-backup-manager acl update / Admin --auth-id backupadmin@pbs

Network Configuration for Dedicated Backup Traffic

Terminal window
# /etc/network/interfaces - Dedicated backup VLAN
auto ens18
iface ens18 inet static
    address 10.0.1.10/24
    gateway 10.0.1.1

auto ens19
iface ens19 inet static
    address 10.10.50.10/24
    # No gateway - backup-only network
    mtu 9000

Datastore Configuration and Management

A datastore is the core storage unit in PBS. Each datastore contains a chunk store, index files, and backup manifests.

Creating a Datastore

Terminal window
# Create the backing directory on your storage
mkdir -p /mnt/backup-storage/datastore1
# If using ZFS (recommended for production)
zpool create -o ashift=12 backup-pool mirror /dev/sda /dev/sdb
zfs create -o compression=off -o atime=off -o recordsize=64k backup-pool/datastore1
# Register the datastore in PBS
proxmox-backup-manager datastore create ds1 /backup-pool/datastore1 \
--comment "Primary backup datastore"
# Verify creation
proxmox-backup-manager datastore list

Datastore Directory Structure

/backup-pool/datastore1/
├── .chunks/                         # Chunk store (deduplicated data)
│   ├── 0000/                        # Chunk directory shards
│   │   ├── ab3f...7c2d.blob         # Encrypted/compressed chunks
│   │   └── ...
│   ├── 0001/
│   └── ...
├── vm/                              # VM backups
│   └── 100/                         # VMID 100
│       ├── 2026-02-11T10:00:00Z/
│       │   ├── index.json.blob
│       │   ├── qemu-server.conf.blob
│       │   └── drive-scsi0.img.fidx
│       └── ...
├── ct/                              # Container backups
│   └── 200/
│       └── 2026-02-11T10:00:00Z/
│           ├── index.json.blob
│           └── pct.conf.blob
└── host/                            # Host backups (proxmox-backup-client)
    └── myserver/
        └── 2026-02-11T10:00:00Z/
            └── root.pxar.didx

Configuring Datastore Tuning

/etc/proxmox-backup/datastore.cfg
# Edit datastore configuration
datastore: ds1
    path /backup-pool/datastore1
    comment Primary backup datastore
    gc-schedule daily
    prune-schedule daily
    keep-daily 7
    keep-weekly 4
    keep-monthly 6
    keep-yearly 2
    verify-new true
    notify always

Multiple Datastores Strategy

Terminal window
# Fast datastore on NVMe for recent/critical backups
proxmox-backup-manager datastore create ds-fast /nvme-pool/fast-backups \
--keep-daily 3 \
--keep-weekly 2 \
--gc-schedule "daily" \
--comment "Fast tier - NVMe - critical VMs"
# Capacity datastore on HDD for long retention
proxmox-backup-manager datastore create ds-archive /hdd-pool/archive-backups \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 12 \
--keep-yearly 5 \
--gc-schedule "sat 02:00" \
--comment "Archive tier - HDD - long retention"

Integration with Proxmox VE

Adding PBS to Proxmox VE

Terminal window
# On the Proxmox VE node, add the PBS storage
pvesm add pbs pbs-primary \
--server 10.10.50.10 \
--datastore ds1 \
--username backupuser@pbs \
--password <password> \
--fingerprint <SHA256-fingerprint>
# Get the PBS server fingerprint from PBS:
proxmox-backup-manager cert info | grep Fingerprint
# Verify the connection
pvesm status
Terminal window
# On PBS: Create an API token for the backup user
proxmox-backup-manager user generate-token backupuser@pbs pve-backup
# Output:
# Value: backupuser@pbs!pve-backup:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
# On PVE: Use the token for storage configuration
pvesm add pbs pbs-primary \
--server 10.10.50.10 \
--datastore ds1 \
--username backupuser@pbs!pve-backup \
--password "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
--fingerprint <SHA256-fingerprint>

Setting Encryption Key

Terminal window
# Generate an encryption key on the PVE node
proxmox-backup-client key create /etc/pve/priv/pbs-encryption-key.json
# CRITICAL: Back up this key securely. Without it, backups cannot be restored.
# Store a copy in a password manager, safe, or HSM.
# Add the encryption key to the PBS storage in PVE
pvesm set pbs-primary --encryption-key /etc/pve/priv/pbs-encryption-key.json

VM and Container Backup Procedures

Backup from Proxmox VE (GUI or CLI)

Terminal window
# Backup a single VM
vzdump 100 --storage pbs-primary --mode snapshot --compress zstd --notes-template '{{guestname}}'
# Backup a container
vzdump 200 --storage pbs-primary --mode snapshot --compress zstd
# Backup multiple VMs
vzdump 100,101,102,200 --storage pbs-primary --mode snapshot --compress zstd
# Backup all VMs and containers on a node
vzdump --all --storage pbs-primary --mode snapshot --compress zstd \
--mailnotification always --mailto admin@example.com

Backup Modes

| Mode | Description | Downtime | Use Case |
|---|---|---|---|
| snapshot | Live backup using QEMU dirty bitmap | None | Production VMs |
| suspend | Suspend VM, back up, resume | Brief | Legacy guests |
| stop | Stop VM, back up, start | Full | Maximum consistency |

Using proxmox-backup-client Directly

For backing up physical hosts or non-PVE systems:

Terminal window
# Install the client on a Debian/Ubuntu host
apt install proxmox-backup-client -y
# Set environment variables
export PBS_REPOSITORY="backupuser@pbs@10.10.50.10:ds1"
export PBS_PASSWORD="<password>"
export PBS_FINGERPRINT="<SHA256-fingerprint>"
# Backup specific directories
proxmox-backup-client backup \
root.pxar:/ \
--exclude /dev \
--exclude /proc \
--exclude /sys \
--exclude /tmp \
--exclude /run \
--exclude /mnt \
--exclude /media \
--exclude /var/cache
# Backup a raw disk image
proxmox-backup-client backup \
disk0.img:/dev/sda
# List existing backups
proxmox-backup-client list
# List files inside a backup
proxmox-backup-client catalog dump host/myserver/2026-02-11T10:00:00Z

Restore Procedures

Terminal window
# Restore a VM from PBS (on PVE node)
qmrestore pbs-primary:backup/vm/100/2026-02-11T10:00:00Z 100
# Restore to a different VMID
qmrestore pbs-primary:backup/vm/100/2026-02-11T10:00:00Z 999
# Restore a container
pct restore 200 pbs-primary:backup/ct/200/2026-02-11T10:00:00Z
# Restore individual files from a host backup
proxmox-backup-client restore host/myserver/2026-02-11T10:00:00Z \
root.pxar /target-restore-dir/
# Mount a backup for file-level browsing
proxmox-backup-client mount host/myserver/2026-02-11T10:00:00Z \
root.pxar /mnt/restore --repository backupuser@pbs@10.10.50.10:ds1
# Browse and copy individual files
ls /mnt/restore/etc/nginx/
cp /mnt/restore/etc/nginx/nginx.conf /etc/nginx/nginx.conf.restored
# Unmount when done
umount /mnt/restore

Deduplication and Compression

How Chunk-Based Deduplication Works

PBS splits file-level backup data into variable-length chunks (64 KiB to 4 MiB) using a rolling-hash chunker similar to rsync's, while VM disk images are cut into fixed 4 MiB chunks that map cleanly onto QEMU dirty bitmaps. Each chunk is identified by its SHA-256 hash and stored exactly once in the chunk store.

Backup Flow:

  VM Disk Image           Chunker             Hash          Chunk Store (.chunks/)
+------------------+    +-----------+      +-------+      +----------+
| Block A (changed)|--->| Chunker   |----->|Hash A'|----->| Store A' |  (new)
| Block B (same)   |--->| (rolling  |----->|Hash B |----->| Exists!  |  (dedup)
| Block C (same)   |--->| hash)     |----->|Hash C |----->| Exists!  |  (dedup)
| Block D (changed)|--->|           |----->|Hash D'|----->| Store D' |  (new)
+------------------+    +-----------+      +-------+      +----------+

Result: Only 2 new chunks stored instead of 4.
Typical dedup ratio: 5:1 to 20:1 for daily VM backups.
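The content-addressable idea behind this can be sketched with plain shell tools. This toy store (illustrative only, not the PBS on-disk format) names each chunk after its SHA-256 digest, so identical content can only ever be stored once:

```shell
#!/bin/sh
# Toy content-addressable chunk store -- illustrative only,
# not the PBS on-disk format.
STORE=$(mktemp -d)

store_chunk() {
    # Name the chunk after the SHA-256 of its content
    hash=$(printf '%s' "$1" | sha256sum | cut -d' ' -f1)
    if [ -e "$STORE/$hash" ]; then
        echo "dedup:  $hash"
    else
        printf '%s' "$1" > "$STORE/$hash"
        echo "stored: $hash"
    fi
}

store_chunk "block A"   # new content -> stored
store_chunk "block B"   # new content -> stored
store_chunk "block A"   # identical content -> deduplicated
unique=$(ls "$STORE" | wc -l)
echo "unique chunks on disk: $unique"
```

Three writes, two chunks on disk: the third call hits an existing digest and stores nothing, which is why repeated daily backups of similar VMs are so cheap.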

Checking Deduplication Efficiency

Terminal window
# View datastore status including dedup ratio
proxmox-backup-manager datastore show ds1
# Output example:
# Name: ds1
# Path: /backup-pool/datastore1
# Chunk count: 1,847,293
# Total data: 12.4 TiB (logical)
# Disk usage: 1.8 TiB (physical)
# Dedup factor: 6.89
# Detailed chunk statistics
proxmox-backup-manager datastore show ds1 --output-format json-pretty
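The dedup factor in that output is simply logical data divided by physical usage; the example numbers can be checked with a one-liner:

```shell
# Dedup factor = logical data / physical disk usage
# (12.4 TiB logical, 1.8 TiB physical, from the example output above)
factor=$(awk 'BEGIN { printf "%.2f", 12.4 / 1.8 }')
echo "dedup factor: $factor"
```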

Compression Settings

PBS uses Zstandard (zstd) compression by default, applied per-chunk before storage:

/etc/vzdump.conf
# On PVE, configure vzdump compression (applies to non-PBS targets;
# backups sent to a PBS storage are always compressed per chunk with zstd)
compress: zstd
zstd: 1 # Number of zstd threads (0 = half of the available cores)

# proxmox-backup-client compresses and deduplicates chunks automatically;
# no compression flags are needed on the client side.

Storage Savings Example

| Scenario | Logical Data | Physical Storage | Dedup Ratio | Savings |
|---|---|---|---|---|
| 10 similar VMs, 7 daily backups | 14 TiB | 1.2 TiB | 11.7:1 | 91% |
| 50 mixed VMs, 30-day retention | 150 TiB | 8.5 TiB | 17.6:1 | 94% |
| 100 VMs + 200 CTs, GFS policy | 400 TiB | 18 TiB | 22.2:1 | 96% |

Client-Side Encryption

PBS encrypts data on the client before transmission. The server never sees unencrypted data.

Encryption Architecture

Client (PVE Node)                        PBS Server
+-------------------+                +-------------------+
| VM Disk Data      |                |                   |
|       |           |                |                   |
|   [Chunker]       |                |                   |
|       |           |                |                   |
|   [Compress]      |                |                   |
|       |           |                |                   |
|  [AES-256-GCM]    |  -- HTTPS -->  | [Store as-is]     |
|  (client key)     |                | (encrypted blobs) |
+-------------------+                +-------------------+

The server CANNOT decrypt the data.
Key loss = Data loss. No exceptions.
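The flow above can be mimicked with openssl as a toy sketch. PBS itself uses AES-256-GCM with its own chunk format; openssl's `enc` tool cannot do GCM, so AES-256-CTR stands in here purely for illustration:

```shell
#!/bin/sh
# Toy client-side encryption: the "server" only ever receives chunk.enc
# and cannot read it without the client's key.
workdir=$(mktemp -d)
key=$(openssl rand -hex 32)   # 256-bit key, generated client-side
iv=$(openssl rand -hex 16)

printf 'chunk payload' > "$workdir/chunk.raw"
openssl enc -aes-256-ctr -K "$key" -iv "$iv" \
    -in "$workdir/chunk.raw" -out "$workdir/chunk.enc"

# Decryption requires the client key; losing it makes chunk.enc garbage.
roundtrip=$(openssl enc -d -aes-256-ctr -K "$key" -iv "$iv" \
    -in "$workdir/chunk.enc")
echo "$roundtrip"
```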

Key Management

Terminal window
# Generate a new encryption key
proxmox-backup-client key create /root/pbs-key.json
# Enter and confirm the key password when prompted
# Show key details
proxmox-backup-client key show /root/pbs-key.json
# Create a paper backup of the key (QR code for printing)
proxmox-backup-client key paperkey /root/pbs-key.json
# CRITICAL: Store key backups in multiple locations:
# 1. Password manager (e.g., Vaultwarden, 1Password)
# 2. Printed paper key in a physical safe
# 3. USB drive in a separate secure location
# 4. HSM or secrets manager for automated recovery
# Change the key password
proxmox-backup-client key change-passphrase /root/pbs-key.json
# Create a master key for emergency recovery
proxmox-backup-client key create-master-key
# This generates master-public.pem and master-private.pem
# Store master-private.pem OFFLINE in a safe

Using the Master Key

Terminal window
# Configure PBS to use a master public key
# Each encrypted backup will include a copy of the encryption key
# encrypted with the master public key
# On PVE: set the master key
pvesm set pbs-primary --master-pubkey /etc/pve/priv/master-public.pem
# Emergency recovery with the master key (when the original key is lost);
# check `proxmox-backup-client key import-with-master-key --help` for the
# exact argument order on your version
proxmox-backup-client key import-with-master-key \
recovered-key.json master-private.pem backup-key-encrypted.blob

Backup Scheduling and Retention Policies

Configuring Backup Jobs on PVE

Terminal window
# /etc/pve/jobs.cfg - Backup job configuration
vzdump: backup-critical
    schedule 02:00
    storage pbs-primary
    mode snapshot
    compress zstd
    vmid 100,101,102,103
    notes-template {{guestname}} - {{cluster}}
    mailnotification failure
    mailto admin@example.com
    enabled 1

vzdump: backup-all
    schedule 04:00
    storage pbs-primary
    mode snapshot
    compress zstd
    all 1
    exclude 9000,9001
    notes-template {{guestname}}
    mailnotification always
    mailto admin@example.com
    enabled 1

GFS Retention Policy (Grandfather-Father-Son)

Terminal window
# Configure GFS retention on the datastore
proxmox-backup-manager datastore update ds1 \
--keep-last 3 \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 6 \
--keep-yearly 2
# Schedule pruning
proxmox-backup-manager datastore update ds1 \
--prune-schedule "06:00"

GFS retention visualization:

Timeline (days ago): 0   1   2   3   4   5   6   7   14  21  28  60  90  120 365 730
                     |   |   |   |   |   |   |   |   |   |   |   |   |   |   |   |
keep-last 3:         *   *   *
keep-daily 7:                    *   *   *   *   *
keep-weekly 4:                                       *   *   *
keep-monthly 6:                                                  *   *   *
keep-yearly 2:                                                               *   *

Total retained: ~25 snapshots covering 2 years of history
Storage cost: minimal due to deduplication
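The selection logic can be modelled as bucketing: sort snapshots newest-first, keep the newest one per period bucket, up to the keep count. A simplified keep-daily-only sketch (the real prune logic layers keep-last/weekly/monthly/yearly the same way):

```shell
#!/bin/bash
# Simplified keep-daily model: newest snapshot per calendar day,
# limited to keep_daily day buckets. Snapshot names are ISO timestamps,
# so the day bucket is everything before the 'T'.
snapshots="2026-02-11T22:00:00Z
2026-02-11T10:00:00Z
2026-02-10T10:00:00Z
2026-02-09T10:00:00Z"
keep_daily=2

kept=$(echo "$snapshots" | sort -r \
    | awk -F'T' '!seen[$1]++' \
    | head -n "$keep_daily")
echo "$kept"
```

With keep-daily 2, the 10:00 snapshot from Feb 11 is pruned (a newer one exists that day) and Feb 9 falls outside the two retained day buckets.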

Manual Pruning

Terminal window
# Dry-run to see what would be pruned
proxmox-backup-client prune --dry-run \
--keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
--repository backupuser@pbs@10.10.50.10:ds1
# Execute pruning
proxmox-backup-client prune \
--keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
--repository backupuser@pbs@10.10.50.10:ds1

Garbage Collection

After pruning, chunks may become orphaned. Garbage collection reclaims this space:

Terminal window
# Run garbage collection manually
proxmox-backup-manager garbage-collection start ds1
# Check GC status
proxmox-backup-manager garbage-collection status ds1
# Schedule automatic GC
proxmox-backup-manager datastore update ds1 \
--gc-schedule "sat 02:00"
# IMPORTANT: GC should run AFTER prune jobs complete.
# Schedule prune at 00:00, GC at 02:00 to allow enough time.
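Conceptually, GC is mark-and-sweep over the chunk store: chunks still referenced by any backup index are marked, everything else is swept. A toy model:

```shell
#!/bin/sh
# Toy mark-and-sweep garbage collection over a chunk store.
STORE=$(mktemp -d)
touch "$STORE/aaa" "$STORE/bbb" "$STORE/ccc"
referenced="aaa ccc"   # chunks still referenced by surviving backup indexes

for f in "$STORE"/*; do
    name=$(basename "$f")
    case " $referenced " in
        *" $name "*) ;;                     # marked: still in use, keep
        *) rm "$f"; echo "swept $name" ;;   # orphan: reclaim the space
    esac
done
remaining=$(ls "$STORE")
echo "$remaining"
```

This is also why GC must run after prune: pruning removes index references, which is what turns chunks into sweepable orphans.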

Remote Sync and Offsite Backup

Configuring Remote Sync

Remote sync replicates a local datastore to a remote PBS instance for offsite protection.

Terminal window
# On the remote PBS: Create a sync user with limited permissions
proxmox-backup-manager user create syncuser@pbs \
--comment "Remote sync user"
proxmox-backup-manager acl update /datastore/offsite-ds1 \
DatastoreBackup --auth-id syncuser@pbs
# On the local PBS: Add the remote server
proxmox-backup-manager remote create offsite-pbs \
--host 203.0.113.50 \
--port 8007 \
--auth-id syncuser@pbs \
--password <password> \
--fingerprint <SHA256-fingerprint>
# Verify connectivity
proxmox-backup-manager remote list

Creating a Sync Job

Terminal window
# Create a sync job
proxmox-backup-manager sync-job create offsite-sync-ds1 \
--store ds1 \
--remote offsite-pbs \
--remote-store offsite-ds1 \
--schedule "20:00" \
--remove-vanished true
# List sync jobs
proxmox-backup-manager sync-job list
# Run a sync job manually
proxmox-backup-manager sync-job run offsite-sync-ds1
# Monitor sync progress via the web UI or:
proxmox-backup-manager task list --typefilter syncjob
Security Architecture:

PULL (recommended): the remote PBS pulls from the local PBS. A
compromised local server cannot reach the remote datastore, so the
offsite copy survives.

PUSH (less secure): the local PBS pushes to the remote. If the local
server is compromised, the attacker has write access to the remote and
can destroy the offsite backups too.
Terminal window
# On the remote PBS: Configure pull sync
# The REMOTE PBS pulls from the LOCAL PBS
proxmox-backup-manager remote create primary-pbs \
--host 10.10.50.10 \
--port 8007 \
--auth-id syncuser@pbs \
--password <password> \
--fingerprint <SHA256-fingerprint>
proxmox-backup-manager sync-job create pull-from-primary \
--store offsite-ds1 \
--remote primary-pbs \
--remote-store ds1 \
--schedule "22:00" \
--remove-vanished false

Bandwidth Limiting

Terminal window
# Limit sync bandwidth to avoid saturating WAN links
# (rates are bytes/s; suffixes such as M for MiB/s are accepted)
proxmox-backup-manager sync-job update offsite-sync-ds1 \
--rate-in 100M \
--rate-out 50M
# Per-interface traffic control (alternative)
tc qdisc add dev ens19 root tbf rate 500mbit burst 32kbit latency 400ms

Tape Backup Support

PBS supports LTO tape drives and libraries for long-term archival and air-gapped storage.

Tape Configuration

Terminal window
# Tape hardware is managed with the proxmox-tape CLI
# List detected tape drives
proxmox-tape drive list
# Configure a standalone tape drive
proxmox-tape drive create lto-drive0 \
--path /dev/nst0
# Configure a tape library (autoloader/changer)
proxmox-tape changer create tape-lib0 \
--path /dev/sch0
# Associate the drive with the library
proxmox-tape drive update lto-drive0 \
--changer tape-lib0 \
--changer-drivenum 0
# Update the media inventory
proxmox-tape inventory

Media Pool Configuration

Terminal window
# Create a media pool for daily backups
proxmox-tape pool create daily-tapes \
--allocation continue \
--retention 90d \
--comment "Daily backup tapes - 90 day retention"
# Create a media pool for yearly archival
proxmox-tape pool create yearly-archive \
--allocation continue \
--retention 2555d \
--comment "Yearly archive - 7 year retention"
# To use hardware encryption, create a tape encryption key and reference
# its fingerprint in the pool's encrypt option

Tape Backup Jobs

Terminal window
# Create a scheduled tape backup job
proxmox-tape backup-job create daily-to-tape \
--store ds1 \
--pool daily-tapes \
--drive lto-drive0 \
--schedule "06:00" \
--latest-only true
# Run a tape backup manually
proxmox-tape backup ds1 daily-tapes --drive lto-drive0
# Restore a media set back into a datastore
proxmox-tape restore <media-set-uuid> ds1 --drive lto-drive0

Tape Rotation Strategy

+-------------------------------------------------------------------+
| Tape Rotation (LTO-8) |
+-------------------------------------------------------------------+
| Day | Tape Label | Pool | Retention | Location |
|---------|------------|----------------|-----------|---------------|
| Mon-Thu | D01-D04 | daily-tapes | 90 days | On-site |
| Friday | W01-W04 | weekly-tapes | 6 months | Off-site |
| Month | M01-M12 | monthly-tapes | 2 years | Off-site vault|
| Year | Y01-Y07 | yearly-archive | 7 years | Secure vault |
+-------------------------------------------------------------------+
LTO-8 capacity: 12 TB native / 30 TB compressed per tape
100 TB backup footprint: ~4-8 tapes per full set
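The per-set tape count follows directly from the capacity arithmetic:

```shell
# Tapes needed for a 100 TB backup footprint on LTO-8
native=$(awk 'BEGIN { printf "%.1f", 100 / 12 }')      # 12 TB native capacity
compressed=$(awk 'BEGIN { printf "%.1f", 100 / 30 }')  # 30 TB at ~2.5:1
echo "native: $native tapes, compressed: $compressed tapes"
```

Between roughly 4 and 9 tapes depending on how compressible the data is, consistent with the estimate above.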

Verification and Consistency Checks

Backup Verification Jobs

Terminal window
# Create a verification job
proxmox-backup-manager verify-job create verify-ds1 \
--store ds1 \
--schedule "sat 08:00" \
--ignore-verified true \
--outdated-after 30
# Run verification manually
proxmox-backup-manager verify-job run verify-ds1
# Verify a whole datastore immediately
proxmox-backup-manager verify ds1
# Check verification task history
proxmox-backup-manager task list

Verify New Backups Immediately

Terminal window
# Enable automatic verification of new backups
proxmox-backup-manager datastore update ds1 --verify-new true
# This verifies every backup immediately after completion
# Impact: Increased I/O during backup window but guaranteed integrity

What Verification Checks

| Check | Description |
|---|---|
| Chunk existence | All referenced chunks exist in the store |
| Chunk integrity | SHA-256 hash matches the stored hash |
| Chunk decryption | Encrypted chunks can be decrypted (if the key is available) |
| Index consistency | Index files reference valid chunk sequences |
| Manifest validation | The backup manifest is complete and valid |
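The chunk existence and integrity rows boil down to one invariant: a chunk's filename must equal the SHA-256 digest of its content. A simplified model of that check:

```shell
#!/bin/sh
# Simplified chunk-integrity check: a chunk is valid iff the SHA-256 of
# its content matches the digest it is stored under.
STORE=$(mktemp -d)
printf 'good chunk' > "$STORE/tmp"
good=$(sha256sum "$STORE/tmp" | cut -d' ' -f1)
mv "$STORE/tmp" "$STORE/$good"
# Simulate bit rot: content that no longer matches its stored name
bad=$(printf 'other chunk' | sha256sum | cut -d' ' -f1)
printf 'corrupted!' > "$STORE/$bad"

result=""
for f in "$STORE"/*; do
    name=$(basename "$f")
    actual=$(sha256sum "$f" | cut -d' ' -f1)
    if [ "$name" = "$actual" ]; then
        result="$result OK:$name"
    else
        result="$result CORRUPT:$name"
    fi
done
echo "$result"
```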

Monitoring and Notifications

Email Notifications

/etc/proxmox-backup/notifications.cfg
sendmail: default-smtp
    mailto admin@example.com
    mailto-user root@pam
    from-address pbs@example.com
    author PBS-Primary
    comment Default notification target

# Or use an SMTP relay
# /etc/proxmox-backup/notifications.cfg
smtp: corp-relay
    mailto admin@example.com
    from-address pbs@example.com
    server smtp.example.com
    port 587
    mode starttls
    username pbs-notifications
    comment Corporate SMTP relay

Datastore Usage Alerts

Terminal window
# Adjust datastore notification behaviour via the API
# (the notify option takes per-task settings: gc, verify, sync)
curl -s -k -X PUT \
-H "Authorization: PBSAPIToken=monitor@pbs!token:xxxx-xxxx" \
"https://10.10.50.10:8007/api2/json/config/datastore/ds1" \
-d 'notify=gc=always,verify=always,sync=error'

Monitoring with Prometheus and Grafana

Terminal window
# PBS exposes status data at /api2/json/status/metrics; note that it
# returns JSON, not the Prometheus text format, so most setups scrape it
# through a community PBS exporter instead of directly.
# Configure a Prometheus scrape job
# /etc/prometheus/prometheus.yml (on your monitoring server)
scrape_configs:
  - job_name: 'pbs'
    scheme: https
    tls_config:
      insecure_skip_verify: true
    bearer_token: '<API-token-value>'
    metrics_path: '/api2/json/status/metrics'
    static_configs:
      - targets: ['10.10.50.10:8007']
        labels:
          instance: 'pbs-primary'

Key Metrics to Monitor

| Metric | Warning Threshold | Critical Threshold |
|---|---|---|
| Datastore usage | 75% | 90% |
| Backup job failures | Any failure | 2+ consecutive |
| Verification failures | Any failure | Immediate alert |
| Sync job delay | > 24 hours late | > 48 hours late |
| GC duration | > 4 hours | > 12 hours |
| Chunk store growth rate | > 5% per day | > 10% per day |

Custom Monitoring Script

/usr/local/bin/pbs-monitor.sh
#!/bin/bash
# Monitor PBS health and alert on issues
PBS_HOST="https://localhost:8007"
API_TOKEN="monitor@pbs!token:xxxx-xxxx-xxxx"
ALERT_EMAIL="admin@example.com"

# Check datastore usage
usage=$(curl -s -k \
    -H "Authorization: PBSAPIToken=${API_TOKEN}" \
    "${PBS_HOST}/api2/json/status/datastore-usage" | \
    jq -r '.data[] | select(.store == "ds1") | .used * 100 / .total' | \
    cut -d. -f1)

if [ "$usage" -gt 90 ]; then
    echo "CRITICAL: PBS datastore ds1 at ${usage}% capacity" | \
        mail -s "[PBS] Datastore Critical" "$ALERT_EMAIL"
elif [ "$usage" -gt 75 ]; then
    echo "WARNING: PBS datastore ds1 at ${usage}% capacity" | \
        mail -s "[PBS] Datastore Warning" "$ALERT_EMAIL"
fi

# Check for failed tasks in the last 24 hours
failed=$(curl -s -k \
    -H "Authorization: PBSAPIToken=${API_TOKEN}" \
    "${PBS_HOST}/api2/json/nodes/localhost/tasks" | \
    jq '[.data[] | select(.status == "error" and
        (.starttime > (now - 86400)))] | length')

if [ "$failed" -gt 0 ]; then
    echo "ALERT: ${failed} failed PBS tasks in last 24 hours" | \
        mail -s "[PBS] Failed Tasks Alert" "$ALERT_EMAIL"
fi
Terminal window
# Add to crontab without clobbering existing entries
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/local/bin/pbs-monitor.sh") | crontab -

REST API Usage

PBS provides a comprehensive REST API for automation and integration.

Authentication

Terminal window
# Get an authentication ticket
curl -s -k -X POST \
"https://10.10.50.10:8007/api2/json/access/ticket" \
-d 'username=admin@pbs' \
-d 'password=<password>' | jq .
# Use API tokens (preferred for automation)
export PBS_API="https://10.10.50.10:8007/api2/json"
export PBS_TOKEN="PBSAPIToken=automation@pbs!cicd:xxxx-xxxx-xxxx"
# List datastores
curl -s -k -H "Authorization: ${PBS_TOKEN}" \
"${PBS_API}/config/datastore" | jq .
# List backups in a datastore
curl -s -k -H "Authorization: ${PBS_TOKEN}" \
"${PBS_API}/admin/datastore/ds1/snapshots" | jq .
# Get datastore status
curl -s -k -H "Authorization: ${PBS_TOKEN}" \
"${PBS_API}/admin/datastore/ds1/status" | jq .

Common API Operations

Terminal window
# Start garbage collection
curl -s -k -X POST \
-H "Authorization: ${PBS_TOKEN}" \
"${PBS_API}/admin/datastore/ds1/gc" | jq .
# Start verification
curl -s -k -X POST \
-H "Authorization: ${PBS_TOKEN}" \
"${PBS_API}/admin/datastore/ds1/verify" | jq .
# Prune a backup group
curl -s -k -X POST \
-H "Authorization: ${PBS_TOKEN}" \
"${PBS_API}/admin/datastore/ds1/prune" \
-d 'backup-type=vm' \
-d 'backup-id=100' \
-d 'keep-daily=7' \
-d 'keep-weekly=4' | jq .
# Delete a specific snapshot
curl -s -k -X DELETE \
-H "Authorization: ${PBS_TOKEN}" \
"${PBS_API}/admin/datastore/ds1/snapshots?backup-type=vm&backup-id=100&backup-time=1707638400" | jq .
# List running tasks
curl -s -k -H "Authorization: ${PBS_TOKEN}" \
"${PBS_API}/nodes/localhost/tasks?running=true" | jq .
# Get task log
curl -s -k -H "Authorization: ${PBS_TOKEN}" \
"${PBS_API}/nodes/localhost/tasks/<UPID>/log" | jq .

Automating Backup Reports

/usr/local/bin/pbs-report.sh
#!/bin/bash
# Generate a daily backup report
PBS_API="https://localhost:8007/api2/json"
PBS_TOKEN="PBSAPIToken=report@pbs!reporter:xxxx-xxxx-xxxx"

echo "=== PBS Daily Backup Report ==="
echo "Generated: $(date)"
echo ""

# Datastore status
echo "--- Datastore Status ---"
curl -s -k -H "Authorization: ${PBS_TOKEN}" \
    "${PBS_API}/status/datastore-usage" | \
    jq -r '.data[] | "Store: \(.store) | Used: \(.used | . / 1073741824 | floor)GB / \(.total | . / 1073741824 | floor)GB | Dedup: \((.dedup // 1) | . * 100 | floor / 100)"'
echo ""

# Recent backup jobs (last 24h)
echo "--- Backup Jobs (Last 24h) ---"
curl -s -k -H "Authorization: ${PBS_TOKEN}" \
    "${PBS_API}/nodes/localhost/tasks?typefilter=backup&limit=50" | \
    jq -r '.data[] | select(.starttime > (now - 86400)) |
        "\(.starttime | strftime("%Y-%m-%d %H:%M")) | \(.status) | \(.worker_id)"'
echo ""

# Failed tasks
echo "--- Failed Tasks ---"
curl -s -k -H "Authorization: ${PBS_TOKEN}" \
    "${PBS_API}/nodes/localhost/tasks?statusfilter=error&limit=20" | \
    jq -r '.data[] | select(.starttime > (now - 86400)) |
        "\(.starttime | strftime("%Y-%m-%d %H:%M")) | \(.worker_type) | \(.worker_id) | \(.status)"'

Disaster Recovery Procedures

Full PBS Server Recovery

Terminal window
# Scenario: PBS server hardware failure, datastores on separate storage
# 1. Install a new PBS server (same version or newer)
# 2. Mount the original datastore filesystem
mount /dev/sda1 /mnt/recovered-datastore
# 3. Register the datastore
proxmox-backup-manager datastore create ds1 /mnt/recovered-datastore
# 4. PBS will automatically recognize the existing data
# 5. Verify datastore integrity
proxmox-backup-manager verify ds1
# 6. Restore VMs as needed
# On PVE: add the recovered PBS and restore

Recovering from Offsite Sync

Terminal window
# Scenario: Primary site destroyed, need to restore from offsite PBS
# 1. On the offsite PBS, verify backup availability
proxmox-backup-client list \
--repository syncuser@pbs@offsite-pbs:offsite-ds1
# 2. Set up a new PVE environment
# 3. Add the offsite PBS as storage
pvesm add pbs pbs-recovery \
--server offsite-pbs \
--datastore offsite-ds1 \
--username syncuser@pbs \
--password <password> \
--fingerprint <fingerprint>
# 4. If backups are encrypted, ensure you have the encryption key
pvesm set pbs-recovery --encryption-key /path/to/recovered-key.json
# 5. Restore critical VMs first (use the newest snapshot time from step 1)
qmrestore pbs-recovery:backup/vm/100/<snapshot-time> 100
qmrestore pbs-recovery:backup/vm/101/<snapshot-time> 101
# 6. Restore containers
pct restore 200 pbs-recovery:backup/ct/200/<snapshot-time>

Recovering from Tape

Terminal window
# 1. Load the required tape into the drive
proxmox-tape load-media D01 --drive lto-drive0
# 2. List the tape's contents
proxmox-tape media content
# 3. Restore the media set to a datastore
proxmox-tape restore <media-set-uuid> ds1 --drive lto-drive0
# 4. Proceed with normal VM/CT restore from the datastore

Disaster Recovery Testing Checklist

+-------------------------------------------------------------------+
| DR Test Procedure (Quarterly) |
+-------------------------------------------------------------------+
| Step | Action | Pass/Fail | Notes |
|------|-------------------------------------|-----------|----------|
| 1 | Verify offsite sync is current | | |
| 2 | Test encryption key accessibility | | |
| 3 | Restore 1 VM from local PBS | | |
| 4 | Restore 1 VM from offsite PBS | | |
| 5 | Restore 1 CT from local PBS | | |
| 6 | Verify restored VM functionality | | |
| 7 | Test file-level restore | | |
| 8 | Test tape restore (if applicable) | | |
| 9 | Measure RTO (actual vs. target) | | |
| 10 | Document any issues found | | |
+-------------------------------------------------------------------+
| Target RTO: ______ | Actual RTO: ______ | Test Date: ________ |
+-------------------------------------------------------------------+

Performance Tuning

ZFS Tuning for PBS Datastores

Terminal window
# Optimal ZFS settings for PBS chunk stores
zfs set compression=off backup-pool/datastore1    # PBS compresses chunks itself
zfs set atime=off backup-pool/datastore1          # Reduce metadata writes
zfs set recordsize=64k backup-pool/datastore1     # Smaller records for chunk files
zfs set primarycache=all backup-pool/datastore1   # Use ARC for reads
zfs set secondarycache=all backup-pool/datastore1 # Use L2ARC if available
# sync=disabled trades crash consistency for speed: in-flight writes are
# lost on power failure or kernel panic, UPS or not. Use with care.
zfs set sync=disabled backup-pool/datastore1
# Add a mirrored SSD SLOG (only helps synchronous writes; pointless
# if sync=disabled is set)
zpool add backup-pool log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
# Add SSD as L2ARC read cache
zpool add backup-pool cache /dev/nvme2n1
# Monitor ZFS ARC hit rate
arc_summary | grep "ARC size"

Network Tuning

/etc/sysctl.d/99-pbs-network.conf
# Increase TCP buffer sizes for high-throughput backup traffic
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 1048576 67108864
net.ipv4.tcp_wmem = 4096 1048576 67108864
# Enable TCP window scaling
net.ipv4.tcp_window_scaling = 1
# Increase socket backlog
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 5000
# Note: the remaining lines are shell commands, not sysctl settings.
# Enable jumbo frames on the backup network interface (switch ports must match):
ip link set ens19 mtu 9000
# Apply the sysctl settings above:
sysctl --system

I/O Scheduler and Disk Tuning

Terminal window
# For HDD-based datastores: use mq-deadline
echo mq-deadline > /sys/block/sda/queue/scheduler
# For SSD/NVMe: use none (passthrough)
echo none > /sys/block/nvme0n1/queue/scheduler
# Increase readahead to speed up sequential reads (restore/verify)
blockdev --setra 8192 /dev/sda
# Persist via udev rules
# /etc/udev/rules.d/60-pbs-io.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="mq-deadline"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/read_ahead_kb}="8192"
ACTION=="add|change", KERNEL=="nvme*", ATTR{queue/scheduler}="none"

Parallel Backup Workers

Terminal window
# Limit concurrent backup jobs to prevent I/O saturation
# Adjust based on your storage backend performance
# /etc/vzdump.conf (on PVE nodes)
maxfiles: 1
bwlimit: 0
ionice: 7
lockwait: 5
stopwait: 10
tmpdir: /var/tmp
performance: max-workers=4 # QEMU backup I/O worker threads (PVE 7.3+)

Performance Benchmarks

| Configuration       | Backup Speed   | Restore Speed  | Notes               |
|---------------------|----------------|----------------|---------------------|
| 1G NIC, HDD RAID6   | 80-100 MB/s    | 100-120 MB/s   | Entry level         |
| 10G NIC, HDD RAID10 | 300-500 MB/s   | 400-600 MB/s   | Standard production |
| 10G NIC, SSD array  | 800-1000 MB/s  | 900-1100 MB/s  | High performance    |
| 25G NIC, NVMe pool  | 1500-2000 MB/s | 1800-2200 MB/s | Maximum throughput  |
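These figures are ballpark numbers; PBS ships a built-in benchmark that measures TLS throughput, SHA-256 chunk hashing, and compression speed on your own hardware. A minimal invocation, assuming the repository string used in earlier examples:

```shell
# Benchmark TLS upload speed plus local hashing/compression/encryption rates.
# The repository is an assumption from earlier examples; omit --repository
# to benchmark only the CPU-bound parts without contacting a server.
proxmox-backup-client benchmark --repository backupuser@pbs@10.10.50.10:ds1
```

Comparing the TLS figure against the table above quickly shows whether the network or the datastore is the bottleneck.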

Production Best Practices

Security Hardening

Terminal window
# 1. Use API tokens instead of passwords
proxmox-backup-manager user generate-token backupuser@pbs automation
# 2. Restrict API token permissions (least privilege)
proxmox-backup-manager acl update /datastore/ds1 \
DatastoreBackup --auth-id 'backupuser@pbs!automation' # quote the '!' for the shell
# 3. Enable two-factor authentication
proxmox-backup-manager user tfa add backupuser@pbs totp \
--description "TOTP for backup admin"
# 4. Firewall: Allow only backup traffic on dedicated VLAN
iptables -A INPUT -i ens19 -p tcp --dport 8007 -s 10.10.50.0/24 -j ACCEPT
iptables -A INPUT -i ens19 -p tcp --dport 8007 -j DROP
# 5. Disable root login via SSH
sed -i 's/^PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart sshd
# 6. Regular security updates
apt update && apt upgrade -y
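On the PVE side, the token from step 1 can then be used in the PBS storage definition instead of a user password. A sketch — the storage name, server address, token secret, and fingerprint are placeholders for your values:

```shell
# Add the PBS datastore on a PVE node, authenticating with the API token.
# The token secret is printed exactly once by 'user generate-token' -- store it safely.
pvesm add pbs pbs-primary \
  --server 10.10.50.10 \
  --datastore ds1 \
  --username 'backupuser@pbs!automation' \
  --password 'TOKEN-SECRET-HERE' \
  --fingerprint 'SERVER-CERT-FINGERPRINT'
```

Combined with the restricted ACL above, a leaked token can only write backups, not delete or read them.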

Backup Strategy Design

+-------------------------------------------------------------------+
| 3-2-1-1-0 Backup Strategy with PBS |
+-------------------------------------------------------------------+
| |
| 3 copies of data: |
| [Production] --> [PBS Local] --> [PBS Offsite] --> [Tape] |
| |
| 2 different media types: |
| [Disk (PBS)] + [Tape (LTO)] |
| |
| 1 offsite copy: |
| [Remote PBS in secondary datacenter] |
| |
| 1 air-gapped copy: |
| [Tape stored in secure vault] |
| |
| 0 errors: |
| [Automated verification + DR testing] |
| |
+-------------------------------------------------------------------+
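The offsite leg of this strategy maps to a PBS remote plus a scheduled sync job. A hedged sketch of the commands run on the offsite PBS, which pulls from the primary — the remote name, addresses, datastore names, and credentials are illustrative placeholders:

```shell
# On the OFFSITE PBS: register the primary as a remote, then pull nightly.
proxmox-backup-manager remote create primary \
  --host 10.10.50.10 \
  --auth-id 'sync@pbs' \
  --password 'CHANGE-ME'
# Sync job: pull new snapshots from the primary's ds1 into the local ds1
proxmox-backup-manager sync-job create offsite-pull \
  --store ds1 \
  --remote primary \
  --remote-store ds1 \
  --schedule 'daily'
```

Pulling (rather than pushing) means the primary needs no credentials for the offsite site, which limits blast radius if the primary is compromised.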

Operational Procedures

/usr/local/bin/pbs-daily-check.sh
#!/bin/bash
# Daily health check script
echo "=== PBS Daily Health Check ==="
echo "Date: $(date)"
echo ""
# Check service status
echo "--- Service Status ---"
systemctl is-active --quiet proxmox-backup-proxy && echo "Proxy: OK" || echo "Proxy: FAILED"
systemctl is-active --quiet proxmox-backup && echo "Daemon: OK" || echo "Daemon: FAILED"
echo ""
# Check datastore usage
echo "--- Datastore Usage ---"
proxmox-backup-manager datastore list --output-format text
echo ""
# Check for failed tasks
echo "--- Recent Failures ---"
proxmox-backup-manager task list --limit 20 --output-format text | grep -i error
echo ""
# Check ZFS pool health
echo "--- ZFS Pool Status ---"
zpool status -x
echo ""
# Check disk health
echo "--- Disk Health ---"
smartctl -H /dev/sda | grep "SMART overall-health"
smartctl -H /dev/sdb | grep "SMART overall-health"
echo ""
echo "=== Check Complete ==="
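To run this check unattended, the script can be scheduled via cron. A minimal fragment, assuming a working local MTA for the mail step:

```shell
# /etc/cron.d/pbs-daily-check -- run every morning at 06:00, mail output to root
0 6 * * * root /usr/local/bin/pbs-daily-check.sh 2>&1 | mail -s "PBS daily check" root
```

Remember to make the script executable (`chmod +x /usr/local/bin/pbs-daily-check.sh`) before the first run.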

Capacity Planning

Terminal window
# Estimate storage requirements
# Formula: (Total VM/CT data * retention multiplier) / dedup ratio
# Example calculation:
# - 50 VMs averaging 200 GB each = 10 TB total data
# - Daily backups with 30-day retention = 300 TB logical
# - Expected dedup ratio for similar VMs: 15:1
# - Required storage: 300 TB / 15 = 20 TB physical
# - Add 30% headroom: 26 TB
# Monitor growth trends
proxmox-backup-manager datastore show ds1 | grep -E "used|total|dedup"
# Set up alerts for capacity thresholds
# 75% = Plan expansion
# 85% = Order hardware
# 90% = Emergency -- reduce retention or add storage immediately
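The example calculation above can be turned into a small reusable script; pure shell arithmetic, with the input numbers taken from the assumptions in the comments:

```shell
# Capacity estimate: (VMs * avg size * retention days) / dedup ratio + 30% headroom
vms=50          # number of VMs
avg_gb=200      # average VM size in GB
retention=30    # daily backups kept
dedup=15        # expected dedup ratio (15:1)

logical=$((vms * avg_gb * retention))   # 300000 GB logical
physical=$((logical / dedup))           # 20000 GB physical
headroom=$((physical * 130 / 100))      # 26000 GB incl. 30% headroom
echo "Provision ~${headroom} GB (~$((headroom / 1000)) TB) of datastore capacity"
```

Re-run the script with your own fleet numbers; real dedup ratios vary widely, so validate against `garbage-collection status` after the first full retention cycle.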

Upgrade Procedures

Terminal window
# Always snapshot the PBS VM/disk before upgrading
# If PBS runs as a VM:
qm snapshot <PBS-VMID> pre-upgrade --description "Before PBS upgrade"
# Upgrade PBS
apt update
apt list --upgradable | grep proxmox
apt upgrade -y
# Verify services restart correctly
systemctl status proxmox-backup-proxy
systemctl status proxmox-backup
# Verify datastore accessibility
proxmox-backup-manager datastore list
# Run a test backup after upgrade
vzdump <test-vm-id> --storage pbs-primary --mode snapshot
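If the upgrade misbehaves and PBS runs as a VM, the snapshot taken above gives a fast rollback path (a sketch; `<PBS-VMID>` as in the snapshot step):

```shell
# Roll the PBS VM back to the pre-upgrade snapshot, then restart it
qm stop <PBS-VMID>
qm rollback <PBS-VMID> pre-upgrade
qm start <PBS-VMID>
```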

High Availability Considerations

PBS itself does not cluster, but you can achieve resilience through:

+-------------------------------------------------------------------+
| PBS High Availability |
+-------------------------------------------------------------------+
| |
| Option 1: Active-Passive with Shared Storage |
| +--------+ +---------+ +--------+ |
| | PBS-01 |<--->| Shared |<--->| PBS-02 | |
| | Active | | Storage | | Standby| |
| +--------+ +---------+ +--------+ |
| Failover via Pacemaker/Corosync |
| |
| Option 2: Active-Active with Cross-Sync |
| +--------+ Sync +--------+ |
| | PBS-01 |<------------->| PBS-02 | |
| | Site A | | Site B | |
| +--------+ +--------+ |
| PVE nodes backup to nearest PBS instance |
| Each PBS syncs to the other |
| |
| Option 3: PBS on Proxmox VE HA |
| Run PBS as a VM inside PVE cluster with HA enabled |
| Automatic failover if host node fails |
| |
+-------------------------------------------------------------------+
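Option 3 needs only the standard PVE HA stack. A minimal sketch, assuming the PBS VM has VMID 200 and the cluster already has HA quorum:

```shell
# Register the PBS VM as an HA resource so it restarts/relocates on node failure
ha-manager add vm:200 --state started --max_restart 2 --max_relocate 2
# Confirm the resource is now managed
ha-manager status
```

Note the chicken-and-egg caveat: a PBS VM protected this way must not be stored on, or back up to, storage that disappears with the failed node.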

Documentation Checklist

Maintain these documents for your PBS deployment:

| Document       | Contents                                      | Update Frequency |
|----------------|-----------------------------------------------|------------------|
| Backup Policy  | RPO/RTO targets, retention rules, scope       | Annually         |
| Runbook        | Daily procedures, troubleshooting steps       | Quarterly        |
| DR Plan        | Recovery procedures, contact info, priorities | Semi-annually    |
| Key Escrow Log | Encryption key locations, access procedures   | On change        |
| Capacity Plan  | Growth projections, hardware refresh timeline | Quarterly        |
| Change Log     | All PBS configuration changes                 | On change        |
| Test Results   | DR test outcomes, lessons learned             | After each test  |

Common Troubleshooting

Terminal window
# Backup job stuck or slow
proxmox-backup-manager task list --running
journalctl -u proxmox-backup -f
# Datastore shows "locked"
proxmox-backup-manager datastore show ds1
# Wait for running tasks or remove stale locks:
# (Only if confirmed no active operations)
rm /backup-pool/datastore1/.lock
# GC taking too long
# Check for large number of orphaned chunks
proxmox-backup-manager garbage-collection status ds1
# Consider running GC more frequently to prevent accumulation
# Network timeouts during backup
# Check MTU consistency across path
ping -M do -s 8972 10.10.50.10
# Verify firewall allows port 8007
# Verification errors
# Re-run verification server-side across the datastore
proxmox-backup-manager verify ds1
# Then inspect the verify task log for the failing snapshot
journalctl -u proxmox-backup | grep -i verif
# Certificate issues
proxmox-backup-manager cert info
proxmox-backup-manager cert update

Summary

Proxmox Backup Server delivers enterprise-grade data protection without licensing costs. Its chunk-based deduplication, client-side encryption, and native PVE integration make it the natural choice for Proxmox environments. Combined with remote sync, tape support, and a comprehensive REST API, PBS provides the tools needed for a robust 3-2-1-1-0 backup strategy.

Key takeaways:

- Chunk-level deduplication and client-side encryption are built in, with no licensing fees
- Follow the 3-2-1-1-0 strategy: local PBS, offsite sync, air-gapped tape, zero unverified backups
- Test disaster recovery quarterly and measure actual RTO against targets
- Tune ZFS, network, and I/O scheduler settings to match your storage tier
- Monitor datastore capacity and plan expansion well before 85% utilization

Looking to implement Proxmox Backup Server in your enterprise environment? At chavkov.com, we offer hands-on training covering PBS deployment, advanced configuration, disaster recovery planning, and production best practices. Our infrastructure training programs help teams build reliable, cost-effective backup solutions with confidence. Get in touch to discuss your data protection requirements.

