Vladimir Chavkov

K3s: Lightweight Kubernetes for Production

K3s is a highly available, certified Kubernetes distribution designed for resource-constrained environments, edge computing, and IoT. Built by Rancher Labs (now part of SUSE), K3s is packaged as a single binary under 100MB, making it well suited to edge sites, ARM devices, and lean production clusters alike.

What is K3s?

K3s is a lightweight Kubernetes distribution that removes optional features and uses lightweight components to reduce memory and disk footprint while maintaining full Kubernetes API compatibility.

Key Features

  1. Single Binary: All components in one < 100MB binary
  2. Low Resource Usage: Runs on 512MB RAM minimum
  3. ARM Support: Native ARM64 and ARMv7 support
  4. Edge Optimized: Perfect for edge/IoT deployments
  5. Easy Installation: One-line installer
  6. Built-in Components: Includes Traefik, CoreDNS, Flannel
  7. Simplified Operations: Single process for server and agent
  8. SQLite Backend: Default embedded database (etcd optional)
  9. Air-gapped Support: Fully offline installation
  10. CNCF Certified: 100% upstream Kubernetes compatibility

Architecture

K3s Server Node (Control Plane)
┌─────────────────────────────────────────────────┐
│ K3s Server Process (Single Binary)              │
│                                                 │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ API Server  │ │  Scheduler  │ │ Controller  │ │
│ │             │ │             │ │   Manager   │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│                                                 │
│ ┌─────────────────────────────────────────────┐ │
│ │ Storage Backend                             │ │
│ │ • SQLite (default)                          │ │
│ │ • etcd (HA option)                          │ │
│ │ • MySQL/PostgreSQL (external)               │ │
│ └─────────────────────────────────────────────┘ │
│                                                 │
│ ┌─────────────────────────────────────────────┐ │
│ │ Built-in Components                         │ │
│ │ • Flannel CNI (default)                     │ │
│ │ • CoreDNS                                   │ │
│ │ • Traefik Ingress                           │ │
│ │ • Service LB                                │ │
│ │ • Local Storage Provisioner                 │ │
│ │ • Helm Controller                           │ │
│ └─────────────────────────────────────────────┘ │
│                                                 │
│ ┌─────────────────────────────────────────────┐ │
│ │ Kubelet + Container Runtime                 │ │
│ │ • containerd (built-in)                     │ │
│ └─────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────┘

K3s Agent Nodes (Workers)
┌─────────────────────────────────────────────────┐
│ K3s Agent Process (Single Binary)               │
│                                                 │
│ ┌─────────────────────────────────────────────┐ │
│ │ Kubelet + Container Runtime                 │ │
│ │ • containerd (built-in)                     │ │
│ └─────────────────────────────────────────────┘ │
│                                                 │
│ ┌─────────────────────────────────────────────┐ │
│ │ Flannel CNI                                 │ │
│ └─────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────┘

Installation

Single Server Setup

Terminal window
# Install K3s server (quickstart)
curl -sfL https://get.k3s.io | sh -
# Check status
sudo systemctl status k3s
# Get kubeconfig
sudo cat /etc/rancher/k3s/k3s.yaml
# Get node token for agents
sudo cat /var/lib/rancher/k3s/server/node-token
# Use kubectl
sudo k3s kubectl get nodes
# Or copy kubeconfig
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get nodes
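Note that the generated k3s.yaml points at https://127.0.0.1:6443, which only resolves on the server itself. When copying the kubeconfig to a workstation, rewrite the address first — a minimal sketch (the IP 10.0.1.11 and the /tmp demo path are placeholders):

```shell
# Rewrite the loopback API address in a K3s kubeconfig so it works remotely.
rewrite_kubeconfig() {
  file="$1"; ip="$2"
  sed -i "s|https://127.0.0.1:6443|https://${ip}:6443|" "$file"
}

# Demo on a stripped-down kubeconfig fragment:
cat > /tmp/k3s-demo.yaml <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

rewrite_kubeconfig /tmp/k3s-demo.yaml 10.0.1.11
grep 'server:' /tmp/k3s-demo.yaml   # now shows https://10.0.1.11:6443
```

The server's TLS certificate must also cover that address, which is what the --tls-san flag shown below is for.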

Custom Installation Options

Terminal window
# Install with specific options
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server \
--write-kubeconfig-mode=644 \
--disable traefik \
--disable servicelb \
--node-name k3s-master-1 \
--cluster-cidr=10.42.0.0/16 \
--service-cidr=10.43.0.0/16 \
--cluster-dns=10.43.0.10 \
--tls-san=k3s.example.com" sh -
# Install specific version
curl -sfL https://get.k3s.io | \
INSTALL_K3S_VERSION=v1.28.5+k3s1 sh -

Add Agent Nodes

Terminal window
# On agent nodes
curl -sfL https://get.k3s.io | \
K3S_URL=https://k3s-server:6443 \
K3S_TOKEN=<node-token> sh -
# With custom options
curl -sfL https://get.k3s.io | \
K3S_URL=https://k3s-server:6443 \
K3S_TOKEN=<node-token> \
INSTALL_K3S_EXEC="agent \
--node-name k3s-worker-1 \
--node-label environment=production \
--node-taint workload=heavy:NoSchedule" sh -

High Availability Setup

Terminal window
# First server (initializes cluster)
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server \
--cluster-init \
--write-kubeconfig-mode=644 \
--tls-san=k3s-lb.example.com \
--node-name k3s-server-1" sh -
# Get token
sudo cat /var/lib/rancher/k3s/server/node-token
# Additional servers (join cluster)
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server \
--server https://k3s-server-1:6443 \
--token <node-token> \
--write-kubeconfig-mode=644 \
--tls-san=k3s-lb.example.com \
--node-name k3s-server-2" sh -
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server \
--server https://k3s-server-1:6443 \
--token <node-token> \
--write-kubeconfig-mode=644 \
--tls-san=k3s-lb.example.com \
--node-name k3s-server-3" sh -
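Embedded etcd needs a quorum of floor(N/2)+1 healthy servers, which is why an odd server count is recommended — an even count adds no extra failure tolerance. The arithmetic can be sketched as:

```shell
# etcd quorum math: a cluster of N servers needs floor(N/2)+1 healthy
# members, so it tolerates N minus quorum failures.
tolerated_failures() {
  n="$1"
  quorum=$(( n / 2 + 1 ))
  echo $(( n - quorum ))
}

tolerated_failures 3   # prints 1
tolerated_failures 4   # prints 1 (a fourth server buys nothing)
tolerated_failures 5   # prints 2
```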

External Database (MySQL/PostgreSQL)

Terminal window
# Prepare the database (run these statements in a MySQL shell)
CREATE DATABASE k3s;
CREATE USER 'k3s'@'%' IDENTIFIED BY 'secretpassword';
GRANT ALL ON k3s.* TO 'k3s'@'%';
FLUSH PRIVILEGES;
# Install K3s with external DB
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server \
--datastore-endpoint='mysql://k3s:secretpassword@tcp(mysql.example.com:3306)/k3s' \
--write-kubeconfig-mode=644 \
--tls-san=k3s-lb.example.com" sh -
# For PostgreSQL
# --datastore-endpoint='postgres://k3s:secretpassword@postgres.example.com:5432/k3s'
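The datastore endpoint is just a DSN string; a small helper to assemble it (a hypothetical convenience — and note that reserved characters in the password must be URL-encoded before they go into the URI):

```shell
# Assemble a MySQL --datastore-endpoint value from its parts.
mysql_endpoint() {
  user="$1"; pass="$2"; host="$3"; port="$4"; db="$5"
  echo "mysql://${user}:${pass}@tcp(${host}:${port})/${db}"
}

mysql_endpoint k3s secretpassword mysql.example.com 3306 k3s
# prints mysql://k3s:secretpassword@tcp(mysql.example.com:3306)/k3s
```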

Load Balancer Configuration

# HAProxy configuration
frontend k3s_frontend
    bind *:6443
    mode tcp
    option tcplog
    default_backend k3s_backend

backend k3s_backend
    mode tcp
    balance roundrobin
    option tcp-check
    server k3s-server-1 10.0.1.11:6443 check fall 3 rise 2
    server k3s-server-2 10.0.1.12:6443 check fall 3 rise 2
    server k3s-server-3 10.0.1.13:6443 check fall 3 rise 2

# nginx configuration (the stream block sits at the top level of nginx.conf, not inside http)
stream {
    upstream k3s_servers {
        least_conn;
        server 10.0.1.11:6443 max_fails=3 fail_timeout=5s;
        server 10.0.1.12:6443 max_fails=3 fail_timeout=5s;
        server 10.0.1.13:6443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 6443;
        proxy_pass k3s_servers;
    }
}

Configuration

Server Config File

/etc/rancher/k3s/config.yaml
write-kubeconfig-mode: "0644"
tls-san:
  - "k3s.example.com"
  - "k3s-lb.example.com"
# Disable built-in components
disable:
  - traefik
  - servicelb
  - local-storage
# Network configuration
cluster-cidr: "10.42.0.0/16"
service-cidr: "10.43.0.0/16"
cluster-dns: "10.43.0.10"
# Node configuration
node-name: "k3s-server-1"
node-label:
  - "environment=production"
  - "zone=us-east-1a"
# Security
protect-kernel-defaults: true
secrets-encryption: true
# etcd configuration
etcd-snapshot-schedule-cron: "0 */6 * * *"
etcd-snapshot-retention: 10
etcd-s3: true
etcd-s3-bucket: "k3s-backups"
etcd-s3-region: "us-east-1"
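Every k3s CLI flag maps to a config.yaml key of the same name, minus the leading dashes (repeatable flags become YAML lists). A rough sketch of that translation, covering only simple --key=value flags:

```shell
# Turn "--flag=value" CLI arguments into "flag: value" config.yaml lines.
# Simplified: does not group repeatable flags into YAML lists.
flags_to_config() {
  for arg in "$@"; do
    key="${arg#--}"                  # strip the leading --
    echo "${key%%=*}: ${key#*=}"     # split on the first =
  done
}

flags_to_config --write-kubeconfig-mode=644 --cluster-cidr=10.42.0.0/16
# prints:
#   write-kubeconfig-mode: 644
#   cluster-cidr: 10.42.0.0/16
```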

Agent Config File

# /etc/rancher/k3s/config.yaml (agent)
server: "https://k3s-lb.example.com:6443"
token: "<node-token>"
node-name: "k3s-worker-1"
node-label:
  - "workload-type=general"
  - "environment=production"
node-taint:
  - "workload=gpu:NoSchedule"
# Kubelet configuration
kubelet-arg:
  - "max-pods=250"
  - "eviction-hard=memory.available<500Mi"
  - "kube-reserved=cpu=200m,memory=512Mi"

Network Configuration

Alternative CNI (Calico)

Terminal window
# Install K3s without Flannel
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server \
--flannel-backend=none \
--disable-network-policy" sh -
# Install Calico
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
# Configure Calico
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth0

MetalLB for LoadBalancer

Terminal window
# Install MetalLB
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
# Configure IP pool
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
EOF
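When sizing the pool, it helps to know how many addresses a range actually contains; a quick sketch, assuming both endpoints sit in the same /24 (a real tool would do full 32-bit arithmetic):

```shell
# Count addresses in a MetalLB-style range like A.B.C.x-A.B.C.y.
# Simplification: both ends must be in the same /24.
pool_size() {
  start="${1%-*}"; end="${1#*-}"
  echo $(( ${end##*.} - ${start##*.} + 1 ))
}

pool_size 192.168.1.240-192.168.1.250   # prints 11
```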

Storage

Longhorn Distributed Storage

Terminal window
# Install Longhorn
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml
# Check installation
kubectl -n longhorn-system get pods
# Access UI
kubectl -n longhorn-system port-forward service/longhorn-frontend 8080:80
# Create StorageClass
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-retain
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  fsType: "ext4"
EOF

Local Path Provisioner

# Enhanced local-path configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/var/lib/rancher/k3s/storage"]
        },
        {
          "node": "k3s-worker-ssd",
          "paths": ["/mnt/ssd/k3s-storage"]
        }
      ]
    }

Ingress

Traefik Configuration

/var/lib/rancher/k3s/server/manifests/traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    replicas: 3
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 512Mi
    service:
      spec:
        externalTrafficPolicy: Local
    logs:
      general:
        level: INFO
      access:
        enabled: true
    ports:
      web:
        redirectTo: websecure
      websecure:
        tls:
          enabled: true
    ingressRoute:
      dashboard:
        enabled: true
        matchRule: Host(`traefik.example.com`)
        entryPoints: ["websecure"]

NGINX Ingress Controller

Terminal window
# Disable Traefik
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server --disable traefik" sh -
# Install NGINX Ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.5/deploy/static/provider/cloud/deploy.yaml

Security

Pod Security Policies

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: restricted
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
seLinux:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
readOnlyRootFilesystem: false

Network Policies

# Default deny all ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Allow specific traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

Air-gapped Installation

Prepare Air-gap Environment

Terminal window
# Download K3s binary, images, and the install script (on a connected machine)
wget https://github.com/k3s-io/k3s/releases/download/v1.28.5+k3s1/k3s
wget https://github.com/k3s-io/k3s/releases/download/v1.28.5+k3s1/k3s-airgap-images-amd64.tar.gz
curl -sfL https://get.k3s.io -o k3s-install.sh
# Transfer files to the air-gapped system
scp k3s k3s-airgap-images-amd64.tar.gz k3s-install.sh user@airgap-server:/tmp/
# On the air-gapped server
sudo mkdir -p /var/lib/rancher/k3s/agent/images/
sudo cp /tmp/k3s-airgap-images-amd64.tar.gz /var/lib/rancher/k3s/agent/images/
sudo cp /tmp/k3s /usr/local/bin/k3s
sudo cp /tmp/k3s-install.sh /usr/local/bin/k3s-install.sh
sudo chmod +x /usr/local/bin/k3s /usr/local/bin/k3s-install.sh
# Install K3s
INSTALL_K3S_SKIP_DOWNLOAD=true \
INSTALL_K3S_EXEC="server --write-kubeconfig-mode=644" \
/usr/local/bin/k3s-install.sh
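Before transferring artifacts into the air gap, verify them — K3s releases publish a sha256 checksum file alongside each binary, and the standard sha256sum -c pattern applies. Demonstrated here on a stand-in file rather than the real binary:

```shell
# Checksum-verify a downloaded artifact. With a real release you would
# fetch the published sha256sum-amd64.txt file instead of generating one.
printf 'demo-binary-contents' > /tmp/k3s-demo-bin
sha256sum /tmp/k3s-demo-bin > /tmp/k3s-demo.sha256

sha256sum -c /tmp/k3s-demo.sha256   # prints /tmp/k3s-demo-bin: OK
```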

Private Registry Configuration

/etc/rancher/k3s/registries.yaml
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com"
  registry.example.com:
    endpoint:
      - "https://registry.example.com"
configs:
  "registry.example.com":
    auth:
      username: admin
      password: secretpassword
    tls:
      cert_file: /path/to/cert.pem
      key_file: /path/to/key.pem
      ca_file: /path/to/ca.pem
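If you template this file across many nodes, a small generator keeps the YAML consistent; a hypothetical sketch covering just the mirror stanza:

```shell
# Emit a minimal registries.yaml mirror stanza. Hypothetical helper;
# extend with a configs: section for credentials and TLS as needed.
make_registries_yaml() {
  mirror="$1"; upstream="$2"
  cat <<EOF
mirrors:
  ${upstream}:
    endpoint:
      - "https://${mirror}"
EOF
}

make_registries_yaml registry.example.com docker.io
```

Pipe the output to /etc/rancher/k3s/registries.yaml on each node, then restart k3s for the change to take effect.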

Edge and IoT Deployments

Raspberry Pi Setup

Terminal window
# Enable cgroups (required; on newer Raspberry Pi OS releases the file is /boot/firmware/cmdline.txt)
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt
sudo reboot
# Install K3s on Raspberry Pi
curl -sfL https://get.k3s.io | sh -
# Lightweight configuration
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server \
--disable traefik \
--disable servicelb \
--kubelet-arg=eviction-hard=memory.available<100Mi" sh -

ARM64 Installation

Terminal window
# K3s ARM64 installation
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="--node-name arm-node-1" sh -
# Multi-architecture cluster (mix x86 and ARM)
# No special configuration needed - K3s handles this automatically,
# provided your workload images are published for both architectures

Auto-deployment with Helm

/var/lib/rancher/k3s/server/manifests/myapp.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: myapp
  namespace: kube-system
spec:
  repo: https://charts.example.com
  chart: myapp
  targetNamespace: production
  valuesContent: |-
    replicas: 3
    image:
      repository: myapp
      tag: v1.0.0
    resources:
      requests:
        cpu: 100m
        memory: 128Mi

Backup and Restore

Etcd Snapshots

Terminal window
# Manual snapshot
sudo k3s etcd-snapshot save --name manual-backup
# List snapshots
sudo k3s etcd-snapshot ls
# Restore from snapshot
sudo systemctl stop k3s
sudo k3s server \
--cluster-reset \
--cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/manual-backup
sudo systemctl start k3s
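K3s prunes its scheduled snapshots per --etcd-snapshot-retention, but snapshots taken manually can accumulate. A sketch of keeping only the N newest files in a directory (illustrative only — verify against your retention needs before wiring it into cron):

```shell
# Delete all but the N newest files in a snapshot directory.
prune_snapshots() {
  dir="$1"; keep="$2"
  ls -1t "$dir" | tail -n +"$((keep + 1))" | while read -r f; do
    rm -- "$dir/$f"
  done
}

# Demo: create 5 files, then keep only 3
mkdir -p /tmp/snap-demo
for i in 1 2 3 4 5; do touch "/tmp/snap-demo/snap-$i"; done
prune_snapshots /tmp/snap-demo 3
ls -1 /tmp/snap-demo | wc -l   # prints 3
```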

S3 Backup Configuration

Terminal window
# Configure S3 backups
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server \
--etcd-snapshot-schedule-cron='0 */6 * * *' \
--etcd-snapshot-retention=10 \
--etcd-s3 \
--etcd-s3-bucket=k3s-backups \
--etcd-s3-region=us-east-1 \
--etcd-s3-access-key=<access-key> \
--etcd-s3-secret-key=<secret-key>" sh -
# Restore from S3
sudo k3s server \
--cluster-reset \
--cluster-reset-restore-path=s3://k3s-backups/snapshot-name \
--etcd-s3 \
--etcd-s3-bucket=k3s-backups \
--etcd-s3-region=us-east-1 \
--etcd-s3-access-key=<access-key> \
--etcd-s3-secret-key=<secret-key>

Monitoring

Prometheus + Grafana

Terminal window
# Install kube-prometheus-stack
kubectl create namespace monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--set prometheus.prometheusSpec.retention=7d \
--set prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=50Gi \
--set grafana.adminPassword=admin

Upgrades

Manual Upgrade

Terminal window
# Check current version
k3s --version
# Download new version
curl -sfL https://get.k3s.io | \
INSTALL_K3S_VERSION=v1.29.0+k3s1 sh -
# Upgrade with channel
curl -sfL https://get.k3s.io | \
INSTALL_K3S_CHANNEL=stable sh -
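Upgrade servers before agents, and keep agents at or below the server version. A version-ordering check using GNU sort -V, which understands K3s version strings like v1.29.0+k3s1:

```shell
# True when the first version is >= the second.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}

if version_ge v1.29.0+k3s1 v1.28.5+k3s1; then
  echo "server is new enough for this agent"
fi
```

Compare `k3s --version` output from each node with the target version before kicking off an upgrade.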

Automated Upgrades (System Upgrade Controller)

Terminal window
# Install system-upgrade-controller
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
# Create upgrade plan
cat <<EOF | kubectl apply -f -
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-server
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values:
          - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.29.0+k3s1
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-agent
  namespace: system-upgrade
spec:
  concurrency: 2
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.29.0+k3s1
EOF

Best Practices

Resource Optimization

# Optimized kubelet configuration
kubelet-arg:
  - "max-pods=110"
  - "eviction-hard=memory.available<100Mi,nodefs.available<10%"
  - "eviction-soft=memory.available<200Mi,nodefs.available<15%"
  - "eviction-soft-grace-period=memory.available=2m,nodefs.available=2m"
  - "kube-reserved=cpu=100m,memory=256Mi"
  - "system-reserved=cpu=100m,memory=256Mi"
  - "image-gc-high-threshold=85"
  - "image-gc-low-threshold=80"
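These reservations interact: node allocatable memory is roughly capacity minus kube-reserved, minus system-reserved, minus the hard eviction threshold. With the values above on a hypothetical 2GiB node:

```shell
# allocatable = capacity - kube-reserved - system-reserved - hard eviction
# threshold. All values in MiB, matching the configuration above.
allocatable_mib() {
  capacity="$1"; kube="$2"; system="$3"; eviction="$4"
  echo $(( capacity - kube - system - eviction ))
}

allocatable_mib 2048 256 256 100   # prints 1436
```

So pods on that node can be scheduled against about 1436Mi, not the full 2GiB — worth remembering when sizing edge hardware.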

Production Checklist

  1. High Availability: 3+ server nodes with embedded etcd
  2. Load Balancer: HAProxy or nginx in front of servers
  3. Backups: Automated etcd snapshots to S3
  4. Monitoring: Prometheus + Grafana stack
  5. Logging: EFK or Loki stack
  6. Security: Network policies, Pod Security Admission, RBAC
  7. Updates: System Upgrade Controller for automated updates
  8. Storage: Longhorn for distributed persistent storage
  9. Ingress: MetalLB + NGINX/Traefik with TLS
  10. Disaster Recovery: Tested restore procedures

Troubleshooting

Terminal window
# Check K3s status
sudo systemctl status k3s
sudo journalctl -u k3s -f
# Check logs
sudo k3s kubectl logs -n kube-system <pod-name>
# Node issues
kubectl describe node <node-name>
kubectl get events --sort-by='.lastTimestamp'
# Network issues
sudo k3s crictl ps
sudo k3s crictl logs <container-id>
# Reset K3s completely
/usr/local/bin/k3s-uninstall.sh # Server
/usr/local/bin/k3s-agent-uninstall.sh # Agent
# Check flannel
kubectl -n kube-system logs -l app=flannel
# Restart K3s
sudo systemctl restart k3s

K3s vs K8s Comparison

| Feature | K3s | Full Kubernetes |
| --- | --- | --- |
| Binary size | < 100MB | ~1GB across components |
| Memory usage | 512MB minimum | 2GB+ minimum |
| Installation | One-line script | Complex multi-step |
| Default storage | SQLite | etcd |
| Built-in ingress | Traefik | None |
| CNI | Flannel | None (must install) |
| Edge/IoT support | Excellent | Limited |
| ARM support | Native | Limited |
| Production ready | Yes | Yes |
| CNCF certified | Yes | Yes |

Conclusion

K3s is an excellent choice for edge computing, IoT, development environments, and even production workloads where simplicity and resource efficiency are priorities. Its single-binary architecture, low resource requirements, and full Kubernetes compatibility make it ideal for modern cloud-native deployments.


Master K3s and Kubernetes through our training programs. Contact us for edge computing and Kubernetes training.

