Vladimir Chavkov

OKD: Community Kubernetes Platform Complete Guide

OKD (the Origin Community Distribution of Kubernetes) is the community-supported Kubernetes platform that serves as the upstream project for Red Hat OpenShift. It delivers the same enterprise features as OpenShift, including integrated CI/CD, developer workflows, Operators, and hardened security, without licensing costs. Built on Fedora CoreOS, OKD offers a production-ready platform for organizations that want enterprise Kubernetes capabilities with open-source flexibility.

What is OKD?

OKD is the upstream, community-driven Kubernetes distribution behind Red Hat OpenShift.

Key Features

  1. 100% Open Source: No licensing costs, full source access
  2. Enterprise Features: Same capabilities as OpenShift
  3. Integrated Developer Tools: S2I, BuildConfigs, DeploymentConfigs
  4. Operator Framework: Automated application lifecycle
  5. Built-in CI/CD: OpenShift Pipelines (Tekton)
  6. Service Mesh: Integrated Istio
  7. Monitoring Stack: Prometheus + Grafana
  8. Web Console: Full-featured dashboard
  9. Security: SELinux, SCCs, RBAC
  10. Fedora CoreOS: Immutable, auto-updating OS

OKD vs OpenShift vs Kubernetes

| Feature | OKD | OpenShift | Kubernetes |
| --- | --- | --- | --- |
| Cost | Free | Commercial | Free |
| Support | Community | Red Hat | Community |
| Release Cycle | Faster | Stable/LTS | Regular |
| Base OS | Fedora CoreOS | RHEL CoreOS | Various |
| Updates | Rolling | Controlled | Manual |
| Lifecycle | Community-driven | Enterprise | Community |
| Features | Full | Full | Core only |
| Stability | Beta/Stable | Production | Production |
| Best For | Dev/Test, Cost-sensitive | Enterprise | DIY |

Architecture

                     OKD Control Plane
┌──────────────────────────────────────────────────────────┐
│ Kubernetes Core Components                               │
│   API Server · Scheduler · Controller Manager            │
│   etcd Cluster                                           │
├──────────────────────────────────────────────────────────┤
│ OpenShift/OKD Platform Services                          │
│   OAuth Server · Image Registry · Router (HAProxy)       │
│   Web Console · Monitoring (Prom/Graf) · Logging (EFK)   │
│   Pipelines (Tekton) · Operator Framework · Service Mesh │
└──────────────────────────────────────────────────────────┘
                             │
            ┌────────────────┴────────┐
            ▼                         ▼
┌───────────────────────┐ ┌───────────────────────┐
│ Worker Nodes          │ │ Infrastructure Nodes  │
│ (Fedora CoreOS)       │ │ (Fedora CoreOS)       │
│                       │ │                       │
│ • CRI-O Runtime       │ │ • Image Registry      │
│ • kubelet             │ │ • Router/Ingress      │
│ • Application Pods    │ │ • Monitoring Stack    │
│ • Storage CSI         │ │ • Logging Stack       │
└───────────────────────┘ └───────────────────────┘

Installation

Prerequisites

Minimum Requirements:

  - Bootstrap (temporary): 4 vCPU, 16 GB RAM, 100 GB storage
  - Control plane: 3 nodes, each 4 vCPU, 16 GB RAM, 100 GB storage
  - Workers: 2+ nodes, each 2 vCPU, 8 GB RAM, 100 GB storage

Recommended Production:

  - Control plane: 3 nodes with 8 vCPU, 32 GB RAM, and SSD/NVMe-backed storage for etcd
  - Workers: 3+ nodes sized to the workload, plus dedicated infrastructure nodes
  - External load balancer and redundant DNS for api, api-int, and *.apps

DNS Configuration

# Required DNS records for cluster "okd" in domain "example.com"
# API and API-INT (both point to load balancer)
api.okd.example.com. A 192.168.1.100
api-int.okd.example.com. A 192.168.1.100
# Wildcard for applications (points to load balancer)
*.apps.okd.example.com. A 192.168.1.101
# etcd nodes (control planes)
etcd-0.okd.example.com. A 192.168.1.11
etcd-1.okd.example.com. A 192.168.1.12
etcd-2.okd.example.com. A 192.168.1.13
# SRV records for etcd
_etcd-server-ssl._tcp.okd.example.com. 86400 IN SRV 0 10 2380 etcd-0.okd.example.com.
_etcd-server-ssl._tcp.okd.example.com. 86400 IN SRV 0 10 2380 etcd-1.okd.example.com.
_etcd-server-ssl._tcp.okd.example.com. 86400 IN SRV 0 10 2380 etcd-2.okd.example.com.
# Control plane nodes
master-0.okd.example.com. A 192.168.1.11
master-1.okd.example.com. A 192.168.1.12
master-2.okd.example.com. A 192.168.1.13
# Worker nodes
worker-0.okd.example.com. A 192.168.1.21
worker-1.okd.example.com. A 192.168.1.22
# Bootstrap (temporary)
bootstrap.okd.example.com. A 192.168.1.10
# Verify DNS (use a concrete hostname for the wildcard, or the
# unquoted * may be expanded by the shell)
dig +short api.okd.example.com
dig +short test.apps.okd.example.com
dig +short _etcd-server-ssl._tcp.okd.example.com SRV
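Rather than checking records one by one, the whole set can be verified in a loop. A small sketch, where `okd_dns_names` is a hypothetical helper matching the node counts above (3 masters, 2 workers); the check itself assumes `dig` is installed and a resolver that serves the zone:

```shell
# Hypothetical helper: list the DNS names that must resolve for a
# cluster "$1" under base domain "$2" (topology from the records above)
okd_dns_names() {
  local cluster=$1 domain=$2 i
  echo "api.${cluster}.${domain}"
  echo "api-int.${cluster}.${domain}"
  echo "test.apps.${cluster}.${domain}"   # any name under the wildcard
  for i in 0 1 2; do echo "master-${i}.${cluster}.${domain}"; done
  for i in 0 1; do echo "worker-${i}.${cluster}.${domain}"; done
}

# Check each name; print the ones that do not resolve
for name in $(okd_dns_names okd example.com); do
  dig +short "$name" | grep -q . || echo "MISSING: $name"
done
```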

Load Balancer Configuration

# HAProxy configuration for OKD
global
    log /dev/log local0
    chroot /var/lib/haproxy
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    maxconn 4096

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

# API Server (6443)
frontend api_frontend
    bind *:6443
    mode tcp
    default_backend api_backend

backend api_backend
    mode tcp
    balance roundrobin
    server bootstrap 192.168.1.10:6443 check
    server master-0 192.168.1.11:6443 check
    server master-1 192.168.1.12:6443 check
    server master-2 192.168.1.13:6443 check

# Machine Config Server (22623)
frontend machine_config_frontend
    bind *:22623
    mode tcp
    default_backend machine_config_backend

backend machine_config_backend
    mode tcp
    balance roundrobin
    server bootstrap 192.168.1.10:22623 check
    server master-0 192.168.1.11:22623 check
    server master-1 192.168.1.12:22623 check
    server master-2 192.168.1.13:22623 check

# HTTP Ingress (80)
frontend http_frontend
    bind *:80
    mode tcp
    default_backend http_backend

backend http_backend
    mode tcp
    balance roundrobin
    server worker-0 192.168.1.21:80 check
    server worker-1 192.168.1.22:80 check

# HTTPS Ingress (443)
frontend https_frontend
    bind *:443
    mode tcp
    default_backend https_backend

backend https_backend
    mode tcp
    balance roundrobin
    server worker-0 192.168.1.21:443 check
    server worker-1 192.168.1.22:443 check
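Since every frontend above runs in tcp mode, HAProxy's built-in stats page is the quickest way to see backend health during bootstrap. A hedged addition to the same file; the bind port and credentials are example placeholders, not part of the original configuration:

```
# Stats page (example values - adjust bind address and credentials)
listen stats
    bind *:9000
    mode http
    stats enable
    stats uri /stats
    stats auth admin:changeme
```

Validate the combined file with `haproxy -c -f /etc/haproxy/haproxy.cfg` before reloading the service.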

Installation Process

# Download OKD installer and CLI
VERSION=4.15.0-0.okd-2024-02-23-163410
wget https://github.com/okd-project/okd/releases/download/${VERSION}/openshift-install-linux-${VERSION}.tar.gz
wget https://github.com/okd-project/okd/releases/download/${VERSION}/openshift-client-linux-${VERSION}.tar.gz
# Extract
tar xvf openshift-install-linux-${VERSION}.tar.gz
tar xvf openshift-client-linux-${VERSION}.tar.gz
sudo mv openshift-install oc kubectl /usr/local/bin/
# Create install directory
mkdir ~/okd-install
cd ~/okd-install
# Generate SSH key for core user access
ssh-keygen -t ed25519 -N '' -f ~/.ssh/okd-key
# Get pull secret from https://console.redhat.com/openshift/downloads
# For OKD, create minimal pull secret:
cat > pull-secret.json << 'EOF'
{"auths":{"fake":{"auth":"aGVsbG86d29ybGQ="}}}
EOF
# Create install-config.yaml
cat > install-config.yaml << 'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: okd
compute:
- name: worker
  replicas: 2
  platform: {}
controlPlane:
  name: master
  replicas: 3
  platform: {}
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 192.168.1.0/24
platform:
  none: {}
pullSecret: '{"auths":{"fake":{"auth":"aGVsbG86d29ybGQ="}}}'
sshKey: |
  ssh-ed25519 AAAAC3...your-key-here...
fips: false
# Optional: Custom certificates
# additionalTrustBundle: |
#   -----BEGIN CERTIFICATE-----
#   ...
#   -----END CERTIFICATE-----
EOF
# Backup config (installer consumes it)
cp install-config.yaml install-config.yaml.backup
# Generate ignition configs
openshift-install create ignition-configs --dir ~/okd-install
# Files generated:
# - bootstrap.ign
# - master.ign
# - worker.ign
# - metadata.json
# - auth/kubeconfig
# - auth/kubeadmin-password
# Host ignition files on web server
sudo mkdir -p /var/www/html/okd
sudo cp ~/okd-install/*.ign /var/www/html/okd/
sudo chmod 644 /var/www/html/okd/*.ign
# Start web server
sudo python3 -m http.server 8080 --directory /var/www/html
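The "fake" pull secret used above works because OKD does not require Red Hat registry credentials; the auth value is just base64-encoded dummy credentials, which you can confirm yourself:

```shell
# Decode the dummy auth string from the minimal pull secret
echo 'aGVsbG86d29ybGQ=' | base64 -d && echo
```

A real pull secret from console.redhat.com is only needed if you want access to Red Hat registries.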

Boot Nodes

# For each node, boot from Fedora CoreOS live ISO with kernel parameters:
# Bootstrap node:
coreos.inst.install_dev=sda \
coreos.inst.image_url=https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/latest/x86_64/fedora-coreos-metal.x86_64.raw.xz \
coreos.inst.ignition_url=http://192.168.1.1:8080/okd/bootstrap.ign \
ip=192.168.1.10::192.168.1.1:255.255.255.0:bootstrap.okd.example.com:ens192:none \
nameserver=192.168.1.1
# Control plane nodes (repeat for master-0, master-1, master-2):
coreos.inst.install_dev=sda \
coreos.inst.image_url=https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/latest/x86_64/fedora-coreos-metal.x86_64.raw.xz \
coreos.inst.ignition_url=http://192.168.1.1:8080/okd/master.ign \
ip=192.168.1.11::192.168.1.1:255.255.255.0:master-0.okd.example.com:ens192:none \
nameserver=192.168.1.1
# Worker nodes:
coreos.inst.install_dev=sda \
coreos.inst.image_url=https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/latest/x86_64/fedora-coreos-metal.x86_64.raw.xz \
coreos.inst.ignition_url=http://192.168.1.1:8080/okd/worker.ign \
ip=192.168.1.21::192.168.1.1:255.255.255.0:worker-0.okd.example.com:ens192:none \
nameserver=192.168.1.1
# Alternative: Use PXE boot
# See: https://docs.okd.io/latest/installing/installing_bare_metal/installing-bare-metal.html
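A malformed or unreachable Ignition file is a common reason for nodes to hang at first boot, so it is worth validating what the web server will hand out. A sketch; the `check_ign` helper is hypothetical and assumes `python3` is available:

```shell
# Hypothetical helper: confirm an Ignition file is parseable JSON
check_ign() {
  python3 -m json.tool "$1" > /dev/null 2>&1 \
    && echo "$1: OK" || echo "$1: INVALID"
}

# Check the files the nodes will fetch, e.g.:
#   for f in /var/www/html/okd/*.ign; do check_ign "$f"; done
```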

Monitor Installation

# Wait for bootstrap to complete (30-45 minutes)
openshift-install wait-for bootstrap-complete \
--dir ~/okd-install \
--log-level=info
# Output: "INFO It is now safe to remove the bootstrap resources"
# Remove bootstrap from load balancer
# Shutdown bootstrap node
# Set kubeconfig
export KUBECONFIG=~/okd-install/auth/kubeconfig
# Approve worker CSRs
watch -n5 oc get csr
oc get csr -o name | xargs oc adm certificate approve
# Wait for installation complete (10-15 minutes)
openshift-install wait-for install-complete \
--dir ~/okd-install \
--log-level=info
# Output shows console URL and kubeadmin credentials
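During worker scale-up, CSRs arrive in batches, and the xargs one-liner above also re-touches already-approved requests. A small filter sketch; `pending_csrs` is a hypothetical helper that only does text processing on `oc get csr` output:

```shell
# Hypothetical helper: print names of CSRs whose CONDITION column
# (the last field of `oc get csr` output) is exactly "Pending"
pending_csrs() {
  awk 'NR > 1 && $NF == "Pending" { print $1 }'
}

# Usage against a live cluster:
#   oc get csr | pending_csrs | xargs -r oc adm certificate approve
```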

First Login

# Login via CLI
oc login https://api.okd.example.com:6443 -u kubeadmin -p <password-from-install>
# Get console URL
oc whoami --show-console
# Access web console
# https://console-openshift-console.apps.okd.example.com
# Username: kubeadmin
# Password: <from ~/okd-install/auth/kubeadmin-password>

Post-Installation Configuration

Configure OAuth (Identity Provider)

# HTPasswd provider
apiVersion: v1
kind: Secret
metadata:
  name: htpass-secret
  namespace: openshift-config
type: Opaque
data:
  htpasswd: <base64-encoded-htpasswd-file>
---
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
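The data.htpasswd value in the Secret above is simply the base64 encoding of the htpasswd file; `oc create secret --from-file` does this for you, but you can also produce it directly. A sketch (`-w0` disables line wrapping and is GNU coreutils specific; on macOS use `base64 -i`):

```shell
# Encode an htpasswd file for the Secret's data.htpasswd field
if [ -f users.htpasswd ]; then
  base64 -w0 users.htpasswd
  echo
fi
```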
# Create htpasswd file
htpasswd -c -B -b users.htpasswd admin admin123
htpasswd -b users.htpasswd developer dev123
# Create secret
oc create secret generic htpass-secret \
--from-file=htpasswd=users.htpasswd \
-n openshift-config
# Apply OAuth config
oc apply -f oauth.yaml
# Grant cluster-admin to user
oc adm policy add-cluster-role-to-user cluster-admin admin
# Remove kubeadmin
oc delete secrets kubeadmin -n kube-system

Configure Image Registry

# For production with persistent storage
oc patch configs.imageregistry.operator.openshift.io cluster \
--type merge \
--patch '{"spec":{"managementState":"Managed","storage":{"pvc":{"claim":""}}}}'
# For testing (emptyDir - not persistent)
oc patch configs.imageregistry.operator.openshift.io cluster \
--type merge \
--patch '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'
# Verify
oc get clusteroperator image-registry

Setup Persistent Storage

# Example: NFS StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com
  share: /export/okd-storage
reclaimPolicy: Delete
volumeBindingMode: Immediate
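To tie this back to the registry configuration above: when the registry operator's `storage.pvc.claim` field is left empty, it binds to a PVC named `image-registry-storage` in the `openshift-image-registry` namespace. A sketch of such a claim using the example class; the size is an assumption to adjust:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteMany   # required if the registry runs more than one replica
  resources:
    requests:
      storage: 100Gi
  storageClassName: nfs-storage
```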

Developer Workflows

Source-to-Image (S2I)

# Create new app from Git
oc new-project my-app
oc new-app nodejs:18~https://github.com/sclorg/nodejs-ex
# Expose route
oc expose svc/nodejs-ex
# Get route
oc get route nodejs-ex
# Trigger rebuild
oc start-build nodejs-ex
# Follow build logs
oc logs -f bc/nodejs-ex

OpenShift Pipelines (Tekton)

# Install Pipelines Operator
cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: latest
  name: openshift-pipelines-operator-rh
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
# Create pipeline
# Quote the heredoc delimiter so $(params.*) is not expanded by the shell
cat <<'EOF' | oc apply -f -
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
  - name: git-url
    type: string
  - name: git-revision
    type: string
    default: main
  workspaces:
  - name: shared-workspace
  tasks:
  - name: fetch-repository
    taskRef:
      name: git-clone
      kind: ClusterTask
    workspaces:
    - name: output
      workspace: shared-workspace
    params:
    - name: url
      value: $(params.git-url)
    - name: revision
      value: $(params.git-revision)
  - name: build-image
    taskRef:
      name: buildah
      kind: ClusterTask
    runAfter:
    - fetch-repository
    workspaces:
    - name: source
      workspace: shared-workspace
    params:
    - name: IMAGE
      value: image-registry.openshift-image-registry.svc:5000/my-app/app:latest
EOF
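The pipeline does nothing until triggered by a PipelineRun. A minimal sketch; the parameter value reuses the S2I sample repository above, and the workspace sizing is an assumption:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-and-deploy-
spec:
  pipelineRef:
    name: build-and-deploy
  params:
    - name: git-url
      value: https://github.com/sclorg/nodejs-ex
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:
        spec:
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 1Gi
```

Because it uses `generateName`, submit it with `oc create -f -` rather than `oc apply`.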

Operators

Install Operator via OperatorHub

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: mongodb-enterprise
  namespace: openshift-operators
spec:
  channel: stable
  name: mongodb-enterprise
  source: community-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic

Custom Operator

# Install Operator SDK
export ARCH=$(case $(arch) in x86_64) echo -n amd64 ;; aarch64) echo -n arm64 ;; *) echo -n $(arch) ;; esac)
export OS=$(uname | awk '{print tolower($0)}')
export OPERATOR_SDK_DL_URL=https://github.com/operator-framework/operator-sdk/releases/download/v1.33.0
curl -LO ${OPERATOR_SDK_DL_URL}/operator-sdk_${OS}_${ARCH}
chmod +x operator-sdk_${OS}_${ARCH}
sudo mv operator-sdk_${OS}_${ARCH} /usr/local/bin/operator-sdk
# Create new operator
mkdir my-operator && cd my-operator
operator-sdk init --domain=example.com --repo=github.com/example/my-operator
operator-sdk create api --group=app --version=v1 --kind=MyApp --resource --controller

Monitoring and Observability

Access Prometheus

# Port-forward to Prometheus
oc port-forward -n openshift-monitoring \
svc/prometheus-k8s 9090:9090
# Access at http://localhost:9090

Custom Metrics

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-metrics
  namespace: my-app
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
  - port: metrics
    interval: 30s
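Note that ServiceMonitors in application namespaces only take effect after monitoring for user-defined projects is enabled (`enableUserWorkload: true` in the `cluster-monitoring-config` ConfigMap). Once metrics are scraped, alerts can be defined on them; a hedged PrometheusRule sketch for the same app, where the expression and threshold are illustrative assumptions:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: myapp-alerts
  namespace: my-app
spec:
  groups:
    - name: myapp
      rules:
        - alert: MyAppDown
          expr: absent(up{job="myapp"} == 1)
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: myapp target is down or missing
```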

Upgrades

Check Available Upgrades

# Check current version
oc get clusterversion
# View available upgrades
oc adm upgrade
# Upgrade to specific version
oc adm upgrade --to=4.15.0-0.okd-2024-03-10-010116
# Monitor upgrade
watch oc get clusterversion
oc get clusteroperators

Best Practices

Production Deployment

  1. Infrastructure Nodes: Dedicated nodes for router, registry, monitoring
  2. Storage: Persistent storage for registry and monitoring
  3. Monitoring: Configure alerting and log aggregation
  4. Backup: Regular etcd backups
  5. Updates: Test upgrades in non-production first
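For item 4, OKD ships a backup script on control-plane nodes. A sketch of taking a backup and pruning old copies; `prune_backups` is a hypothetical helper that assumes timestamped, lexicographically sortable directory names:

```shell
# Take an etcd/static-pod backup on a control-plane node
# (run against a live cluster; cluster-backup.sh ships on the node):
#   oc debug node/master-0.okd.example.com -- \
#     chroot /host /usr/local/bin/cluster-backup.sh /home/core/backup

# Hypothetical helper: print backup directory names beyond the newest N
prune_backups() {
  local dir=$1 keep=$2
  ls -1 "$dir" | sort -r | tail -n +"$((keep + 1))"
}

# Example: cd /home/core/backup && prune_backups . 5 | xargs -r rm -rf --
```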

Security

  1. Remove kubeadmin: After creating admin users
  2. Network Policies: Restrict pod-to-pod communication
  3. SCCs: Use restrictive SCCs by default
  4. Secrets: External secrets management (Vault)
  5. RBAC: Least privilege access

Performance

  1. Worker Sizing: Right-size based on workload
  2. Storage: Fast storage for etcd
  3. Network: 10 Gbps recommended
  4. Monitoring: Resource limits on monitoring stack

Troubleshooting

# Check cluster operators
oc get co
# Check node status
oc get nodes
oc describe node <node-name>
# Check pod logs
oc logs -n <namespace> <pod-name>
# Debug node
oc debug node/<node-name>
# Collect must-gather
oc adm must-gather
# Check events
oc get events --sort-by='.lastTimestamp'

Conclusion

OKD provides enterprise Kubernetes capabilities without licensing costs, making it ideal for development, testing, and cost-sensitive production deployments. While lacking Red Hat’s commercial support, OKD’s active community and rapid innovation make it a compelling choice for teams seeking OpenShift features with open-source flexibility.


Master OKD and Kubernetes with our training programs. Contact us for customized cloud-native training.

