Cluster Load Balancer
This section describes how to install an external load balancer in front of a High Availability (HA) K3s cluster's server nodes. Three examples are provided: HAProxy, Nginx, and Kube-VIP.
External load balancers should not be confused with ServiceLB, an embedded controller that allows the use of Kubernetes LoadBalancer Services without deploying a third-party load balancer controller. For more details, see Service Load Balancer.
External load-balancers can be used to provide a fixed registration address for registering nodes, or for external access to the Kubernetes API Server. For exposing LoadBalancer Services, external load-balancers can be used alongside or instead of ServiceLB, but in most cases, replacement load-balancer controllers such as MetalLB or Kube-VIP are a better choice.
Prerequisites
All nodes in this example are running Ubuntu 20.04.
For all examples, assume that an HA K3s cluster with embedded etcd has been installed on 3 nodes.
Each k3s server is configured with:
# /etc/rancher/k3s/config.yaml
token: lb-cluster-gd
tls-san: 10.10.10.100
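For reference, the server nodes in such a cluster might have been brought up with configuration along these lines. This is only a sketch: the `cluster-init` and `server` entries are assumptions about how the embedded etcd cluster was bootstrapped, while `token` and `tls-san` match the config above.

```yaml
# Sketch: /etc/rancher/k3s/config.yaml on server-1 (initializes embedded etcd)
token: lb-cluster-gd
tls-san: 10.10.10.100
cluster-init: true
---
# Sketch: /etc/rancher/k3s/config.yaml on server-2 and server-3 (join via server-1)
token: lb-cluster-gd
tls-san: 10.10.10.100
server: https://10.10.10.50:6443
```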
The nodes have hostnames and IPs of:
- server-1: 10.10.10.50
- server-2: 10.10.10.51
- server-3: 10.10.10.52
Two additional nodes for load balancing are configured with hostnames and IPs of:
- lb-1: 10.10.10.98
- lb-2: 10.10.10.99
Three additional nodes exist with hostnames and IPs of:
- agent-1: 10.10.10.101
- agent-2: 10.10.10.102
- agent-3: 10.10.10.103
Setup Load Balancer
- HAProxy
- Nginx
- Kube-VIP
HAProxy is an open source option that provides a TCP load balancer. It also supports HA for the load balancer itself, ensuring redundancy at all levels. See HAProxy Documentation for more info.
Additionally, we will use KeepAlived to generate a virtual IP (VIP) that will be used to access the cluster. See KeepAlived Documentation for more info.
- Install HAProxy and KeepAlived:
sudo apt-get install haproxy keepalived
- Add the following to /etc/haproxy/haproxy.cfg on lb-1 and lb-2:
frontend k3s-frontend
    bind *:6443
    mode tcp
    option tcplog
    default_backend k3s-backend

backend k3s-backend
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s
    server server-1 10.10.10.50:6443 check
    server server-2 10.10.10.51:6443 check
    server server-3 10.10.10.52:6443 check
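If control-plane nodes are later added or removed, the backend's server lines must be kept in sync. A trivial shell sketch for generating them, using the IP list from the Prerequisites (the output file name is arbitrary):

```shell
# Generate HAProxy backend "server" lines from the control-plane IP list
# and write them to a snippet file ready to paste into haproxy.cfg.
i=1
for ip in 10.10.10.50 10.10.10.51 10.10.10.52; do
  echo "    server server-$i $ip:6443 check"
  i=$((i+1))
done > backend-servers.cfg
```

After editing haproxy.cfg, `haproxy -c -f /etc/haproxy/haproxy.cfg` can be used to validate the result before restarting.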
- Add the following to /etc/keepalived/keepalived.conf on lb-1 and lb-2:
global_defs {
  enable_script_security
  script_user root
}

vrrp_script chk_haproxy {
  script 'killall -0 haproxy' # faster than pidof
  interval 2
}

vrrp_instance haproxy-vip {
  interface eth1
  state <STATE> # MASTER on lb-1, BACKUP on lb-2
  priority <PRIORITY> # 200 on lb-1, 100 on lb-2
  virtual_router_id 51
  virtual_ipaddress {
    10.10.10.100/24
  }
  track_script {
    chk_haproxy
  }
}
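Filling in the <STATE> and <PRIORITY> placeholders per host can be scripted; a minimal sketch, assuming the lb-1/lb-2 hostnames from this example. On a real load balancer this would target /etc/keepalived/keepalived.conf; a two-line template stands in for the full config here.

```shell
# Pick the keepalived role for this host (lb-1 is MASTER, anything else BACKUP).
host="lb-1"   # in practice: host="$(hostname -s)"
case "$host" in
  lb-1) state=MASTER priority=200 ;;
  *)    state=BACKUP priority=100 ;;
esac
# Substitute the placeholders in a stand-in template.
printf 'state <STATE>\npriority <PRIORITY>\n' > keepalived.conf.tmpl
sed -e "s/<STATE>/$state/" -e "s/<PRIORITY>/$priority/" keepalived.conf.tmpl > keepalived.conf
```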
- Restart HAProxy and KeepAlived on lb-1 and lb-2:
systemctl restart haproxy
systemctl restart keepalived
- On agent-1, agent-2, and agent-3, run the following command to install k3s and join the cluster:
curl -sfL https://get.k3s.io | K3S_TOKEN=lb-cluster-gd sh -s - agent --server https://10.10.10.100:6443
You can now use kubectl from a server node to interact with the cluster.
root@server-1 $ k3s kubectl get nodes -A
NAME STATUS ROLES AGE VERSION
agent-1 Ready <none> 32s v1.27.3+k3s1
agent-2 Ready <none> 20s v1.27.3+k3s1
agent-3 Ready <none> 9s v1.27.3+k3s1
server-1 Ready control-plane,etcd,master 4m22s v1.27.3+k3s1
server-2 Ready control-plane,etcd,master 3m58s v1.27.3+k3s1
server-3 Ready control-plane,etcd,master 3m12s v1.27.3+k3s1
Nginx Load Balancer
Nginx does not natively support a High Availability (HA) configuration. If setting up an HA cluster, having a single load balancer in front of K3s will reintroduce a single point of failure.
Nginx Open Source provides a TCP load balancer via its stream module. See TCP and UDP Load Balancing in the Nginx documentation for more info.
- Create an nginx.conf file on lb-1 with the following contents:
events {}

stream {
  upstream k3s_servers {
    server 10.10.10.50:6443;
    server 10.10.10.51:6443;
    server 10.10.10.52:6443;
  }

  server {
    listen 6443;
    proxy_pass k3s_servers;
  }
}
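The stream module also supports passive health checks through per-server parameters, so failed servers are temporarily taken out of rotation. A hedged variant of the upstream block (the max_fails and fail_timeout values are illustrative, not tuned recommendations):

```nginx
upstream k3s_servers {
  server 10.10.10.50:6443 max_fails=3 fail_timeout=5s;
  server 10.10.10.51:6443 max_fails=3 fail_timeout=5s;
  server 10.10.10.52:6443 max_fails=3 fail_timeout=5s;
}
```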
- Run the Nginx load balancer on lb-1:
Using Docker:
docker run -d --restart unless-stopped \
  -v ${PWD}/nginx.conf:/etc/nginx/nginx.conf \
  -p 6443:6443 \
  nginx:stable
Or install nginx and then run:
cp nginx.conf /etc/nginx/nginx.conf
systemctl start nginx
- On agent-1, agent-2, and agent-3, run the following command to install k3s and join the cluster:
curl -sfL https://get.k3s.io | K3S_TOKEN=lb-cluster-gd sh -s - agent --server https://10.10.10.98:6443
You can now use kubectl from a server node to interact with the cluster.
root@server-1 $ k3s kubectl get nodes -A
NAME STATUS ROLES AGE VERSION
agent-1 Ready <none> 30s v1.27.3+k3s1
agent-2 Ready <none> 22s v1.27.3+k3s1
agent-3 Ready <none> 13s v1.27.3+k3s1
server-1 Ready control-plane,etcd,master 4m49s v1.27.3+k3s1
server-2 Ready control-plane,etcd,master 3m58s v1.27.3+k3s1
server-3 Ready control-plane,etcd,master 3m16s v1.27.3+k3s1
Kube-VIP
Kube-VIP provides a virtual IP (VIP) and load balancer for the Kubernetes control plane and for Services of type LoadBalancer. This example configures kube-vip in ARP (layer-2) mode: the manifest below deploys kube-vip as a DaemonSet on the control-plane nodes and announces the VIP on the node network. Adjust the interface and subnet to match your environment.
- Install the RBAC manifest, either by placing it in the K3s auto-deploy manifests directory on a server node:
curl -fsSL https://kube-vip.io/manifests/rbac.yaml -o /var/lib/rancher/k3s/server/manifests/kube-vip-rbac.yaml
or
kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
- Deploy the kube-vip daemonset:
- Update these values before applying:
- vip_interface: the network interface name on each control-plane host (e.g. ens160, eth0).
- address: the VIP (example: 10.10.10.100).
- node affinity: ensure it matches your control-plane node labels (node-role.kubernetes.io/control-plane vs master).
- The full list of environment variables is available in the kube-vip documentation.
Apply the following manifest using the kubectl apply -f command.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/name: kube-vip-ds
    app.kubernetes.io/version: v1.0.4
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-vip-ds
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kube-vip-ds
        app.kubernetes.io/version: v1.0.4
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              - matchExpressions:
                  - key: node-role.kubernetes.io/control-plane
                    operator: Exists
      containers:
        - args:
            - manager
          env:
            - name: vip_arp
              value: "true"
            - name: port
              value: "6443"
            - name: vip_nodename
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: vip_interface
              value: ens160 # <- CHANGE to your host interface or omit
            - name: vip_subnet
              value: "32"
            - name: cp_enable
              value: "true"
            - name: cp_namespace
              value: kube-system
            - name: vip_ddns
              value: "false"
            - name: vip_leaderelection
              value: "true"
            - name: vip_leaseduration
              value: "5"
            - name: vip_renewdeadline
              value: "3"
            - name: vip_retryperiod
              value: "1"
            - name: address
              value: 10.10.10.100 # <- CHANGE to your VIP
          image: ghcr.io/kube-vip/kube-vip:v1.0.4
          imagePullPolicy: Always
          name: kube-vip
          resources: {}
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - NET_RAW
                - SYS_TIME
      hostNetwork: true
      serviceAccountName: kube-vip
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - effect: NoExecute
          operator: Exists
  updateStrategy: {}
- Verify kube-vip and the VIP announcement:
# check pods
kubectl -n kube-system get pods -l app.kubernetes.io/name=kube-vip-ds
# on a control-plane host, confirm the VIP is in the ARP/neighbor table
ip neigh show | grep 10.10.10.100
- TLS certificate note
If K3s was installed before the VIP was added to the API server certificate SANs, kubelets and API clients will not trust the server certificate for the VIP. To include the VIP in server certificates:
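Before rotating, confirm the VIP appears under tls-san in each server's configuration; this matches the config shown in the Prerequisites:

```yaml
# /etc/rancher/k3s/config.yaml on each server node
token: lb-cluster-gd
tls-san: 10.10.10.100
```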
# Stop K3s service
systemctl stop k3s
# Rotate server certificates to include the configured tls-san/VIP
k3s certificate rotate
# Start K3s service
systemctl start k3s
- After rotation, verify API access using the VIP: kubectl --server=https://10.10.10.100:6443 get nodes