Distributed hybrid or multicloud cluster
A K3s cluster can still be deployed on nodes that do not share a common private network and are not directly connected (e.g. nodes in different public clouds). There are two options to achieve this: the embedded k3s multicloud solution and the integration with the Tailscale VPN provider.
The latency between nodes will increase as external connectivity requires more hops. This will reduce network performance and could also impact the health of the cluster if latency is too high.

Embedded etcd is not supported in this type of deployment. If using embedded etcd, all server nodes must be able to reach each other via their private IPs. Agents may be distributed over multiple networks, but all servers should be in the same location.
Embedded k3s multicloud solution
K3s uses WireGuard to establish a VPN mesh for cluster traffic. Nodes must each have a unique IP through which they can be reached (usually a public IP). K3s supervisor traffic will use a websocket tunnel, and cluster (CNI) traffic will use a WireGuard tunnel.
To enable this type of deployment, you must add the following parameters on servers:

```
--node-external-ip=<SERVER_EXTERNAL_IP> --flannel-backend=wireguard-native --flannel-external-ip
```

and on agents:

```
--node-external-ip=<AGENT_EXTERNAL_IP>
```
where `SERVER_EXTERNAL_IP` is the IP through which we can reach the server node and `AGENT_EXTERNAL_IP` is the IP through which we can reach the agent node. Note that the `K3S_URL` config parameter on the agent should use the `SERVER_EXTERNAL_IP` to be able to connect to it. Remember to check the Networking Requirements and allow access to the listed ports on both internal and external addresses.

Both `SERVER_EXTERNAL_IP` and `AGENT_EXTERNAL_IP` must have connectivity between them and are normally public IPs.
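As a concrete sketch, a minimal two-node setup using the get.k3s.io installer might look like this (the public IPs 203.0.113.10 and 198.51.100.20 are placeholders for your nodes' addresses):

```bash
# Server (placeholder public IP 203.0.113.10):
curl -sfL https://get.k3s.io | sh -s - server \
    --node-external-ip=203.0.113.10 \
    --flannel-backend=wireguard-native \
    --flannel-external-ip

# Agent (placeholder public IP 198.51.100.20); note that K3S_URL points at
# the server's external IP:
curl -sfL https://get.k3s.io | K3S_URL=https://203.0.113.10:6443 \
    K3S_TOKEN=<token> sh -s - agent \
    --node-external-ip=198.51.100.20
```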
If nodes are assigned dynamic IPs and the IP changes (e.g. in AWS), you must modify the `--node-external-ip` parameter to reflect the new IP. If running K3s as a service, you must modify `/etc/systemd/system/k3s.service` and then run:

```
systemctl daemon-reload
systemctl restart k3s
```
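As a hypothetical convenience, this update can be scripted. The sketch below assumes an IPv4 address, assumes the flag already appears in the unit file, and uses AWS's public checkip service to discover the new IP:

```bash
#!/bin/sh
# Refresh --node-external-ip in the k3s unit after the node's IP changes.
NEW_IP=$(curl -fsS https://checkip.amazonaws.com)
sed -i "s/--node-external-ip=[0-9.]*/--node-external-ip=${NEW_IP}/" \
    /etc/systemd/system/k3s.service
systemctl daemon-reload
systemctl restart k3s
```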
Integration with the Tailscale VPN provider (experimental)
Available in v1.27.3, v1.26.6, v1.25.11 and newer.
K3s can integrate with Tailscale so that nodes use the Tailscale VPN service to build a mesh between nodes.
There are four steps to be done with Tailscale before deploying K3s:

1. Log in to your Tailscale account.

2. In `Settings > Keys`, generate an auth key (`$AUTH-KEY`), which may be reusable for all nodes in your cluster.

3. Decide on the podCIDR the cluster will use (by default `10.42.0.0/16`). Append the CIDR (or CIDRs for dual-stack) in Access Controls with the stanza:

   ```
   "autoApprovers": {
     "routes": {
       "10.42.0.0/16":      ["your_account@xyz.com"],
       "2001:cafe:42::/56": ["your_account@xyz.com"],
     },
   },
   ```

4. Install Tailscale on your nodes:

   ```
   curl -fsSL https://tailscale.com/install.sh | sh
   ```
To deploy K3s with Tailscale integration enabled, you must add the following parameter on each of your nodes:

```
--vpn-auth="name=tailscale,joinKey=$AUTH-KEY"
```

or provide that information in a file and use the parameter:

```
--vpn-auth-file=$PATH_TO_FILE
```
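The file is expected to hold the same comma-separated string that would otherwise be passed on the command line; a minimal sketch (the key is a placeholder):

```
name=tailscale,joinKey=$AUTH-KEY
```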
Optionally, if you have your own Tailscale server (e.g. headscale), you can connect to it by appending `,controlServerURL=$URL` to the vpn-auth parameters.
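For example, a sketch using a self-hosted headscale instance (the URL is a placeholder):

```bash
--vpn-auth="name=tailscale,joinKey=$AUTH-KEY,controlServerURL=https://headscale.example.com"
```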
Next, you can proceed to create the server using the following command:

```
k3s server --token <token> --vpn-auth="name=tailscale,joinKey=<joinKey>" --node-external-ip=<TailscaleIPOfServerNode>
```
After executing this command, access the Tailscale admin console to approve the Tailscale node and subnet (if not already approved through autoApprovers).
Once the server is set up, connect the agents using:

```
k3s agent --token <token> --vpn-auth="name=tailscale,joinKey=<joinKey>" --server https://<TailscaleIPOfServerNode>:6443 --node-external-ip=<TailscaleIPOfAgentNode>
```
Again, approve the Tailscale node and subnet as you did for the server.
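To confirm the mesh came up, you can check that both nodes registered with their Tailscale addresses (typically 100.x.y.z) in the EXTERNAL-IP column:

```bash
# Run wherever kubectl is configured for this cluster:
kubectl get nodes -o wide
```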
If you have ACLs activated in Tailscale, you need to add an "accept" rule to allow pods to communicate with each other. Assuming the auth key you created automatically tags the Tailscale nodes with the tag `testing-k3s`, the rule should look like this:
"acls": [
{
"action": "accept",
"src": ["tag:testing-k3s", "10.42.0.0/16"],
"dst": ["tag:testing-k3s:*", "10.42.0.0/16:*"],
},
],
If you plan on running several K3s clusters using the same Tailscale network, please create appropriate ACLs to avoid IP conflicts or use different podCIDR subnets for each cluster.
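For example, a second cluster sharing the same tailnet could be started with its own, non-overlapping podCIDR via the `--cluster-cidr` server flag (`10.52.0.0/16` is an arbitrary choice; remember to also add it to the `autoApprovers` stanza):

```bash
# Server of a second cluster on the same tailnet; its podCIDR must not
# overlap the first cluster's default 10.42.0.0/16.
k3s server --token <token> \
    --vpn-auth="name=tailscale,joinKey=<joinKey>" \
    --cluster-cidr=10.52.0.0/16 \
    --node-external-ip=<TailscaleIPOfServerNode>
```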