Page 8: Bootstrapping the Kubernetes Control Plane
Most of this section should be executed as written, with the following exceptions:
- Set INTERNAL_IP as we did on the previous page (a reminder sketch follows this list).
- If you're on a slow Internet connection, you may also want to download the tarballs once and copy them from your FreeBSD host to the controller VMs.
- If you are using your own IP address ranges, you will need to make some additional changes.
- In place of the last section, The Kubernetes Frontend Load Balancer, follow these instructions:
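As a reminder, setting INTERNAL_IP on each controller might look something like the sketch below; the exact commands from the previous page may differ, and the values assume the default controller addresses used in this guide.

# Run in each controller's shell; use that controller's own address.
INTERNAL_IP=10.240.0.10   # 10.240.0.11 and 10.240.0.12 on the other controllers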
The (Replacement) Kubernetes Frontend Load Balancer
We have three control plane servers, each running the Kubernetes API service. In a production system we would want a load balancer that also performs health checks and avoids sending requests to unavailable servers, like the Google Cloud load balancing service used in the original. For this experimental cluster, though, a simple round-robin scheme built from ipfw rules is enough, using the virtual IP address 10.240.0.2.
First we create the ipfw rules on the FreeBSD host, then we configure the three controllers to accept traffic for the virtual IP address. Note that this second IP address will not persist across VM reboots; if you want to save the configuration, add 10.240.0.2/32 to the addresses field in /etc/netplan/50-cloud-init.yaml on each controller.
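The exact ipfw commands are not reproduced here, but a minimal sketch matching the rule numbers and probabilities described below might look like this on the FreeBSD host; the Kubernetes API port (6443) and the tcp/fwd form are assumptions, so adapt them to your cluster.

# Spread requests for the virtual IP 10.240.0.2 across the three controllers.
# 6443 is the assumed Kubernetes API server port.
ipfw add 300 prob 0.33 fwd 10.240.0.10 tcp from any to 10.240.0.2 6443   # ~1/3 of requests
ipfw add 301 prob 0.50 fwd 10.240.0.11 tcp from any to 10.240.0.2 6443   # half of the rest
ipfw add 302 fwd 10.240.0.12 tcp from any to 10.240.0.2 6443             # everything left over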
These rules look odd, but each of the three controllers should get 1/3 of the requests over time. (Rules are evaluated in numerical order.)
- Rule 300 gives 10.240.0.10 a 1/3 probability.
- Rule 301 gives 10.240.0.11 a 0.5 probability, but that is half of the 2/3 of traffic that falls through rule 300, so its overall chance is 1/2 * 2/3 = 1/3.
- Rule 302 matches 10.240.0.12 with 100% probability, catching the final 1/3 of traffic that falls through the first two rules.
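On the controller side (in the controller tmux session), accepting traffic for the virtual IP might look like the following minimal sketch; the interface name ens3 is an assumption, so substitute whatever name your controller VMs actually use.

# Add the virtual IP for the current boot only (it will not survive a reboot):
sudo ip addr add 10.240.0.2/32 dev ens3

# To make it persistent, also list it under the interface's addresses in
# /etc/netplan/50-cloud-init.yaml, for example:
#   addresses:
#     - 10.240.0.2/32
# and then apply the configuration:
sudo netplan apply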
After this section, we can exit the controller tmux session.
What a fantastic and interesting job you’ve done! I will definitely try!
Question – as far as I understand, you are not using any K8s CNI plugin (Calico, Flannel, …). How does your cluster work with multiple nodes (pod IP addresses, connectivity)?
It is actually using a CNI plugin (https://github.com/containernetworking/plugins) although it just creates a basic bridge for the container network. Most CNI plugins should work fine on this cluster, which does actually have three worker nodes, and I’ve tested pod connectivity between nodes. A simple test for full CNI functionality would be to install Calico and test a NetworkPolicy.
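For instance (a standard Kubernetes example, not from the original post), after installing Calico you could apply a default deny-ingress NetworkPolicy in a test namespace and confirm that pod-to-pod traffic there is blocked:

# Apply a deny-all-ingress policy in a (hypothetical) test namespace.
kubectl create namespace np-test
kubectl apply -n np-test -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF
# With Calico enforcing the policy, pods in np-test should stop accepting
# inbound connections; deleting the policy should restore connectivity.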