In this section, we will perform the bulk of our FreeBSD-specific infrastructure configuration; there are no corresponding steps to follow in the original tutorial. There are a lot of steps here, but with few exceptions, this is where most of the FreeBSD customization work happens.
This tutorial will use the private (RFC1918) IPv4 CIDR blocks and addresses from the original tutorial, where appropriate. These are all in the 10.0.0.0/8 block; if you have a collision in your existing network, you can use another block. Note: If you do use different IP blocks, you will need to make a number of changes to files and commands.
10.240.0.0/24 – Cluster VMs and “external” endpoints (those you will need to reach from the FreeBSD host)
I very strongly recommend using these network allocations if at all possible, both to avoid frequent edits and to make troubleshooting less confusing if something in the cluster does not work.
3.1.1 Pick a .local Zone for DNS
This zone just needs to resolve locally on the FreeBSD host. I’m going with hardk8s.local because who doesn’t like a bad pun?
3.2 Create the Linux VMs
3.2.1 Initialize CBSD
If you haven’t run CBSD on your FreeBSD host before, you will need to set it up. You can use the seed file at ~/src/freebernetes/harder-way/cbsd/initenv.conf. Edit it first to set node_name to your FreeBSD host’s name and to change jnameserver and nodeippool if you are using a private network other than 10.0.0.0/8.
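As a rough sketch, the seed-file fields in question and a non-interactive run might look like this. The values are placeholders for my layout, and the exact initenv invocation is my assumption, so consult the CBSD documentation if it differs:

node_name="your-freebsd-host"     # must match your FreeBSD host's name
jnameserver="10.0.0.1"            # DNS endpoint we will add to the bridge
nodeippool="10.0.0.0/8"           # private block the VM networks live in

~ # cbsd initenv inter=0 ~/src/freebernetes/harder-way/cbsd/initenv.conf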
Note that CBSD version 12.2.3 seems to have a bug where it enables cbsdrsyncd even if you configure it for one node and one SQL replica only. That’s why we are disabling it and stopping the service. (Not critical, but I get annoyed by random, unused services hanging around.)
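If you want to do the same, the usual FreeBSD commands work; this assumes the rc.d script and rc.conf knob carry the same name as the service:

~ # sysrc cbsdrsyncd_enable=NO
~ # service cbsdrsyncd stop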
3.2.2 Configure VM Profile
We will use the existing CBSD cloud image for Ubuntu Linux 20.04, but we want to create our own profile (VM configuration settings and options). Copy ~/src/freebernetes/harder-way/cbsd/usr.cbsd/etc/defaults/vm-linux-cloud-ubuntuserver-kubernetes-base-amd64-20.04.conf to /usr/cbsd/etc/defaults/vm-linux-cloud-ubuntuserver-kubernetes-base-amd64-20.04.conf.
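In shell terms, adjusting the source path to wherever you cloned the repository:

~ # cp ~/src/freebernetes/harder-way/cbsd/usr.cbsd/etc/defaults/vm-linux-cloud-ubuntuserver-kubernetes-base-amd64-20.04.conf /usr/cbsd/etc/defaults/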
This profile uses a custom cloud-init field, ci_pod_cidr, to pass a different address block to each worker node, which they will use to assign unique IP addresses to pods. As CBSD does not know about this setting and does not support ad-hoc parameters, we’re going to update the bhyve VM creation script and use our own cloud-init template.
Copy ~/src/freebernetes/harder-way/cbsd/instance.jconf and update ci_gw4, ci_nameserver_search, and ci_nameserver_address as needed. If you want to be able to log in as the ubuntu user on the console via VNC, you can assign a password to ci_user_pw_user, but note that this is a plain-text field.
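As an illustration only, with the network plan and zone above those fields might end up looking like this (the addresses and password are placeholders; keep whatever variable syntax the file already uses):

ci_gw4="10.240.0.1"                   # gateway on the VM subnet
ci_nameserver_address="10.0.0.1"      # DNS endpoint we will add to bridge1
ci_nameserver_search="hardk8s.local"  # the .local zone picked earlier
ci_user_pw_user="changeme"            # optional, stored in plain text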
When you run cbsd bcreate, if CBSD does not have a local copy of the installation image, it will prompt you, asking whether to download it. After the first download, it re-uses the local image.
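For the record, I believe bcreate accepts a jconf= argument pointing at your per-VM copy of instance.jconf, but treat the exact form below as an assumption and check the CBSD help for bcreate if it complains:

~ # cbsd bcreate jconf=/path/to/your/instance-controller.jconf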
Note that you cannot yet connect to the VMs. CBSD creates a bridge interface the first time you create a VM, and we need to add our gateways to that interface. In most cases, CBSD will use the bridge1 interface.
The 10.0.0.1/32 address does not act as a gateway for any of the subnets; instead, we will use it as the DNS server endpoint for the virtual networks.
3.3.1 Add Bridge Gateways
~ # ifconfig bridge1 alias 10.0.0.1/32
~ # ifconfig bridge1 alias 10.32.0.1/24
~ # ifconfig bridge1 alias 10.200.0.1/16
~ # ifconfig bridge1 alias 10.240.0.1/24
~ # ifconfig bridge1
bridge1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
Note that these changes will not survive across reboots. I have not tested if adding a persistent entry for bridge1 in /etc/rc.conf would work as expected with CBSD.
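For reference, if you want to experiment with making them persistent, the standard rc.conf alias syntax for these addresses would be the following. Again, this is untested in combination with CBSD, which creates and manages bridge1 itself:

ifconfig_bridge1_alias0="inet 10.0.0.1/32"
ifconfig_bridge1_alias1="inet 10.32.0.1/24"
ifconfig_bridge1_alias2="inet 10.200.0.1/16"
ifconfig_bridge1_alias3="inet 10.240.0.1/24"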
3.3.2 Configure NAT
We can reach our VMs just fine from the host, but the VMs can’t talk to the Internet because only the FreeBSD host can route to this 10.0.0.0/8 block. We will use ipfw as a NAT (Network Address Translation) service. These steps will enable ipfw with open firewall rules and then configure the NAT. These changes will take effect immediately and persist across reboots.
Note that my host’s physical interface is named em0. You may have to alter some commands if yours has a different name.
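A minimal sketch of those steps, assuming em0 and the stock in-kernel ipfw NAT support; the firewall_* and gateway_enable knobs are standard rc.conf variables, so adjust only the interface name to match your host:

~ # sysrc gateway_enable=YES
~ # sysrc firewall_enable=YES
~ # sysrc firewall_type=open
~ # sysrc firewall_nat_enable=YES
~ # sysrc firewall_nat_interface=em0
~ # sysctl net.inet.ip.forwarding=1
~ # service ipfw start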
We need a way to resolve our VM host names: pick a private .local DNS domain, configure an authoritative server for that domain, and then set up a local caching server that knows about our domain but can still resolve external addresses for us. We will follow this nsd/unbound tutorial closely.
3.4.1 Enable unbound for recursive/caching DNS
FreeBSD has a caching (lookup-only) DNS service called unbound in the base system. It will use the nameservers configured in the local /etc/resolv.conf for external address lookups and the local nsd service (configured next) for lookups to our private zone. Copy unbound.conf and make any edits as necessary to IP addresses or your local zone name.
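For orientation, the pieces of that configuration that matter here look roughly like this: the listen addresses follow the network plan above, and the nsd address/port in the stub zone is an assumption that you should match to the nsd configuration in the next step:

server:
    interface: 127.0.0.1
    interface: 10.0.0.1
    access-control: 10.0.0.0/8 allow
    do-not-query-localhost: no
    domain-insecure: "hardk8s.local"

stub-zone:
    name: "hardk8s.local"
    stub-addr: 127.0.0.1@10053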
You will also want to update the FreeBSD host’s /etc/resolv.conf to add your local domain to the search list and add an entry for nameserver 127.0.0.1.
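With the example zone above, the relevant /etc/resolv.conf lines end up looking like this:

search hardk8s.local
nameserver 127.0.0.1
# keep your existing upstream nameserver entries after this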
We will use nsd, a lightweight, authoritative-only service, for our local zone. After copying the files, you can edit/rename the copied files before proceeding to make changes as necessary to match your local domain or IP addresses.
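As a rough sketch of the nsd.conf zone declaration (the listen port here is an assumption; match it to the stub-addr you gave unbound):

server:
    ip-address: 127.0.0.1@10053

zone:
    name: "hardk8s.local"
    zonefile: "hardk8s.local.zone"

And a minimal zone file to go with it, where the host names and addresses are illustrative placeholders for your actual VMs:

$ORIGIN hardk8s.local.
$TTL 3600
@             IN SOA ns.hardk8s.local. admin.hardk8s.local. ( 2021010101 7200 3600 1209600 3600 )
              IN NS  ns.hardk8s.local.
ns            IN A   10.0.0.1
controller-0  IN A   10.240.0.10
worker-0      IN A   10.240.0.20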
What a fantastic and interesting job you’ve done! I will definitely try!
Question – as far as I understand, you are not using any K8s CNI (Calico, Flannel, …). How does your cluster work with multiple nodes (pod IP addresses, connectivity)?
It is actually using a CNI plugin (https://github.com/containernetworking/plugins) although it just creates a basic bridge for the container network. Most CNI plugins should work fine on this cluster, which does actually have three worker nodes, and I’ve tested pod connectivity between nodes. A simple test for full CNI functionality would be to install Calico and test a NetworkPolicy.