Adventures in Freebernetes: Certs, Certs, DNS, More Certs

Part 10 of experiments in FreeBSD and Kubernetes: More Net Work Plus Initial Cluster Creation Tasks

See all posts in this series


In the previous post in this series, I finished creating a raw image of Ubuntu Server 20.04 with cloud-init configured, for use with CBSD to create bhyve virtual machines. I also installed the tools I would need to work through the Kubernetes the Hard Way tutorial to create a Kubernetes cluster completely manually, and I configured ipfw on my FreeBSD hypervisor to provide NAT (Network Address Translation) so my VMs, which have their own private network, can still reach sites on the Internet. I still have a few details to work out, mainly around Kubernetes cluster networking, but I should be ready to start.

A few details for reference:

  • My hypervisor is named nucklehead (it’s an Intel NUC) and is running FreeBSD 13.0-CURRENT
  • My home network, including the NUC, is in the 192.168.0.0/16 space
  • The Kubernetes cluster will be in the 10.0.0.0/8 block, which exists solely on my FreeBSD host
  • Yes, I am just hanging out in a root shell on the hypervisor.

Provisioning Compute Resources

I’ve already done the relevant initial steps and skipped those that are specific to GCP, so now I need to create the VMs for the cluster. In the previous post, I generated a VM settings file for CBSD to make creating the VMs consistently from the command line simpler.

Rabbit Hole #1: DNS

I started creating VMs before I remembered one network detail I had punted on in the previous post: hostname resolution. I could just go with the low-initial-friction-but-potentially-annoying-later-on solution of hardcoding the cluster members into /etc/hosts on all the VMs and the hypervisor, but that solution, while perfectly serviceable, would not be the harder way of solving my resolution issues. Yes, I need to set up a DNS server locally on FreeBSD.

FreeBSD offers the recursive-only server Unbound in its base system. However, I need an authoritative server for my .local domain. NSD (Name Server Daemon) can only operate as an authoritative server, so between the two, my DNS resolution problems would be solved. I install the dns/nsd port and start configuring.

I follow this Unbound/NSD tutorial for setting up the two services on FreeBSD to serve a private domain. I name my domain something.local because I just can’t think of anything clever right now. I get both services configured and now I can resolve stuff from the host.
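
For anyone following along, the relevant parts of the two configurations end up looking roughly like this. This is a sketch rather than my exact files: the listen addresses (the hypervisor's 10.0.0.1 cluster-facing IP), the 5353 port for NSD, and the zone file name are all assumptions.

# nsd.conf (sketch): authoritative for something.local, tucked away on a high port
server:
    ip-address: 127.0.0.1@5353
    hide-version: yes

zone:
    name: "something.local"
    zonefile: "something.local.zone"

# unbound.conf (sketch): recurse for everything else, hand something.local to NSD
server:
    interface: 10.0.0.1                 # address the VMs use as their resolver (assumption)
    access-control: 10.0.0.0/8 allow
    do-not-query-localhost: no          # NSD is listening on 127.0.0.1
    domain-insecure: "something.local"
    private-domain: "something.local"

stub-zone:
    name: "something.local"
    stub-addr: 127.0.0.1@5353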

root@nucklehead:~ # host www.google.com
www.google.com has address 142.250.72.196
www.google.com has IPv6 address 2607:f8b0:4005:80a::2004
root@nucklehead:~ # host worker-0.something.local
worker-0.something.local has address 10.10.0.20
root@nucklehead:~ #

Creating Instances

The Kubernetes the Hard Way tutorial uses the GCE e2-standard-2 machine type, which has 2 virtual CPUs and 8 GB of RAM. I need three VMs for the control plane and three for the worker nodes.

Apparently I also need to pass each node’s pod CIDR (they can’t share address ranges) in via the VM metadata. I will quickly try to accomplish this by creating a new cloud-init template for CBSD (by copying the centos7 template I’m currently using) in /usr/local/cbsd/modules/bsdconf.d/cloud-tpl and adding a field called pod-cidr to the meta-data file. If it works, great. If not, I may try to find another way to inject/distribute the value. Oh, who am I kidding. I will fiddle with it endlessly until I find a way to shove that value through cloud-init.

It turns out I also needed to update the default cloud-init template’s network configuration format from version 1 to version 2, because some settings, such as the default gateway, were not getting applied.

instance-id: %%ci_jname%%
local-hostname: %%ci_fqdn%%
pod-cidr: %%ci_pod_cidr%%
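
The version 2 network-config template ends up along these lines. Again, a sketch: the %%...%% placeholder names are modeled on the meta-data template above and may not match CBSD’s variables exactly, and gateway4 works for Ubuntu 20.04’s netplan even though newer releases prefer routes.

version: 2
ethernets:
  enp0s5:
    addresses: [ %%ci_ip4_addr%% ]
    gateway4: %%ci_gw4%%
    nameservers:
      search: [ something.local ]
      addresses: [ %%ci_nameserver_address%% ]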

I add the ci_pod_cidr field to my VM settings file and update my VM template in /usr/cbsd/etc/defaults to use the new cloud-init template. I also have to add the new variable to the script /usr/cbsd/modules/cloudinit so it will be interpolated in the generated files.
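
Nothing consumes the pod CIDR yet, but the point of pushing it through meta-data is that it becomes queryable inside the guest later. Something along these lines should work (a sketch; the exact key path in cloud-init's instance data for the NoCloud datasource is an assumption, and jq isn't necessarily installed):

# run on a node, as root, once it has booted
cloud-init query ds.meta_data                     # dumps the meta-data, including pod-cidr, as JSON
POD_CIDR=$(cloud-init query ds.meta_data | jq -r '."pod-cidr"')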

For reference, here are the values I am using when creating my cluster instances:

  • VM size: 2 CPUs, 8 GB RAM
  • Node network CIDR block: 10.10.0.0/24
  • Pod network CIDR block: 10.100.0.0/16
  • Root disk: 10 GB. OK, the tutorial uses 200 GB root disks, but that’s the size of my entire ZFS pool, so no. If I regret this later, I should be able to resize the underlying ZFS volumes (see the sketch just after this list). Theoretically.
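
For the record, that resize escape hatch would look something like this. A sketch only: I'm assuming CBSD backs the disk with a zvol whose exact name I'd have to look up, and that the guest has the usual Ubuntu cloud-image layout with its root filesystem on /dev/vda1.

# on the hypervisor: find and grow the zvol backing the VM disk (the name here is a guess)
zfs list -t volume | grep controller-0
zfs set volsize=20G zroot/ROOT/default/controller-0/dsk1.vhd
# inside the guest: grow the partition and the filesystem to match
sudo growpart /dev/vda 1
sudo resize2fs /dev/vda1

With that noted, on to actually creating the controllers: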
root@nucklehead:~ # for i in 0 1 2; do
cbsd bcreate jconf=/root/instance.jconf jname="controller-$i" \
ci_ip4_addr="10.10.0.1${i}/8" ci_jname="controller-$i" \
ci_fqdn="controller-${i}.something.local" ip_addr="10.10.0.1${i}"
done
Global VM ZFS guid: 1697657698115524445
To edit VM properties use: cbsd bconfig jname=controller-0
To start VM use: cbsd bstart controller-0
To stop VM use: cbsd bstop controller-0
To remove VM use: cbsd bremove controller-0
For attach VM console use: cbsd blogin controller-0
Creating controller-0 complete: Enjoy!
auto-generate cloud-init settings: /usr/cbsd/jails-system/controller-0/cloud-init
Global VM ZFS guid: 3355365328885916490
To edit VM properties use: cbsd bconfig jname=controller-1
To start VM use: cbsd bstart controller-1
To stop VM use: cbsd bstop controller-1
To remove VM use: cbsd bremove controller-1
For attach VM console use: cbsd blogin controller-1
Creating controller-1 complete: Enjoy!
auto-generate cloud-init settings: /usr/cbsd/jails-system/controller-1/cloud-init
Global VM ZFS guid: 14710727303935464711
To edit VM properties use: cbsd bconfig jname=controller-2
To start VM use: cbsd bstart controller-2
To stop VM use: cbsd bstop controller-2
To remove VM use: cbsd bremove controller-2
For attach VM console use: cbsd blogin controller-2
Creating controller-2 complete: Enjoy!
auto-generate cloud-init settings: /usr/cbsd/jails-system/controller-2/cloud-init
root@nucklehead:~ # for i in 0 1 2; do
cbsd bstart controller-$i
done
cloud-init: enabled
vm_iso_path: cloud-ubuntuserver-base-amd64-20.04.1
cloud init image initialization..
Clone cloud image into first/system vm disk (zfs clone method)
/sbin/zfs get -Ht snapshot userrefs zroot/ROOT/default/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw@boot-controller-0
Eject cloud source: media mode=detach name=cloud-ubuntuserver-base-amd64-20.04.1 path=/usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw type=iso jname=controller-0
DELETE FROM media WHERE name="cloud-ubuntuserver-base-amd64-20.04.1" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw" AND jname="controller-0"
vm_iso_path: changed
Detach to: controller-0
All CD/ISO ejected: controller-0
VRDP is enabled. VNC bind/port: 192.168.0.7:5901
For attach VM console, use: vncviewer 192.168.0.7:5901
Resolution: 1024×768.
em0
bhyve renice: 1
Execute master script: cloud_init_set_netname.sh
:: /usr/cbsd/jails-system/controller-0/master_prestart.d/cloud_init_set_netname.sh
Waiting for PID.
PID: 25683
CBSD setup: bhyve ipfw counters num: 99/102
cloud-init: enabled
vm_iso_path: cloud-ubuntuserver-base-amd64-20.04.1
cloud init image initialization..
Clone cloud image into first/system vm disk (zfs clone method)
/sbin/zfs get -Ht snapshot userrefs zroot/ROOT/default/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw@boot-controller-1
Eject cloud source: media mode=detach name=cloud-ubuntuserver-base-amd64-20.04.1 path=/usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw type=iso jname=controller-1
DELETE FROM media WHERE name="cloud-ubuntuserver-base-amd64-20.04.1" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw" AND jname="controller-1"
vm_iso_path: changed
Detach to: controller-1
All CD/ISO ejected: controller-1
VRDP is enabled. VNC bind/port: 192.168.0.7:5902
For attach VM console, use: vncviewer 192.168.0.7:5902
Resolution: 1024×768.
em0
bhyve renice: 1
Execute master script: cloud_init_set_netname.sh
:: /usr/cbsd/jails-system/controller-1/master_prestart.d/cloud_init_set_netname.sh
Waiting for PID.
PID: 27304
CBSD setup: bhyve ipfw counters num: 103/104
cloud-init: enabled
vm_iso_path: cloud-ubuntuserver-base-amd64-20.04.1
cloud init image initialization..
Clone cloud image into first/system vm disk (zfs clone method)
/sbin/zfs get -Ht snapshot userrefs zroot/ROOT/default/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw@boot-controller-2
Eject cloud source: media mode=detach name=cloud-ubuntuserver-base-amd64-20.04.1 path=/usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw type=iso jname=controller-2
UPDATE media SET jname='-' WHERE jname="controller-2" AND name="cloud-ubuntuserver-base-amd64-20.04.1" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw"
vm_iso_path: changed
Detach to: controller-2
All CD/ISO ejected: controller-2
VRDP is enabled. VNC bind/port: 192.168.0.7:5903
For attach VM console, use: vncviewer 192.168.0.7:5903
Resolution: 1024×768.
em0
bhyve renice: 1
Execute master script: cloud_init_set_netname.sh
:: /usr/cbsd/jails-system/controller-2/master_prestart.d/cloud_init_set_netname.sh
Waiting for PID.
PID: 29054
CBSD setup: bhyve ipfw counters num: 106/107
root@nucklehead:~ #

Then I do the same thing again, except the hostnames have the prefix worker- and I also pass the ci_pod_cidr key-value pair to cbsd bcreate.
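
For completeness, the worker loop looks roughly like the following; carving each node's pod CIDR out of 10.100.0.0/16 as 10.100.${i}.0/24 is my convention, not something CBSD cares about.

for i in 0 1 2; do
  cbsd bcreate jconf=/root/instance.jconf jname="worker-$i" \
    ci_ip4_addr="10.10.0.2${i}/8" ci_jname="worker-$i" \
    ci_fqdn="worker-${i}.something.local" ip_addr="10.10.0.2${i}" \
    ci_pod_cidr="10.100.${i}.0/24"
  cbsd bstart "worker-$i"
done

Afterwards, cbsd bls shows the full set: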

root@nucklehead:~ # cbsd bls
JNAME         JID    VM_RAM  VM_CURMEM  VM_CPUS  PCPU  VM_OS_TYPE  IP4_ADDR    STATUS  VNC
controller-0  25683  8192    0          2        0     linux       10.10.0.10  On      192.168.0.7:5901
controller-1  27304  8192    0          2        0     linux       10.10.0.11  On      192.168.0.7:5902
controller-2  29054  8192    0          2        0     linux       10.10.0.12  On      192.168.0.7:5903
ubuntu-base   0      1024    0          1        0     linux       DHCP        Off     192.168.0.7:5900
worker-0      40784  8192    0          2        0     linux       10.10.0.20  On      192.168.0.7:5904
worker-1      42604  8192    0          2        0     linux       10.10.0.21  On      192.168.0.7:5905
worker-2      44492  8192    0          2        0     linux       10.10.0.22  On      192.168.0.7:5906
root@nucklehead:~ #
My cluster VMs, plus the ubuntu-base VM, which is turned off

The instances are all running with the correct hostnames and IP addresses, and DNS resolves properly on the host and in the VMs. CBSD sets the SSH key for the ubuntu user in the VM, so everything is ready for the next step.

Generating the Certificate Authority and Certs

This section is mostly straightforward, other than replacing the gcloud commands with local equivalents and omitting the non-existent external IP address. Since I set up DNS, I can just do this to look up the instance IP addresses: INTERNAL_IP=$(host $instance | awk '{print $4}')
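
Concretely, the tutorial's kubelet client certificate loop ends up looking something like this with the gcloud lookup swapped for a DNS query; the exact set of names passed to -hostname is my choice.

for instance in worker-0 worker-1 worker-2; do
  INTERNAL_IP=$(host ${instance} | awk '{print $4}')
  cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -hostname=${instance},${instance}.something.local,${INTERNAL_IP} \
    -profile=kubernetes \
    ${instance}-csr.json | cfssljson -bare ${instance}
done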

Rabbit Hole #2: Fake Load Balancing

However, when it comes time to generate the API server certificate, I need to supply the IP address for the endpoint, which, in the tutorial, is a Google Cloud load balancer. As I mentioned in the previous post, I don’t really want to run my own load balancer service for this experiment. I thought I could use FreeBSD’s carp(4) module to simulate basic load balancer functionality, but it doesn’t quite work that way.

However, I can use the ipfw(4) module, which I’m already using to handle Network Address Translation for my cluster network.

root@nucklehead:~ # ipfw add 200 prob 0.33 fwd 10.10.0.10 ip from any to 10.10.0.1 keep-state
00200 prob 0.330000 fwd 10.10.0.10 ip from any to 10.10.0.1 keep-state :default
root@nucklehead:~ # ipfw add 201 prob 0.5 fwd 10.10.0.11 ip from any to 10.10.0.1 keep-state
00201 prob 0.500000 fwd 10.10.0.11 ip from any to 10.10.0.1 keep-state :default
root@nucklehead:~ # ipfw add 202 fwd 10.10.0.12 ip from any to 10.10.0.1 keep-state
00202 fwd 10.10.0.12 ip from any to 10.10.0.1 keep-state :default
root@nucklehead:~ # for i in 0 1 2; do
ssh -i ~cbsd/.ssh/id_rsa ubuntu@controller-$i sudo ip address add 10.10.0.1/32 dev enp0s5:1
done
root@nucklehead:~ # ssh -o StrictHostKeyChecking=false -i ~cbsd/.ssh/id_rsa ubuntu@10.10.0.1
[…]
ubuntu@controller-2:~$ logout
Connection to 10.10.0.1 closed.
root@nucklehead:~ # ssh -o StrictHostKeyChecking=false -i ~cbsd/.ssh/id_rsa ubuntu@10.10.0.1
[…]
ubuntu@controller-0:~$

Here I create three forwarding rules, one for each controller host. Each rule matches traffic from any source to 10.10.0.1, the IP address I chose for the kube-apiserver endpoint, but each forwards to a different controller. The prob argument gives rule 200 roughly a one-in-three chance of matching; rule 201 fires half of the time rule 200 did not (0.67 × 0.5 ≈ 0.33); and rule 202 always matches if the first two did not. The net effect is that each backend has about a one-third chance of receiving any given connection made to 10.10.0.1. The keep-state argument ensures that all the packets of an established connection keep going to the same backend server.

Perhaps not so coincidentally, kube-proxy in iptables mode uses an analogous trick, a cascade of probability-matched rules, to spread traffic for a Service’s virtual IP address across that Service’s backend pods.
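
For comparison, the rules kube-proxy writes for a Service with three endpoints look something like this (heavily abbreviated, with made-up chain names and addresses), using the same cascading probabilities:

-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.33333 -j KUBE-SEP-A
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.50000 -j KUBE-SEP-B
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-C
-A KUBE-SEP-A -p tcp -j DNAT --to-destination 10.100.0.5:6443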

Anyway, I also add the kube-apiserver’s 10.10.0.1 virtual IP to each of the controllers so they will accept traffic to that address. And, as you can see above, when I ssh to 10.10.0.1 (sshd being the only TCP service the VMs have running at this point), I may land on a different host each time I connect.

I also add an A record for kubernetes, pointing at the new IP, to my something.local zone.
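
In zone-file terms that's just one more record in something.local.zone (plus a bump of the SOA serial), followed by a reload with nsd-control reload if NSD's remote control is enabled, or a plain service restart otherwise:

; something.local.zone (excerpt)
kubernetes      IN      A       10.10.0.1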

Back to the Certificates

Now that I have the virtual endpoint set up, I can use its IP address to generate the kube-apiserver certificate.
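
In place of the tutorial's load balancer address, the API server certificate gets the 10.10.0.1 VIP plus the controller IPs and the DNS name I just added, roughly like this. The 10.32.0.1 entry is the first IP of the tutorial's service CIDR and will need to match whatever service range I actually configure later, and KUBERNETES_HOSTNAMES is the tutorial's list of in-cluster API server names (kubernetes, kubernetes.default, kubernetes.default.svc, and so on).

KUBERNETES_PUBLIC_ADDRESS=10.10.0.1
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=10.32.0.1,10.10.0.10,10.10.0.11,10.10.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.something.local,${KUBERNETES_HOSTNAMES} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes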

Generating Kubernetes Configuration Files for Authentication

This section is very straightforward. The only modifications necessary involved replacing a couple of gcloud commands.
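
Specifically, the gcloud call that fetches the load balancer's address just becomes the fake load balancer's VIP, or a DNS lookup of it:

# instead of the tutorial's gcloud address lookup
KUBERNETES_PUBLIC_ADDRESS=10.10.0.1
# or, since DNS is in place:
KUBERNETES_PUBLIC_ADDRESS=$(host kubernetes.something.local | awk '{print $4}')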

Generating the Data Encryption Config and Key

This section requires swapping the Linux-standard base64 command for FreeBSD’s b64encode: ENCRYPTION_KEY=$(head -c 32 /dev/urandom | b64encode -r -)
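
For context, that key goes straight into the encryption config file the tutorial has you write, along these lines:

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF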


Now that my cluster has all the certificates and other cluster authentication files, the next post will pick up at the next step: bootstrapping etcd.

