Adventures in Freebernetes: Getting Ready to do Kubernetes the Harder Way

Part 9 of experiments in FreeBSD and Kubernetes: Prep Work for Creating a VM Kubernetes Cluster

Overview

When I started this series, I said it was about the possibilities of mixing Kubernetes and FreeBSD, yet so far I have mostly been examining the state of virtualization, focused on bhyve and mainly Linux VMs, without really talking about Kubernetes. That changes now.

FreeBSD supports several x86 and OS-level virtualization options, including Xen and VirtualBox, plus the venerable FreeBSD jail(2) system (which I plan to work with in later posts). It also offers Linux binary compatibility, although I have not-so-warm memories of chasing down all the Linux libraries I needed to run dynamically-linked binaries. For the experiments in this series using full virtualization of OS environments, though, I've focused on bhyve, part of the FreeBSD base system, both because it's pretty powerful and because it doesn't require installing third-party software. Also, bhyve has never sparked a forced reboot of what really seemed like half of all AWS EC2 instances.

Unfortunately, there is currently no way to run Kubernetes nodes natively on FreeBSD without resorting to Linux or Windows emulation or virtualization. You can still build a Kubernetes node (or cluster!) on top of a FreeBSD system, but you need to pull in another operating system somewhere to get it running.

I plan on following Kelsey Hightower's Kubernetes the Hard Way tutorial for creating a Kubernetes cluster manually, without any sophisticated Kubernetes installation tools. Instead of installing on Google Compute Engine, I will create the entire Kubernetes cluster on my FreeBSD host, using Ubuntu VMs in bhyve, managed with CBSD, for the individual nodes. (CBSD is a third-party package, but as I demonstrated in earlier posts in this series, managing bhyve VMs requires a lot of manual steps, which CBSD greatly simplifies.)

I’m following the Kubernetes the Hard Way tutorial in part to enhance my understanding of the myriad interlocked components of a cluster, but also to see what, if any, adaptations may be required to get a functional cluster up in this bhyve virtualized environment.

Version List

  • FreeBSD host/hypervisor: 13.0-CURRENT, build Oct 22 (I should probably update this)
  • CBSD: 12.2.3
  • Ubuntu Server: 20.04
  • Kubernetes: 1.18.6

Make the Net Work

Networking is going to be the biggest difference from the tutorial, which relies on the particular networking capabilities of Google Cloud Platform. A few things I will need to figure out:

  • Hostname resolution: I'll be giving the nodes static IP addresses, but the nodes still need to be able to resolve each other's hostnames. I could hardcode every hostname in /etc/hosts on every machine, which I may end up doing anyway, but I'll check whether there's an obvious, simple solution I can run on FreeBSD.
  • Load balancing: I don't really want to run my own load balancer on the hypervisor. Fortunately, FreeBSD supports the Common Address Redundancy Protocol, which allows multiple hosts to share a virtual IP address for service load balancing and failover. I just need to load the carp(4) (yes, I keep typing "crap") kernel module and configure it; see the sketch after this list.
  • Kubernetes CNI (Container Network Interface): I think the standard kubenet plugin will work in my little VLAN, but I'll have to see when I get to that point.
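
As a rough idea of what the carp(4) configuration might look like on the FreeBSD side, here is an rc.conf sketch. The interface name, VHID, password, and virtual IP are all hypothetical placeholders, not values from this cluster:

# /etc/rc.conf (sketch): load carp(4) at boot and join virtual host ID 1,
# sharing 10.0.0.100 with any other host configured with the same VHID
kld_list="carp"
ifconfig_em0_alias0="inet vhid 1 pass changeme alias 10.0.0.100/32"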

Once again, I’ll be using the FreeBSD bridge(4) (since I’m practically an expert now) (not really) and a private RFC1918 IPv4 block for all the cluster networks.

Great Image Bake-Off Season 2

CBSD already has support for cloud-booting Ubuntu Server 20.04, but I decide to build my own disk image locally. I worked out the steps while creating the CBSD cloud configuration for Alpine Linux, which took a lot of trial and error to get working, so I may as well put that experience to use.

As I did with Alpine, I start by creating a VM from the Ubuntu installation ISO image. CBSD has a VM profile for a manual install of Ubuntu Server 20.04, so that simplifies things. I run cbsd bconstruct-tui and configure my VM.

[Screenshot: the CBSD text UI configuring the ubuntu-base VM]
Global VM ZFS guid: 10282600222672768822
To edit VM properties use: cbsd bconfig jname=ubuntu-base
To start VM use: cbsd bstart ubuntu-base
To stop VM use: cbsd bstop ubuntu-base
To remove VM use: cbsd bremove ubuntu-base
For attach VM console use: cbsd blogin ubuntu-base
Creating ubuntu-base complete: Enjoy!
root@nucklehead:~ # cbsd bstart ubuntu-base
Looks like /usr/cbsd/vm/ubuntu-base/dsk1.vhd is empty.
May be you want to boot from CD?
[yes(1) or no(0)]
yes
Temporary boot device: cd
vm_iso_path: iso-Ubuntu-Server-20.04.1-amd64
media found: iso-Ubuntu-Server-20.04.1-amd64 -> /usr/cbsd/src/iso/cbsd-iso-ubuntu-20.04.1-live-server-amd64.iso
VRDP is enabled. VNC bind/port: 192.168.0.11:5900
For attach VM console, use: vncviewer 192.168.0.11:5900
Resolution: 1024×768.
em0
bhyve renice: 1
Waiting for PID.
PID: 6098
root@nucklehead:~ #

I use all the Ubuntu installer defaults, with the exception of disabling LVM. The VM has a 10 GB virtual hard drive; other than the EFI partition, all of it goes to the root file system, with no swap space. I also enable sshd.

After the installation finishes, I restart the VM via CBSD, which automatically unmounts the installation medium, and I check to make sure that cloud-init will actually run at boot. Not going to fool me twice! (The Ubuntu installer had already installed it and configured it to run at boot.)
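
A quick sanity check from inside the guest looks something like this (cloud-init ships a status subcommand on Ubuntu 20.04):

# inside the VM: confirm cloud-init is installed and enabled at boot
systemctl is-enabled cloud-init
cloud-init status --long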

I stop the VM and dd the virtual hard drive image into ~cbsd/src/iso.
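
The copy boils down to something like the following sketch. The source path appears in the bstart output above; the destination name follows CBSD's cbsd-cloud-*.raw convention, but check your profile for the exact name it expects:

cbsd bstop ubuntu-base
dd if=/usr/cbsd/vm/ubuntu-base/dsk1.vhd \
   of=/usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw bs=1m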

Next I need to create the CBSD profile. I copy the CBSD-distributed cloud configuration file in ~cbsd/etc/defaults for Ubuntu Server and update it to use my image.
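
Roughly like this; the profile file name here is a hypothetical reconstruction, and CBSD reads local overrides from ~cbsd/etc/, so the shipped defaults stay untouched:

cp ~cbsd/etc/defaults/vm-linux-cloud-ubuntuserver-base-amd64-20.04.1.conf \
   ~cbsd/etc/
# then edit the copy so the image variables point at the locally built
# cbsd-cloud-*.raw file instead of the CBSD-distributed cloud image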

The profile uses the centos7 cloud-init template because most of the CBSD-distributed profiles use it, and who am I to argue? Alpine Linux did have some incompatibility issues with this template; if Ubuntu has issues too, I'll have to debug them.

It Doesn’t Work On My Machine

I should test the image and configuration. Oh, right. Thus far, I’ve been creating VMs by using the CBSD text UI. I should probably figure out the command-line method, right?

It turns out that cbsd bconstruct-tui will write a configuration file instead of creating the VM if you decline the final creation step (up to now, I had always accepted it). I run cbsd bconstruct-tui, configure my VM, and decline; it leaves a .jconf file for me, which I copy for later use. Then I create the VM from that file.

You can make now: cbsd bcreate jconf=/usr/cbsd/ftmp/ubuntu-test1.76649.jconf
root@nucklehead:~ # cp /usr/cbsd/ftmp/ubuntu-test1.76649.jconf ubuntu-test.jconf
root@nucklehead:~ # cbsd bcreate jconf=/usr/cbsd/ftmp/ubuntu-test1.76649.jconf
Global VM ZFS guid: 14144331331543102829
To edit VM properties use: cbsd bconfig jname=ubuntu-test1
To start VM use: cbsd bstart ubuntu-test1
To stop VM use: cbsd bstop ubuntu-test1
To remove VM use: cbsd bremove ubuntu-test1
For attach VM console use: cbsd blogin ubuntu-test1
Creating ubuntu-test1 complete: Enjoy!
auto-generate cloud-init settings: /usr/cbsd/jails-system/ubuntu-test1/cloud-init
root@nucklehead:~ # cbsd bstart ubuntu-test1
cloud-init: enabled
vm_iso_path: cloud-ubuntuserver-base-amd64-20.04.1
Original size: 10g, real referenced size/data: 2g
Converting /usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw -> /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw: 10g…
WIP: [0%…5%…21%…35%…35%…46%…55%…66%…78%…94%…96%…99%…100%]
2560+1 records in
2560+1 records out
10737549312 bytes transferred in 40.258180 secs (266717205 bytes/sec)
cloud init image initialization..
Clone cloud image into first/system vm disk (zfs clone method)
/sbin/zfs get -Ht snapshot userrefs zroot/ROOT/default/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw@boot-ubuntu-test1
Eject cloud source: media mode=detach name=cloud-ubuntuserver-base-amd64-20.04.1 path=/usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw type=iso jname=ubuntu-test1
UPDATE media SET jname='-' WHERE jname="ubuntu-test1" AND name="cloud-ubuntuserver-base-amd64-20.04.1" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw"
vm_iso_path: changed
Detach to: ubuntu-test1
All CD/ISO ejected: ubuntu-test1
VRDP is enabled. VNC bind/port: 192.168.0.7:5901
For attach VM console, use: vncviewer 192.168.0.7:5901
Resolution: 1024×768.
em0
bhyve renice: 1
Execute master script: cloud_init_set_netname.sh
:: /usr/cbsd/jails-system/ubuntu-test1/master_prestart.d/cloud_init_set_netname.sh
Waiting for PID.
PID: 83147
root@nucklehead:~ #

CBSD imports my raw disk image and starts the VM. But when my VM boots up, it still has the ubuntu-base hostname from the disk image. It looks like cloud-init did run, but my values for this VM did not get passed in.

Fixing cloud-init

I dig in a bit. CBSD uses the NoCloud data source, passing the generated cloud-init files into the VM at creation time through a mounted ISO image. I assume CBSD created the image and the mount correctly, as I've had no problems booting other cloud images. Even though my installation of Ubuntu had automatically installed and enabled cloud-init (remember, I checked!), it does not seem to read the configuration on the mounted image.

It looks like I need to add a NoCloud block to the configuration in /etc/cloud/cloud.cfg. I destroy this VM and start my ubuntu-base VM to update my reference image. Then I generate a new raw image and create a new cloud VM from that. Yay, that has my new hostname, but it is still using DHCP to configure the network interface, instead of using the static IP assigned to the VM.

It turns out the Ubuntu installer disables network configuration by cloud-init, hinted at by the log entry "network config disabled by system_cfg" in /var/log/cloud-init.log. I remove the files /etc/cloud/cloud.cfg.d/99_installer.cfg, /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg, and /etc/netplan/00-installer-config.yaml, then generate a new disk image yet again. This time, the VM boots with the static IP address I configured in CBSD.
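
Putting the cloud-init fixes together, the changes inside the ubuntu-base VM amount to something like this sketch (the drop-in file name 90_nocloud.cfg is my hypothetical choice; appending the same line to /etc/cloud/cloud.cfg also works):

# 1. make the NoCloud datasource eligible, so cloud-init reads the
#    config ISO that CBSD attaches at boot
echo 'datasource_list: [ NoCloud, None ]' | sudo tee /etc/cloud/cloud.cfg.d/90_nocloud.cfg
# 2. remove the installer artifacts that disable cloud-init networking
sudo rm /etc/cloud/cloud.cfg.d/99_installer.cfg \
        /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg \
        /etc/netplan/00-installer-config.yaml
# 3. reset cloud-init state so it runs fresh on the next boot
sudo cloud-init clean --logs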

Bridge Redux

Ok, great. Now I go to ssh into the new VM (up until now, I had been connecting to the console via VNC), but… it hangs. I double check the bridge1 interface, and yes, it has the 10.0.0.0 inet alias. From the ubuntu-test1 console, I can ssh to the hypervisor’s 192.168.0.0/24 IP address just fine.

This one was a head-scratcher, but finally I realized ubuntu-test1 would not let me add the bridge's 10.0.0.0 address as the default route: Network is unreachable. (That makes sense in hindsight: 10.0.0.0 is the network address of the /8 block, not a usable host address.) This same address had totally worked with my Debian VM. (Never mind, I had actually been setting the VM's own IP as the gateway, which did work. I still need to reread Stevens Vol. 1.) Finally I try giving the bridge a different IP address in the 10.0.0.0/8 block, and lo, that worked. Ok, whatever.
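
On the hypervisor, the fix is just swapping the alias (a sketch; 10.0.0.1 stands in for whichever host address you pick):

ifconfig bridge1 inet 10.0.0.0 -alias
ifconfig bridge1 inet 10.0.0.1/8 alias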

Cookie Cutters

I want to test one more thing: whether I can use a single .jconf template for creating multiple VMs in CBSD. The .jconf file I had generated and used for ubuntu-test1 had all its values hardcoded, but I need something more convenient for passing the handful of settings that need per-VM values, like the VM name and the static IP address. I edit my file, commenting out all the values that contain the VM's name (some of these are derived, such as local paths for VM state). Then I create the VM from the template, passing the unique values on the command line.
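
The edited template ends up looking roughly like this excerpt. jconf files are sh-style variable assignments; jname, ci_jname, and ci_ip4_addr are the parameters I pass to bcreate below, while the remaining values are illustrative:

# ubuntu-base.jconf (excerpt): per-VM values commented out, supplied
# on the bcreate command line instead
#jname="ubuntu-test1";
#ci_jname="ubuntu-test1";
#ci_ip4_addr="10.0.0.2";
imgsize="10g";
vm_ram="2g";
vm_cpus="1";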

root@nucklehead:~ # cbsd bcreate jconf=/root/ubuntu-base.jconf jname=ubuntu-test2 ci_ip4_addr='10.0.0.3' ci_jname='ubuntu-test2'
Global VM ZFS guid: 993199076675660105
To edit VM properties use: cbsd bconfig jname=ubuntu-test2
To start VM use: cbsd bstart ubuntu-test2
To stop VM use: cbsd bstop ubuntu-test2
To remove VM use: cbsd bremove ubuntu-test2
For attach VM console use: cbsd blogin ubuntu-test2
Creating ubuntu-test2 complete: Enjoy!
auto-generate cloud-init settings: /usr/cbsd/jails-system/ubuntu-test2/cloud-init
root@nucklehead:~ # cbsd bstart ubuntu-test2
cloud-init: enabled
vm_iso_path: cloud-ubuntuserver-base-amd64-20.04.1
cloud init image initialization..
Clone cloud image into first/system vm disk (zfs clone method)
/sbin/zfs get -Ht snapshot userrefs zroot/ROOT/default/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw@boot-ubuntu-test2
Eject cloud source: media mode=detach name=cloud-ubuntuserver-base-amd64-20.04.1 path=/usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw type=iso jname=ubuntu-test2
UPDATE media SET jname='-' WHERE jname="ubuntu-test2" AND name="cloud-ubuntuserver-base-amd64-20.04.1" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-ubuntuserver-base-amd64-20.04.1.raw"
vm_iso_path: changed
Detach to: ubuntu-test2
All CD/ISO ejected: ubuntu-test2
VRDP is enabled. VNC bind/port: 192.168.0.7:5902
For attach VM console, use: vncviewer 192.168.0.7:5902
Resolution: 1024×768.
em0
bhyve renice: 1
Execute master script: cloud_init_set_netname.sh
:: /usr/cbsd/jails-system/ubuntu-test2/master_prestart.d/cloud_init_set_netname.sh
Waiting for PID.
PID: 49948
root@nucklehead:~ # ssh -i /usr/cbsd/.ssh/id_rsa ubuntu@10.0.0.3
The authenticity of host '10.0.0.3 (10.0.0.3)' can't be established.
ECDSA key fingerprint is SHA256:kpTcpbEQmmvKZD3WIp5f3mT906mW34iwxsi1YK/GQhI.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.3' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-54-generic x86_64)
[…]
ubuntu@ubuntu-test2:~$

Command and Ctl

Doing Kubernetes the Hard Way requires several command-line tools. Fortunately, people have already contributed FreeBSD ports for all of them, so I start compiling; the build commands follow the list. (I did not check whether pre-compiled packages were available.)

  • sysutils/tmux
  • security/cfssl (includes cfssl and cfssljson utilities)
  • sysutils/kubectl — as I'm building this on FreeBSD 13.0-CURRENT, the port defaults to version 1.19.4, but that probably won't pose a problem
  • devel/etcd34 — I don't need to run the etcd service on the hypervisor, but I do need the etcdctl tool. (etcdctl isn't listed in the tutorial's prerequisites, but I read ahead. Yes, I was That Kid.)
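
Building from ports is the usual make dance; pkg install with the matching package names would likely work too, if pre-built packages exist for your release:

cd /usr/ports/sysutils/tmux && make install clean
cd /usr/ports/security/cfssl && make install clean
cd /usr/ports/sysutils/kubectl && make install clean
cd /usr/ports/devel/etcd34 && make install clean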

NAT Done Yet

Oh, there's just one small detail. My VMs on the 10.0.0.0/8 network can connect to the hypervisor and vice versa, but my router doesn't know about that private network or how to route it. I could add a static route on the router pointing the network at FreeBSD's "public" IP, but I would rather find a more portable and FreeBSD-centric solution: a NAT (Network Address Translation) service. I can use FreeBSD's firewall, ipfw(4), to provide in-kernel NAT.

The ipfw module's defaults make it pretty easy to shoot yourself in the foot, because it blocks all traffic as soon as it loads, which would include, say, your ssh connection. I need to disable that deny-all behavior before loading the kernel modules, and then add the NAT and its routing rules.

root@nucklehead:~ # kenv net.inet.ip.fw.default_to_accept=1
net.inet.ip.fw.default_to_accept="1"
root@nucklehead:~ # kldload ipfw ipfw_nat
root@nucklehead:~ # sysctl net.inet.ip.fw.enable
net.inet.ip.fw.enable: 1
root@nucklehead:~ # sysctl net.inet.ip.forwarding=1
net.inet.ip.forwarding: 1 -> 1
root@nucklehead:~ # sysctl net.inet6.ip6.forwarding=1
net.inet6.ip6.forwarding: 1 -> 1
root@nucklehead:~ # sysctl net.inet.tcp.tso=0
net.inet.tcp.tso: 1 -> 0
root@nucklehead:~ # ipfw -q nat 1 config if em0 same_ports unreg_only reset
root@nucklehead:~ # ipfw disable one_pass
root@nucklehead:~ # ipfw add 1 allow ip from any to any via lo0
00001 allow ip from any to any via lo0
root@nucklehead:~ # ipfw add 100 reass all from any to any in
00100 reass ip from any to any in
root@nucklehead:~ # ipfw add 101 check-state
00101 check-state :default
root@nucklehead:~ # ipfw add 105 nat 1 ip from 10.0.0.0/8 to any out via em0
00105 nat 1 ip from 10.0.0.0/8 to any out via em0
root@nucklehead:~ # ipfw add 110 nat 1 ip from any to any in via em0
00110 nat 1 ip from any to any in via em0
root@nucklehead:~ # ipfw show
00001 0 0 allow ip from any to any via lo0
00100 3024 691781 reass ip from any to any in
00101 0 0 check-state :default
00105 272 20364 nat 1 ip from 10.0.0.0/8 to any out via em0
00110 1182 294309 nat 1 ip from any to any in via em0
65535 87877 36185708 allow ip from any to any
root@nucklehead:~ #

Now my VMs can make connections to the Internet, which is useful for installing stuff and that sort of thing.
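
To make the NAT survive a reboot, the same settings would move into the boot configuration, roughly like this sketch (/etc/ipfw.rules is a hypothetical script containing the ipfw commands above):

# /boot/loader.conf
net.inet.ip.fw.default_to_accept="1"

# /etc/rc.conf
gateway_enable="YES"           # sets net.inet.ip.forwarding=1
firewall_enable="YES"
firewall_nat_enable="YES"
firewall_script="/etc/ipfw.rules"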


With that list of tasks done, I should be ready to start creating my Kubernetes cluster from the (virtual) machine up. My next post will start from there.
