Step-by-step tutorial for manually deploying a Kubernetes cluster on FreeBSD bhyve VMs
See all posts in the FreeBSD Virtualization Series
- Overview
- Page 1: Prerequisites
- Page 2: Installing Client Tools
- Page 3: Compute Resources
- Page 4: Provisioning a CA and Generating TLS Certificates
- Page 5: Generating Kubernetes Configuration Files for Authentication
- Page 6: Generating the Data Encryption Config and Key
- Page 7: Bootstrapping the etcd Cluster
- Page 8: Bootstrapping the Kubernetes Control Plane
- Page 9: Bootstrapping the Kubernetes Worker Nodes
- Page 10: Configuring kubectl for Remote Access
- Page 11: Provisioning Pod Network Routes
- Page 12: Deploying the DNS Cluster Add-on
- Page 13: Smoke Test
- Page 14: Cleaning Up
- Sources / References
Overview
This tutorial will take you step by step through setting up a fully functional Kubernetes cluster on bhyve virtual machines (VMs) running on a single FreeBSD host/hypervisor. We’ll be following Kelsey Hightower’s Kubernetes the Hard Way tutorial, based on Kubernetes version 1.18.6, and adapting it for the FreeBSD environment. While the VM guests that make up the Kubernetes cluster run Ubuntu Linux, we will use native FreeBSD functionality and tools as much as possible for the virtual infrastructure underneath: not only the virtualization platform itself, but also the cluster’s virtual network and support services.
Topics covered:
- Setting up bhyve virtualization
- Creating a custom CBSD configuration for creating our cluster’s VMs
- Configuring the FreeBSD firewall, DNS, and routing for cluster networking
You can find custom files and examples in my freebernetes repo.
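If you want a feel for what the bhyve and CBSD setup involves before diving in, here is a minimal sketch of preparing the FreeBSD host. It assumes the vmm kernel module and the sysutils/cbsd package; the exact steps, and CBSD’s interactive prompts, are covered in detail later in the series.

```sh
# Minimal host-preparation sketch; run as root on the FreeBSD hypervisor.

# Load the bhyve kernel module now, and have it load automatically at boot.
kldload vmm
sysrc -f /boot/loader.conf vmm_load="YES"

# Install CBSD (the bhyve/jail management framework used in this series)
# and run its first-time initialization. The workdir path is an example;
# any ZFS-backed directory will do.
pkg install -y cbsd
env workdir="/usr/jails" /usr/local/cbsd/sudoexec/initenv
```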
This tutorial is based on a series of meandering posts I wrote about my original experiments working through the tutorial on top of FreeBSD. You can read them starting here.
Caveats
Note that Kubernetes the Hard Way is one of the most manual ways to create a Kubernetes cluster, so while it’s great for understanding what all the pieces in a cluster are and how they fit together, it’s not the most practical method for most users.
This tutorial covers installing the cluster on a single FreeBSD host. You will end up with a fully-functional cluster, suitable for learning how to use Kubernetes, testing applications, running your containers, etc. However, it’s not suitable for most production uses, because of a lack of redundancy and security hardening.
This tutorial is not a Kubernetes user tutorial. It won’t spend time defining terms or providing deep explanations of concepts. For that, you should start with the official Kubernetes documentation.
Intended Audience
For this tutorial, you don’t need to know anything about Kubernetes. You do need to have a host with FreeBSD installed; an understanding of basic FreeBSD system administration tasks, such as installing software from FreeBSD ports or packages and loading kernel modules; and familiarity with csh or sh. Experience with FreeBSD bhyve virtual machines and the CBSD interface is useful but not required.
Host (FreeBSD Hypervisor) Requirements
- Hardware, physical or virtual
- CPU
- The guest VMs have a shared total of 12 “CPUs” between them, but these do not have to map to actual host CPUs. My system has a total of 8 cores, for example.
- CPUs must support FreeBSD bhyve virtualization (see the FreeBSD Handbook page on bhyve for compatible CPUs); a quick way to check is sketched after this list
- RAM: at least 18 GB; 30 GB+ preferred
- Free disk space: at least 100 GB
- Operating system
- FreeBSD: 13.0-CURRENT. It may work with 12.0-RELEASE, but it has not been tested
- File system: ZFS. It may work with UFS with some user modifications
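If you want to confirm that a host meets these requirements before starting, the checks below sketch one way to do it. The grep patterns and thresholds are illustrative assumptions; the FreeBSD Handbook’s bhyve chapter describes the required CPU flags in more detail.

```sh
# Rough host checks; adjust the patterns to your hardware.

# bhyve wants POPCNT plus hardware virtualization with EPT/unrestricted
# guest (Intel) or AMD-V/RVI (AMD). Look for the flags in the boot messages.
grep -E 'Features2=.*POPCNT' /var/run/dmesg.boot
grep -E 'VT-x|AMD-V' /var/run/dmesg.boot

# Physical memory in bytes (18 GB minimum, 30 GB+ preferred).
sysctl -n hw.physmem

# Free space on the ZFS pool that will hold the VM disks (100 GB+).
zpool list
```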
Test System
- Hardware
- CPUs: Intel(R) Core(TM) i5-6260U: 4 CPUs, 2 cores each
- RAM: 32 GB
- Operating system
- FreeBSD 13.0-HEAD-f659bf1d31c
Page 1: Prerequisites
You can skip this section in Kubernetes the Hard Way.
What a fantastic and interesting job you’ve done! I will definitely try it!
Question: as far as I understand, you are not using any K8S CNI (Calico, Flannel, …). How does your cluster work with multiple nodes (pod IP addresses, connectivity)?
It is actually using a CNI plugin (https://github.com/containernetworking/plugins) although it just creates a basic bridge for the container network. Most CNI plugins should work fine on this cluster, which does actually have three worker nodes, and I’ve tested pod connectivity between nodes. A simple test for full CNI functionality would be to install Calico and test a NetworkPolicy.
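For reference, the bridge-based pod network mentioned above comes from the worker-node bootstrapping step in Kubernetes the Hard Way. The snippet below is a sketch of roughly that shape, run on each Ubuntu worker; the bridge name cnio0 and the POD_CIDR value are per-node placeholders to substitute, not values this series mandates.

```sh
# Sketch of a per-node bridge CNI config (run on each Ubuntu worker node).
# POD_CIDR is that node's pod subnet, e.g. 10.200.0.0/24 (placeholder here).
POD_CIDR="10.200.0.0/24"

cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
```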