Adventures in Freebernetes Tutorial: Build Your Own Bare-VM Kubernetes Cluster the Hard Way

Page 3: Compute Resources

  • 3.0 Clone the Freebernetes Repo
  • 3.1 Choose Your Virtual Network Layout
  • 3.2 Create the Linux VMs
  • 3.3 Configure Networking
  • 3.4 Configure Local DNS
    In this section, we will perform the bulk of our FreeBSD-specific infrastructure configuration; you won’t need to follow any specific steps from the original tutorial here. There are a lot of steps, but this is where most of the FreeBSD customization work happens.

    3.0 Clone the Freebernetes Repo

    You can find most of the custom files used here in https://github.com/kbruner/freebernetes. This tutorial assumes you’ve cloned it to ~/src/freebernetes.

    3.1 Choose Your Virtual Network Layout

    This tutorial will use the private (RFC1918) IPv4 CIDR blocks and addresses from the original tutorial, where appropriate. These are all in the 10.0.0.0/8 block; if you have a collision in your existing network, you can use another block. Note: If you do use different IP blocks, you will need to make a number of changes to files and commands.

    I very strongly recommend using these network allocations if at all possible, both to avoid having to make frequent edits and because it will be less confusing if something does not work in the cluster.
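For reference, these are the allocations this tutorial uses (they correspond to the bridge aliases configured in section 3.3.1); keep them unless 10.0.0.0/8 collides with your existing network:

```shell
# Network allocations used throughout this tutorial
NODE_NET="10.240.0.0/24"    # controller and worker VM addresses
POD_NET="10.200.0.0/16"     # pod network; each worker gets one /24 slice
SERVICE_NET="10.32.0.0/24"  # Kubernetes service cluster IP range
DNS_IP="10.0.0.1"           # local DNS endpoint on the host bridge
echo "$NODE_NET $POD_NET $SERVICE_NET $DNS_IP"
```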

    3.1.1 Pick a .local Zone for DNS

    This zone just needs to resolve locally on the FreeBSD host. I’m going with hardk8s.local because who doesn’t like a bad pun?

    3.2 Create the Linux VMs

    3.2.1 Initialize CBSD

    If you haven’t run CBSD on your FreeBSD host before, you will need to set it up. You can use the seed file at ~/src/freebernetes/harder-way/cbsd/initenv.conf. Edit it first to set node_name to your FreeBSD host’s name and to change jnameserver and nodeippool if you are using a private network other than 10.0.0.0/8.
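For reference, the fields to review in initenv.conf look roughly like this; the values shown are placeholders, not the repo's actual contents, so substitute your own hostname and network:

```
# initenv.conf fields to review (placeholder values -- edit for your host)
node_name="your-freebsd-hostname"   # must match your FreeBSD host's name
jnameserver="10.0.0.1"              # change if not using 10.0.0.0/8
nodeippool="10.0.0.0/8"             # change if using a different private block
```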

    ~ # sysrc cbsd_workdir="/usr/cbsd"
    cbsd_workdir: -> /usr/cbsd
    ~ # cp src/freebernetes/harder-way/cbsd/initenv.conf .
    ~ # /usr/local/cbsd/sudoexec/initenv inter=0 `pwd`/initenv.conf # need full path for initenv.conf ¯\_(ツ)_/¯
    [ lots of output ]
    ~ # grep cbsd /etc/rc.conf
    cbsd_workdir="/usr/cbsd"
    cbsdrsyncd_enable="YES"
    cbsdrsyncd_flags="--config=/usr/cbsd/etc/rsyncd.conf"
    cbsdd_enable="YES"
    ~ # service cbsdrsyncd stop
    ~ # sysrc -x cbsdrsyncd_enable
    ~ # sysrc -x cbsdrsyncd_flags
    ~ # grep cbsd /etc/rc.conf
    cbsd_workdir="/usr/cbsd"
    cbsdd_enable="YES"
    sysrc cbsd_workdir="/usr/cbsd"
    cp src/freebernetes/harder-way/cbsd/initenv.conf .
    /usr/local/cbsd/sudoexec/initenv inter=0 `pwd`/initenv.conf # need full path for initenv.conf ¯\_(ツ)_/¯
    service cbsdrsyncd stop
    sysrc -x cbsdrsyncd_enable
    sysrc -x cbsdrsyncd_flags

    Note that CBSD version 12.2.3 seems to have a bug where it enables cbsdrsyncd even if you configure it for one node and one SQL replica only. That’s why we are disabling it and stopping the service. (Not critical, but I get annoyed by random, unused services hanging around.)

    3.2.2 Configure VM Profile

    We will use the existing CBSD cloud image for Ubuntu Linux 20.04, but we want to create our own profile (VM configuration settings and options). Copy ~/src/freebernetes/harder-way/cbsd/usr.cbsd/etc/defaults/vm-linux-cloud-ubuntuserver-kubernetes-base-amd64-20.04.conf to /usr/cbsd/etc/defaults/.

    This profile uses a custom cloud-init field, ci_pod_cidr, to pass a different address block to each worker node, which they will use to assign unique IP addresses to pods. As CBSD does not know about this setting and does not support ad-hoc parameters, we’re going to update the bhyve VM creation script and use our own cloud-init template.
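To make the addressing scheme concrete: ci_pod_cidr carves one /24 per worker out of the 10.200.0.0/16 pod network, which a quick shell sketch shows:

```shell
# Each worker node gets its own /24 pod subnet out of 10.200.0.0/16
pod_cidr() {
    printf '10.200.%s.0/24' "$1"
}
for i in 0 1 2; do
    echo "worker-$i pod CIDR: $(pod_cidr "$i")"
done
```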

    cp -rp ~/src/freebernetes/harder-way/cbsd/usr.local.cbsd/ /usr/local/cbsd
    cp ~/src/freebernetes/harder-way/cbsd/usr.cbsd/etc/defaults/vm-linux-cloud-ubuntuserver-kubernetes-base-amd64-20.04.conf /usr/cbsd/etc/defaults/

    3.2.3 Create VMs

    Copy ~/src/freebernetes/harder-way/cbsd/instance.jconf and update ci_gw4, ci_nameserver_search, and ci_nameserver_address as needed. If you want to set a password for the ubuntu user so you can log in on the console via VNC, you can assign it to ci_user_pw_user, but note that this is a plain-text field.

    When you run cbsd bcreate, if CBSD does not have a local copy of the installation image, it will prompt you to download it. On subsequent runs, it will re-use the local image.

    ~ # for i in 0 1 2; do
    cbsd bcreate jconf=/root/instance.jconf jname="controller-$i" \
    ci_ip4_addr="10.240.0.1${i}/8" ci_jname="controller-$i" \
    ci_fqdn="controller-${i}.hardk8s.local" ip_addr="10.240.0.1${i}" \
    imgsize="30g" vm_cpus="2" vm_ram="8g"
    done
    Global VM ZFS guid: 18240384265212679365
    To edit VM properties use: cbsd bconfig jname=controller-0
    To start VM use: cbsd bstart controller-0
    To stop VM use: cbsd bstop controller-0
    To remove VM use: cbsd bremove controller-0
    For attach VM console use: cbsd blogin controller-0
    Creating controller-0 complete: Enjoy!
    auto-generate cloud-init settings: /usr/cbsd/jails-system/controller-0/cloud-init
    [ similar output for controller-1 and controller-2 ]
    ~ # for i in 0 1 2; do cbsd bstart jname="controller-$i"; done
    cloud-init: enabled
    Looks like /usr/cbsd/vm/controller-0/dsk1.vhd is empty.
    May be you want to boot from CD?
    [yes(1) or no(0)]
    1
    Temporary boot device: cd
    vm_iso_path: 0
    No such media: /usr/cbsd/src/iso/cbsd-cloud-cloud-Ubuntu-x86-20.04.1.raw in /usr/cbsd/src/iso
    Shall i download it from: https://mirror.bsdstore.ru/cloud/?
    [yes(1) or no(0)]
    1
    Download to: /usr/cbsd/src/iso/cbsd-cloud-cloud-Ubuntu-x86-20.04.1.raw
    [ download output skipped ]
    Eject cloud source: media mode=detach name=cloud-ubuntu-x86-20.04.1 path=/usr/cbsd/src/iso/cbsd-cloud-cloud-Ubuntu-x86-20.04.1.raw type=iso jname=controller-0
    DELETE FROM media WHERE name="cloud-ubuntu-x86-20.04.1" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-Ubuntu-x86-20.04.1.raw" AND jname="controller-0"
    vm_iso_path: changed
    Detach to: controller-0
    All CD/ISO ejected: controller-0
    VRDP is enabled. VNC bind/port: 127.0.0.1:5900
    For attach VM console, use: vncviewer 127.0.0.1:5900
    Resolution: 1024×768.
    bhyve renice: 1
    Waiting for PID.
    PID: 76286
    CBSD setup: bhyve ipfw counters num: 99/100
    [ similar output for controller-1 and controller-2 ]
    ~ # for i in 0 1 2; do
    cbsd bcreate jconf=/root/instance.jconf jname="worker-$i" \
    ci_ip4_addr="10.240.0.2${i}/24" ci_jname="worker-$i" \
    ci_fqdn="worker-${i}.hardk8s.local" ip_addr="10.240.0.2${i}" \
    ci_pod_cidr="10.200.${i}.0/24"
    done
    Global VM ZFS guid: 11960195993976622261
    To edit VM properties use: cbsd bconfig jname=worker-0
    To start VM use: cbsd bstart worker-0
    To stop VM use: cbsd bstop worker-0
    To remove VM use: cbsd bremove worker-0
    For attach VM console use: cbsd blogin worker-0
    Creating worker-0 complete: Enjoy!
    auto-generate cloud-init settings: /usr/cbsd/jails-system/worker-0/cloud-init
    [ similar output for worker-1 and worker-2 ]
    ~ # for i in 0 1 2; do cbsd bstart jname="worker-$i"; done
    [ similar output to controllers above ]
    ~ # cbsd bls
    JNAME JID VM_RAM VM_CURMEM VM_CPUS PCPU VM_OS_TYPE IP4_ADDR STATUS VNC
    controller-0 28798 8192 511 2 10 linux 10.240.0.10 On 127.0.0.1:5900
    controller-1 30446 8192 506 2 14 linux 10.240.0.11 On 127.0.0.1:5901
    controller-2 32153 8192 511 2 28 linux 10.240.0.12 On 127.0.0.1:5902
    worker-0 8967 4096 868 1 0 linux 10.240.0.20 On 127.0.0.1:5903
    worker-1 10657 4096 997 1 27 linux 10.240.0.21 On 127.0.0.1:5904
    worker-2 12555 4096 975 1 28 linux 10.240.0.22 On 127.0.0.1:5905
    # Prepare controller VMs — does not boot the VM
    for i in 0 1 2; do
    cbsd bcreate jconf=/root/instance.jconf jname="controller-$i" \
    ci_ip4_addr="10.240.0.1${i}/8" ci_jname="controller-$i" \
    ci_fqdn="controller-${i}.hardk8s.local" ip_addr="10.240.0.1${i}" \
    imgsize="30g" vm_cpus="2" vm_ram="8g"
    done
    # Boot the controller VMs
    for i in 0 1 2; do cbsd bstart jname="controller-$i"; done
    # Prepare the worker VMs — note the additional ci_pod_cidr parameter
    # Disk, cpu, and RAM settings use defaults in instance.jconf
    for i in 0 1 2; do
    cbsd bcreate jconf=/root/instance.jconf jname="worker-$i" \
    ci_ip4_addr="10.240.0.2${i}/24" ci_jname="worker-$i" \
    ci_fqdn="worker-${i}.hardk8s.local" ip_addr="10.240.0.2${i}" \
    ci_pod_cidr="10.200.${i}.0/24"
    done
    # Boot the worker VMs
    for i in 0 1 2; do cbsd bstart jname="worker-$i"; done

    3.3 Configure Networking

    Note that you cannot yet connect to the VMs. CBSD creates a bridge interface the first time you create a VM, and we need to add our gateways to that interface. In most cases, CBSD will use the bridge1 interface.

    The 10.0.0.1/32 alias does not act as a gateway for any of the subnets; we use it as the DNS server endpoint for the virtual networks.

    3.3.1 Add Bridge Gateways

    ~ # ifconfig bridge1 alias 10.0.0.1/32
    ~ # ifconfig bridge1 alias 10.32.0.1/24
    ~ # ifconfig bridge1 alias 10.200.0.1/16
    ~ # ifconfig bridge1 alias 10.240.0.1/24
    ~ # ifconfig bridge1
    bridge1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    description: em0
    ether 58:9c:fc:10:ff:b8
    inet 10.0.0.1 netmask 0xff000000 broadcast 10.255.255.255
    inet 10.32.0.1 netmask 0xffffff00 broadcast 10.32.0.255
    inet 10.200.0.1 netmask 0xffff0000 broadcast 10.200.255.255
    inet 10.240.0.1 netmask 0xffffff00 broadcast 10.240.0.255
    id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
    maxage 20 holdcnt 6 proto stp-rstp maxaddr 2000 timeout 1200
    root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
    member: tap4 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
    ifmaxaddr 0 port 7 priority 128 path cost 2000000
    member: tap3 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
    ifmaxaddr 0 port 6 priority 128 path cost 2000000
    member: tap2 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
    ifmaxaddr 0 port 5 priority 128 path cost 2000000
    member: tap7 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
    ifmaxaddr 0 port 10 priority 128 path cost 2000000
    member: tap6 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
    ifmaxaddr 0 port 9 priority 128 path cost 2000000
    member: tap5 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
    ifmaxaddr 0 port 8 priority 128 path cost 2000000
    member: tap1 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
    ifmaxaddr 0 port 4 priority 128 path cost 2000000
    member: em0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
    ifmaxaddr 0 port 1 priority 128 path cost 20000
    groups: bridge
    nd6 options=9<PERFORMNUD,IFDISABLED>
    ifconfig bridge1 alias 10.0.0.1/32
    ifconfig bridge1 alias 10.32.0.1/24
    ifconfig bridge1 alias 10.200.0.1/16
    ifconfig bridge1 alias 10.240.0.1/24

    Note that these changes will not survive across reboots. I have not tested if adding a persistent entry for bridge1 in /etc/rc.conf would work as expected with CBSD.
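For reference, the stock FreeBSD way to persist interface aliases would be /etc/rc.conf entries along these lines; again, I have not verified that this plays nicely with CBSD’s bridge management, so treat it as an untested sketch:

```
# /etc/rc.conf -- UNTESTED with CBSD; standard FreeBSD alias persistence
ifconfig_bridge1_alias0="inet 10.0.0.1/32"
ifconfig_bridge1_alias1="inet 10.32.0.1/24"
ifconfig_bridge1_alias2="inet 10.200.0.1/16"
ifconfig_bridge1_alias3="inet 10.240.0.1/24"
```

Note that rc.conf alias entries must be numbered consecutively starting at 0, or later entries are silently ignored.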

    3.3.2 Configure NAT

    We can reach our VMs just fine from the host, but the VMs can’t talk to the Internet, because only the FreeBSD host can route to this 10.0.0.0/8 block. We will use ipfw as a NAT (Network Address Translation) service. These steps enable ipfw with open firewall rules and then configure the NAT. The changes take effect immediately and persist across reboots.

    Note that my host’s physical interface is named em0. You may have to alter some commands if yours has a different name.

    ~ # kenv net.inet.ip.fw.default_to_accept=1
    net.inet.ip.fw.default_to_accept="1"
    ~ # echo net.inet.ip.fw.default_to_accept=1 >> /boot/loader.conf
    ~ # sysrc firewall_enable="YES"
    firewall_enable: NO -> YES
    ~ # sysrc gateway_enable="YES"
    gateway_enable: NO -> YES
    ~ # sysrc firewall_nat_enable="YES"
    firewall_nat_enable: NO -> YES
    ~ # sysctl net.inet.tcp.tso=0
    net.inet.tcp.tso: 0 -> 0
    ~ # echo net.inet.tcp.tso="0" >> /etc/sysctl.conf
    ~ # service ipfw start
    Firewall logging enabled.
    ~ # ipfw disable one_pass
    ~ # ipfw -q nat 1 config if em0 same_ports unreg_only reset
    ~ # sysctl net.inet.ip.fw.enable=1
    net.inet.ip.fw.enable: 0 -> 1
    ~ # sysctl net.inet.ip.forwarding=1
    net.inet.ip.forwarding: 0 -> 1
    ~ # sysctl net.inet6.ip6.forwarding=1
    net.inet6.ip6.forwarding: 0 -> 1
    ~ # ipfw add 1 allow ip from any to any via lo0
    00001 allow ip from any to any via lo0
    ~ # ipfw add 200 reass all from any to any in
    00200 reass ip from any to any in
    ~ # ipfw add 201 check-state
    00201 check-state :default
    ~ # ipfw add 205 nat 1 ip from 10.0.0.0/8 to any out via em0
    00205 nat 1 ip from 10.0.0.0/8 to any out via em0
    ~ # ipfw add 210 nat 1 ip from any to any in via em0
    00210 nat 1 ip from any to any in via em0
    ~ # ipfw show
    00001 0 0 allow ip from any to any via lo0
    00200 2689 197170 reass ip from any to any in
    00201 0 0 check-state :default
    00205 0 0 nat 1 ip from 10.0.0.0/8 to any out via em0
    00210 46 3188 nat 1 ip from any to any in via em0
    65535 106815 10861896 allow ip from any to any
    kenv net.inet.ip.fw.default_to_accept=1
    echo net.inet.ip.fw.default_to_accept=1 >> /boot/loader.conf
    sysrc firewall_enable="YES"
    sysrc gateway_enable="YES"
    sysrc firewall_nat_enable="YES"
    sysctl net.inet.tcp.tso=0
    echo net.inet.tcp.tso="0" >> /etc/sysctl.conf
    service ipfw start
    ipfw disable one_pass
    ipfw -q nat 1 config if em0 same_ports unreg_only reset
    sysctl net.inet.ip.fw.enable=1
    sysctl net.inet.ip.forwarding=1
    sysctl net.inet6.ip6.forwarding=1
    ipfw add 1 allow ip from any to any via lo0
    ipfw add 200 reass all from any to any in
    ipfw add 201 check-state
    ipfw add 205 nat 1 ip from 10.0.0.0/8 to any out via em0
    ipfw add 210 nat 1 ip from any to any in via em0

    3.4 Configure Local DNS

    We need a way to resolve our VM hostnames: pick a private .local DNS domain, configure an authoritative server for that domain, and then set up a local caching resolver that knows about our domain but can still resolve external addresses. We will follow this nsd/unbound tutorial closely.

    3.4.1 Enable unbound for recursive/caching DNS

    FreeBSD ships unbound, a recursive, caching (non-authoritative) DNS resolver, in the base system. It will use the nameservers configured in the local /etc/resolv.conf for external address lookups and the local nsd service (configured next) for lookups in our private zone. Copy unbound.conf and edit the IP addresses or local zone name as necessary.
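The repo’s unbound.conf is authoritative; as a rough, assumed sketch of the settings that matter here (so you know what to change for a different zone or addresses), the relevant stanzas look something like this:

```
# Assumed sketch of the relevant unbound.conf settings -- see the repo copy
server:
    interface: 127.0.0.1
    interface: 10.0.0.1
    private-domain: "hardk8s.local"   # allow RFC1918 addresses in answers
    do-not-query-localhost: no        # required to query the nsd stub below

stub-zone:
    name: "hardk8s.local"
    stub-addr: 127.0.0.1@53530        # nsd listens here (see 3.4.2)
```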

    You will also want to update the FreeBSD host’s /etc/resolv.conf to add your local domain to the search list and add an entry for nameserver 127.0.0.1.
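The resulting /etc/resolv.conf on the FreeBSD host would look something like this, assuming the hardk8s.local zone:

```
# /etc/resolv.conf on the FreeBSD host
search hardk8s.local
nameserver 127.0.0.1
```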

    cp ~/src/freebernetes/harder-way/dns/unbound/unbound.conf /etc/unbound/unbound.conf
    sysrc local_unbound_enable="YES"
    service local_unbound start

    3.4.2 Configure the Authoritative DNS Service

    We will use nsd, a lightweight, authoritative-only service, for our local zone. After copying the files, you can edit/rename the copied files before proceeding to make changes as necessary to match your local domain or IP addresses.
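The zone data in the repo is authoritative; purely as a hypothetical sketch (names and addresses taken from the VM inventory above), a zone file for hardk8s.local would look roughly like this:

```
; Hypothetical hardk8s.local zone file sketch -- the repo files are
; authoritative; shown so you know what to edit for a different domain or IPs
$ORIGIN hardk8s.local.
$TTL 3600
@             IN SOA ns.hardk8s.local. admin.hardk8s.local. (
                  2020120701 3600 900 604800 3600 )
              IN NS  ns.hardk8s.local.
ns            IN A   10.0.0.1
controller-0  IN A   10.240.0.10
controller-1  IN A   10.240.0.11
controller-2  IN A   10.240.0.12
worker-0      IN A   10.240.0.20
worker-1      IN A   10.240.0.21
worker-2      IN A   10.240.0.22
```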

    ~ # mkdir -p /var/nsd/var/db/nsd /var/nsd/var/run /var/nsd/var/log /var/nsd/tmp
    ~ # chown -R nsd:nsd /var/nsd
    ~ # sysrc nsd_enable="YES"
    nsd_enable: -> YES
    ~ # sysrc nsd_config="/var/nsd/nsd.conf"
    nsd_config: -> /var/nsd/nsd.conf
    ~ # cp ~/src/freebernetes/harder-way/dns/nsd/* /var/nsd/
    ~ # nsd-control-setup -d /var/nsd
    setup in directory /var/nsd
    Generating RSA private key, 3072 bit long modulus (2 primes)
    ................++++
    ...........................................................................++++
    e is 65537 (0x010001)
    Generating RSA private key, 3072 bit long modulus (2 primes)
    .++++
    ..................++++
    e is 65537 (0x010001)
    Signature ok
    subject=CN = nsd-control
    Getting CA Private Key
    removing artifacts
    Setup success. Certificates created. Enable in nsd.conf file to use
    ~ # nsd-control -c /var/nsd/nsd.conf start
    [2020-12-07 19:20:00.892] nsd[18116]: notice: nsd starting (NSD 4.3.3)
    [2020-12-07 19:20:00.892] nsd[18116]: notice: listen on ip-address 127.0.0.1@53530 (udp) with server(s): *
    [2020-12-07 19:20:00.892] nsd[18116]: notice: listen on ip-address 127.0.0.1@53530 (tcp) with server(s): *
    mkdir -p /var/nsd/var/db/nsd /var/nsd/var/run /var/nsd/var/log /var/nsd/tmp
    chown -R nsd:nsd /var/nsd
    sysrc nsd_enable="YES"
    sysrc nsd_config="/var/nsd/nsd.conf"
    cp ~/src/freebernetes/harder-way/dns/nsd/* /var/nsd/
    nsd-control-setup -d /var/nsd
    nsd-control -c /var/nsd/nsd.conf start


    3 thoughts on “Adventures in Freebernetes Tutorial: Build Your Own Bare-VM Kubernetes Cluster the Hard Way”


    1. What a fantastic and interesting job you’ve done! I will definitely try it!
      Question: as far as I understand, you are not using any Kubernetes CNI (Calico, Flannel, …). How does your cluster work with multiple nodes (pod IP addresses, connectivity)?


      1. It is actually using a CNI plugin (https://github.com/containernetworking/plugins) although it just creates a basic bridge for the container network. Most CNI plugins should work fine on this cluster, which does actually have three worker nodes, and I’ve tested pod connectivity between nodes. A simple test for full CNI functionality would be to install Calico and test a NetworkPolicy.

