Adventures in Freebernetes Tutorial: Build Your Own Bare-VM k3s Cluster

Part 5: Create the Cluster

  • 5.1 Install Servers
  • 5.2 Install Agents
  • 5.3 Set Up Service Load Balancing

5.1 Install Servers

We’ll create the control plane by bootstrapping the cluster on server-0, then joining server-1 and server-2 to it.

We want to load-balance requests to the Kubernetes API endpoint across the three server VMs. For true high availability, we would want a load balancer with liveness health checks. For this tutorial, though, we will approximate round-robin balancing with ipfw forwarding rules for a virtual IP address, 10.0.0.2.

We will use the virtual IP as the Kubernetes API endpoint while we build out the cluster. Because the initial ipfw rule forwards all VIP traffic to server-0, we can configure K3s to use that endpoint for API connections without ever hitting a server that has not yet been bootstrapped.

We also have to add the virtual IP address to the primary interface on each server VM so it will accept traffic for the VIP. This change won’t persist across reboots unless you also add the second IP address to /etc/netplan/50-cloud-init.yaml on each server VM.
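
For reference, here’s a rough sketch of that addition (merge it into the file cloud-init generated; every setting other than the extra address comes from your existing config):

# /etc/netplan/50-cloud-init.yaml -- excerpt, on each server VM
network:
  version: 2
  ethernets:
    enp0s5:
      # ... keep whatever settings cloud-init already wrote here ...
      addresses:
        - 10.0.0.2/32

Run sudo netplan apply afterward to make the change take effect.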

    Note that the ipfw firewall rules we’re adding will not persist across reboots. See the ipfw chapter about how to create and enable a firewall script.
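
As a minimal sketch of that approach (the script path here is our choice; the full ruleset belongs in the script described in that chapter):

# /etc/rc.conf on the FreeBSD host
firewall_enable="YES"
firewall_script="/etc/ipfw.rules"   # plain sh script containing your ipfw add commands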

~ # ssh -o StrictHostKeyChecking=no -i ~cbsd/.ssh/id_rsa ubuntu@10.0.10.10 sudo ip address add 10.0.0.2/32 dev enp0s5 label enp0s5:1
    ~ # ipfw add 300 fwd 10.0.10.10 ip from any to 10.0.0.2 keep-state
    00300 fwd 10.0.10.10 ip from any to 10.0.0.2 keep-state :default
~ # k3sup install \
--host server-0 \
--user ubuntu \
--cluster \
--k3s-channel stable \
--ssh-key ~cbsd/.ssh/id_rsa \
--k3s-extra-args '--cluster-cidr 10.1.0.0/16 --service-cidr 10.2.0.0/16 --cluster-dns 10.2.0.10'
    Running: k3sup install
    2020/12/25 17:18:38 server-0
    Public IP: server-0
    [INFO] Finding release for channel stable
    [INFO] Using v1.19.5+k3s2 as release
    [INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/sha256sum-amd64.txt
    [INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/k3s
    [INFO] Verifying binary download
    [INFO] Installing k3s to /usr/local/bin/k3s
    [INFO] Creating /usr/local/bin/kubectl symlink to k3s
    [INFO] Creating /usr/local/bin/crictl symlink to k3s
    [INFO] Creating /usr/local/bin/ctr symlink to k3s
    [INFO] Creating killall script /usr/local/bin/k3s-killall.sh
    [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
    [INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
    [INFO] systemd: Creating service file /etc/systemd/system/k3s.service
    [INFO] systemd: Enabling k3s unit
    Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
    [INFO] systemd: Starting k3s
    Result: [INFO] Finding release for channel stable
    [INFO] Using v1.19.5+k3s2 as release
    [INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/sha256sum-amd64.txt
    [INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/k3s
    [INFO] Verifying binary download
    [INFO] Installing k3s to /usr/local/bin/k3s
    [INFO] Creating /usr/local/bin/kubectl symlink to k3s
    [INFO] Creating /usr/local/bin/crictl symlink to k3s
    [INFO] Creating /usr/local/bin/ctr symlink to k3s
    [INFO] Creating killall script /usr/local/bin/k3s-killall.sh
    [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
    [INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
    [INFO] systemd: Creating service file /etc/systemd/system/k3s.service
    [INFO] systemd: Enabling k3s unit
    [INFO] systemd: Starting k3s
    Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
    Saving file to: /root/kubeconfig
    # Test your cluster with:
    export KUBECONFIG=/root/kubeconfig
    kubectl config set-context default
    kubectl get node -o wide
~ # k3sup join \
--host server-1 \
--user ubuntu \
--server \
--server-host kubernetes.k3s.local \
--server-user ubuntu \
--k3s-channel stable \
--ssh-key ~cbsd/.ssh/id_rsa \
--k3s-extra-args '--cluster-cidr 10.1.0.0/16 --service-cidr 10.2.0.0/16 --cluster-dns 10.2.0.10'
    Running: k3sup join
    Server IP: kubernetes.k3s.local
    K1094729103bf24c9e6fc312577915112324a2a3d940ac670f87cb7d8de8804625f::server:19433d7161596d0dac7d9ec13a5a91e3
    [INFO] Finding release for channel stable
    [INFO] Using v1.19.5+k3s2 as release
    [INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/sha256sum-amd64.txt
    [INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/k3s
    [INFO] Verifying binary download
    [INFO] Installing k3s to /usr/local/bin/k3s
    [INFO] Creating /usr/local/bin/kubectl symlink to k3s
    [INFO] Creating /usr/local/bin/crictl symlink to k3s
    [INFO] Creating /usr/local/bin/ctr symlink to k3s
    [INFO] Creating killall script /usr/local/bin/k3s-killall.sh
    [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
    [INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
    [INFO] systemd: Creating service file /etc/systemd/system/k3s.service
    [INFO] systemd: Enabling k3s unit
    Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
    [INFO] systemd: Starting k3s
    Logs: Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
    Output: [INFO] Finding release for channel stable
    [INFO] Using v1.19.5+k3s2 as release
    [INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/sha256sum-amd64.txt
    [INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/k3s
    [INFO] Verifying binary download
    [INFO] Installing k3s to /usr/local/bin/k3s
    [INFO] Creating /usr/local/bin/kubectl symlink to k3s
    [INFO] Creating /usr/local/bin/crictl symlink to k3s
    [INFO] Creating /usr/local/bin/ctr symlink to k3s
    [INFO] Creating killall script /usr/local/bin/k3s-killall.sh
    [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
    [INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
    [INFO] systemd: Creating service file /etc/systemd/system/k3s.service
    [INFO] systemd: Enabling k3s unit
    [INFO] systemd: Starting k3s
[ repeat for server-2 ]
~ # ssh -o StrictHostKeyChecking=no -i ~cbsd/.ssh/id_rsa ubuntu@10.0.10.11 sudo ip address add 10.0.0.2/32 dev enp0s5 label enp0s5:1
~ # ssh -o StrictHostKeyChecking=no -i ~cbsd/.ssh/id_rsa ubuntu@10.0.10.12 sudo ip address add 10.0.0.2/32 dev enp0s5 label enp0s5:1
    ~ # export KUBECONFIG=/root/kubeconfig
    ~ # kubectl config set-context default
    Context "default" modified.
    ~ # sed -I "" -e 's/127.0.0.1/10.0.0.2/' $KUBECONFIG
    ~ # kubectl get nodes -o wide
NAME       STATUS   ROLES         AGE     VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
server-0   Ready    etcd,master   15m     v1.19.5+k3s2   10.0.10.10    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.4.3-k3s1
server-1   Ready    etcd,master   19s     v1.19.5+k3s2   10.0.10.11    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.4.3-k3s1
server-2   Ready    etcd,master   9m20s   v1.19.5+k3s2   10.0.10.12    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.4.3-k3s1
    ~ #
    # Create VIP on server-0
ssh -o StrictHostKeyChecking=no -i ~cbsd/.ssh/id_rsa ubuntu@10.0.10.10 sudo ip address add 10.0.0.2/32 dev enp0s5 label enp0s5:1
    ipfw add 300 fwd 10.0.10.10 ip from any to 10.0.0.2 keep-state
k3sup install \
--host server-0 \
--user ubuntu \
--cluster \
--k3s-channel stable \
--ssh-key ~cbsd/.ssh/id_rsa \
--k3s-extra-args '--cluster-cidr 10.1.0.0/16 --service-cidr 10.2.0.0/16 --cluster-dns 10.2.0.10'
k3sup join \
--host server-1 \
--user ubuntu \
--server \
--server-host kubernetes.k3s.local \
--server-user ubuntu \
--k3s-channel stable \
--ssh-key ~cbsd/.ssh/id_rsa \
--k3s-extra-args '--cluster-cidr 10.1.0.0/16 --service-cidr 10.2.0.0/16 --cluster-dns 10.2.0.10'
k3sup join \
--host server-2 \
--user ubuntu \
--server \
--server-host kubernetes.k3s.local \
--server-user ubuntu \
--k3s-channel stable \
--ssh-key ~cbsd/.ssh/id_rsa \
--k3s-extra-args '--cluster-cidr 10.1.0.0/16 --service-cidr 10.2.0.0/16 --cluster-dns 10.2.0.10'
    # Create VIPs on server-1 and server-2
ssh -o StrictHostKeyChecking=no -i ~cbsd/.ssh/id_rsa ubuntu@10.0.10.11 sudo ip address add 10.0.0.2/32 dev enp0s5 label enp0s5:1
ssh -o StrictHostKeyChecking=no -i ~cbsd/.ssh/id_rsa ubuntu@10.0.10.12 sudo ip address add 10.0.0.2/32 dev enp0s5 label enp0s5:1
    export KUBECONFIG=/root/kubeconfig
    kubectl config set-context default
    # In case the server endpoint is set to localhost, we'll change it to our VIP
    sed -I "" -e 's/127.0.0.1/10.0.0.2/' $KUBECONFIG
    kubectl get nodes -o wide
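
Before continuing, one quick way to confirm the API endpoint behind the VIP is answering (our own check, not part of the original session):

# Should print "ok" if the request reaches an API server via 10.0.0.2
export KUBECONFIG=/root/kubeconfig
kubectl get --raw /healthz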

    5.2 Install Agents

~ # for i in 0 1 2; do
k3sup join \
--host agent-$i \
--user ubuntu \
--server-host kubernetes.k3s.local \
--server-user ubuntu \
--k3s-channel stable \
--ssh-key ~cbsd/.ssh/id_rsa
done
    Running: k3sup join
    Server IP: kubernetes.k3s.local
    K1094729103bf24c9e6fc312577915112324a2a3d940ac670f87cb7d8de8804625f::server:19433d7161596d0dac7d9ec13a5a91e3
    [INFO] Finding release for channel stable
    [INFO] Using v1.19.5+k3s2 as release
    [INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/sha256sum-amd64.txt
    [INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.19.5+k3s2/k3s
    [INFO] Verifying binary download
    [INFO] Installing k3s to /usr/local/bin/k3s
    [INFO] Creating /usr/local/bin/kubectl symlink to k3s
    [INFO] Creating /usr/local/bin/crictl symlink to k3s
    [INFO] Creating /usr/local/bin/ctr symlink to k3s
    [INFO] Creating killall script /usr/local/bin/k3s-killall.sh
    [INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
    [INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
    [INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
    [INFO] systemd: Enabling k3s-agent unit
    Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
    [INFO] systemd: Starting k3s-agent
    Logs: Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
    [ repeats for agent-1 and agent-2 ]
    ~ # ipfw delete 300
    ~ # ipfw add 300 prob 0.33 fwd 10.0.10.10 ip from any to 10.0.0.2 keep-state
    00300 prob 0.330000 fwd 10.0.10.10 ip from any to 10.0.0.2 keep-state :default
    ~ # ipfw add 301 prob 0.5 fwd 10.0.10.11 ip from any to 10.0.0.2 keep-state
    00301 prob 0.500000 fwd 10.0.10.11 ip from any to 10.0.0.2 keep-state :default
    ~ # ipfw add 302 fwd 10.0.10.12 ip from any to 10.0.0.2 keep-state
    00302 fwd 10.0.10.12 ip from any to 10.0.0.2 keep-state :default
    ~ # kubectl get nodes -o wide
NAME       STATUS   ROLES         AGE     VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
agent-0    Ready    <none>        2m43s   v1.19.5+k3s2   10.0.10.20    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.4.3-k3s1
agent-1    Ready    <none>        2m35s   v1.19.5+k3s2   10.0.10.21    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.4.3-k3s1
agent-2    Ready    <none>        2m26s   v1.19.5+k3s2   10.0.10.22    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.4.3-k3s1
server-0   Ready    etcd,master   88m     v1.19.5+k3s2   10.0.10.10    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.4.3-k3s1
server-1   Ready    etcd,master   73m     v1.19.5+k3s2   10.0.10.11    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.4.3-k3s1
server-2   Ready    etcd,master   82m     v1.19.5+k3s2   10.0.10.12    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.4.3-k3s1
    ~ #
for i in 0 1 2; do
k3sup join \
--host agent-$i \
--user ubuntu \
--server-host kubernetes.k3s.local \
--server-user ubuntu \
--k3s-channel stable \
--ssh-key ~cbsd/.ssh/id_rsa
done
# Remove temporary firewall rule
ipfw delete 300
# Create round-robin firewall rules: rule 300 matches 33% of new connections,
# rule 301 matches half of the remaining 67%, and rule 302 catches the rest,
# so each server receives roughly one third of incoming connections
ipfw add 300 prob 0.33 fwd 10.0.10.10 ip from any to 10.0.0.2 keep-state
ipfw add 301 prob 0.5 fwd 10.0.10.11 ip from any to 10.0.0.2 keep-state
ipfw add 302 fwd 10.0.10.12 ip from any to 10.0.0.2 keep-state
    kubectl get nodes -o wide
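
If you want to see the round-robin rules actually spreading connections, ipfw’s per-rule counters are a quick check (our addition):

# The packet/byte counters on all three rules should grow as clients hit 10.0.0.2
ipfw show 300 301 302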

    5.3 Set Up Service Load Balancing

Generally, if you want to expose a Kubernetes application endpoint on an IP address outside the cluster’s network, you create a Service object of type LoadBalancer. However, because load balancer options and implementations differ for every cloud provider and self-hosted environment, Kubernetes expects you to run a controller in your cluster to manage service load balancers. We have no such controller for our FreeBSD hypervisor, but we do have a couple of basic alternatives.

    5.3.1 Routing to NodePort Services

For Services of type NodePort, we can route directly to the Service’s virtual IP, which will be in our 10.2.0.0/16 service network block. Each service VIP is routable by every node, so if we set up round-robin forwarding rules on the hypervisor’s firewall, we should be able to reach NodePort endpoints (there’s a demo sketch after the rules below).

    Note that the ipfw firewall rules we’re adding will not persist across reboots. See the ipfw chapter about how to create and enable a firewall script.

    ~ # ipfw add 350 prob 0.333 fwd 10.0.10.20 ip from any to 10.2.0.0/16 keep-state
    00350 prob 0.333000 fwd 10.0.10.20 ip from any to 10.2.0.0/16 keep-state :default
    ~ # ipfw add 351 prob 0.5 fwd 10.0.10.21 ip from any to 10.2.0.0/16 keep-state
    00351 prob 0.500000 fwd 10.0.10.21 ip from any to 10.2.0.0/16 keep-state :default
    ~ # ipfw add 352 fwd 10.0.10.22 ip from any to 10.2.0.0/16 keep-state
    00352 fwd 10.0.10.22 ip from any to 10.2.0.0/16 keep-state :default
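
As a hedged illustration of what these rules enable (the deployment name and image are our own, not from the tutorial):

# Hypothetical demo: create a NodePort service, then reach its service IP from the FreeBSD host
kubectl create deployment echo --image=nginx
kubectl expose deployment echo --type=NodePort --port=80
kubectl get service echo    # note the CLUSTER-IP, which falls in 10.2.0.0/16
curl http://<cluster-ip>/   # rules 350-352 forward this to one of the agent VMs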

5.3.2 K3s Service Load Balancer

K3s has its own option for load balancer Services; you can read the documentation for details. Instead of allocating a new external IP, a load-balanced Service shares the IP address of one of the nodes in the cluster. We will see a demonstration in the next section, when we test our cluster.

Note that with the K3s service load balancer you run a real risk of being unable to create a LoadBalancer-type Service: each such Service claims its port on the nodes themselves, so two Services requesting the same port cannot coexist. Port collisions like this are not usually a problem with other Kubernetes LoadBalancer implementations, which give each Service its own IP address.
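
If you want to peek ahead, a minimal sketch of the behavior (the names and ports are our own):

# Hypothetical demo: with the K3s service load balancer, EXTERNAL-IP comes from a node
kubectl create deployment web --image=nginx
kubectl expose deployment web --type=LoadBalancer --port=8080 --target-port=80
kubectl get service web     # EXTERNAL-IP will be one of the VMs' 10.0.10.x addresses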
