Adventures in Freebernetes: Cloudy with a Chance of Rain

Part 8 of experiments in FreeBSD and Kubernetes: Building a Custom Cloud Image in CBSD


In the previous post in this series, I created a custom VM configuration so I could create Alpine Linux VMs in CBSD. That experiment went well. Next up was creating a cloud image for Alpine to allow completely automated configuration of the target VM. However, that plan hit some roadblocks and required a deep dive down a new rabbit hole, documented in this post.


Great Image Bake-Off

I’m going to try to use my existing Alpine VM to install cloud-init from the edge branch (I have no idea whether it’s compatible, but I guess we will find out). The Alpine package tool, apk, doesn’t seem to support specifying packages for a different branch than the one installed, so I uncomment the edge repositories in the apk configuration.
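
For reference, the edge entries in /etc/apk/repositories end up looking roughly like this once uncommented (the mirror hostname depends on the mirror chosen at install time; mine points at sjc.edge.kernel.org):

# /etc/apk/repositories -- approximate contents after uncommenting the edge lines
http://sjc.edge.kernel.org/alpine/v3.12/main
http://sjc.edge.kernel.org/alpine/v3.12/community
http://sjc.edge.kernel.org/alpine/edge/main
http://sjc.edge.kernel.org/alpine/edge/community
http://sjc.edge.kernel.org/alpine/edge/testing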

alpine1:/etc/apk# vi repositories
alpine1:/etc/apk# apk update
fetch http://sjc.edge.kernel.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://sjc.edge.kernel.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
fetch http://sjc.edge.kernel.org/alpine/edge/main/x86_64/APKINDEX.tar.gz
fetch http://sjc.edge.kernel.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
fetch http://sjc.edge.kernel.org/alpine/edge/testing/x86_64/APKINDEX.tar.gz
v3.12.1-32-g3dc1dba8df [http://sjc.edge.kernel.org/alpine/v3.12/main]
v3.12.1-33-ge462514615 [http://sjc.edge.kernel.org/alpine/v3.12/community]
v20200917-3814-gbddefb148b [http://sjc.edge.kernel.org/alpine/edge/main]
v20200917-3814-gbddefb148b [http://sjc.edge.kernel.org/alpine/edge/community]
v20200917-3810-g1ccc4c3b29 [http://sjc.edge.kernel.org/alpine/edge/testing]
OK: 29824 distinct packages available
alpine1:/etc/apk# apk add cloud-init
(1/64) Installing blkid (2.36-r2)
(2/64) Installing libsmartcols (2.36-r2)
(3/64) Installing partx (2.36-r2)
[ lots of recursive dependencies ]
(62/64) Installing util-linux-openrc (2.36-r2)
(63/64) Installing cloud-init (20.3-r4)
Executing cloud-init-20.3-r4.post-install
Please run setup-cloud-init to enable required init.d services.
You may also want to read file /usr/share/doc/cloud-init/README.Alpine
in the cloud-init-docs package.
(64/64) Installing cloud-init-openrc (20.3-r4)
Executing busybox-1.31.1-r19.trigger
Executing eudev-3.2.9-r3.trigger
OK: 906 MiB in 206 packages
alpine1:/etc/apk#

I can’t easily test whether it will work in this VM: that would mean rebooting and then cleaning up the markers cloud-init leaves to maintain state across reboots so it doesn’t bootstrap more than once. I will just have to test in the new VM. (I also needed to run rc-update add cloud-init default in the VM before I shut it down, but more on that later.)
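
If I did want to re-run cloud-init in place, my understanding is that it keeps its per-instance markers under /var/lib/cloud, and recent releases include a cloud-init clean subcommand to reset them; something like this sketch (untested on this Alpine build) should make it bootstrap again on the next boot:

# reset cloud-init state so the next boot is treated as a first boot
cloud-init clean --logs
# or remove the state directory contents by hand
rm -rf /var/lib/cloud/instance /var/lib/cloud/instances /var/lib/cloud/data
# and make sure it runs at boot under OpenRC
rc-update add cloud-init default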

I can’t find any specific docs in CBSD on how they generate their cloud images, or even what the specific format is, although this doc implies that it’s a ZFS volume.

So, I look at the raw images in /usr/cbsd/src/iso.

root@nucklehead:/usr/cbsd/src/iso # ls -l *raw*
lrwxrwxr-x 1 root cbsd 67 Nov 11 22:49 cbsd-cloud-cloud-Debian-x86-10.4.0.raw -> /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-Debian-x86-10.4.0.raw
lrwxrwxr-x 1 root cbsd 68 Nov 11 13:25 cbsd-cloud-cloud-Ubuntu-x86-20.04.1.raw -> /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-Ubuntu-x86-20.04.1.raw
root@nucklehead:/usr/cbsd/src/iso #

Oh. They’re symbolic links to actual ZFS volume devices. Ok. I create the image directly from alpine1‘s ZFS volume.
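
Before going further, a quick sanity check (my own aside, not something the CBSD docs spell out) should confirm that these live as ZFS volumes under zroot/ROOT/default:

# list ZFS volumes; the cbsd-cloud-* entries are the registered cloud images
zfs list -t volume | grep cbsd-cloud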

root@nucklehead:~ # dd bs=4m if=/dev/zvol/zroot/ROOT/default/alpine1/dsk1.vhd of=/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw
1024+1 records in
1024+1 records out
4295098368 bytes transferred in 8.293736 secs (517872564 bytes/sec)
root@nucklehead:~ #

I left the raw file in ~cbsd/src/iso to see if CBSD would import it into ZFS automatically when I run cbsd bconstruct-tui.

Cloud Seeding

Again, I have no idea if the above steps actually created a bootable image which CBSD will accept, but before I can try, I have to create the cloud VM profile’s configuration.

I have no idea what variables or custom parameters Alpine uses or supports for cloud-init and since I’ve been combing through CBSD and other docs all day, I would just as soon skip digging into cloud-init. I copied over the CBSD configuration file for a cloud Ubuntu VM profile in ~cbsd/etc/defaults and edited that.
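
The copy itself is nothing fancy; something like the following, assuming the Ubuntu cloud profile is named after its image (I’m guessing at the exact source filename here):

cd /usr/cbsd/etc/defaults
# source filename assumed from the Ubuntu cloud image name
cp vm-linux-cloud-Ubuntu-x86-20.04.1.conf vm-linux-cloud-AlpineLinux-3.12.1-x86_64-standard.conf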

Also, I could not find it documented, but the cloud-init template files used to populate VMs are stored in /usr/local/cbsd/modules/bsdconf.d/cloud-tpl. The Debian cloud VM I made earlier used the centos7 templates, so I will try those with Alpine.
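
Since the profile below sets ci_template="alpine", which I assume selects a directory under cloud-tpl, reusing the centos7 templates presumably means copying them to a new alpine directory, roughly:

# reuse the centos7 cloud-init templates under a new "alpine" template name
cp -R /usr/local/cbsd/modules/bsdconf.d/cloud-tpl/centos7 /usr/local/cbsd/modules/bsdconf.d/cloud-tpl/alpine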

Here’s the VM profile I create as /usr/cbsd/etc/defaults/vm-linux-cloud-AlpineLinux-3.12.1-x86_64-standard.conf:

# don't remove this line:
vm_profile="AlpineLinux-3.12.1-x86_64-standard"
vm_os_type="linux"
# this is one-string additional info strings in dialogue menu
long_description="Linux Alpine 3.12.1 cloud image"
# custom settings:
fetch=1
iso_site=""
cbsd_iso_mirrors=""
iso_img="cloud-AlpineLinux-3.12.1-x86_64-standard.raw"
iso_img_dist=""
iso_img_type="cloud"
iso_extract=""
# register_iso as:
register_iso_name="cbsd-cloud-${iso_img}"
register_iso_as="cloud-AlpineLinux-3.12.1-x86_64-standard"
default_jailname="alpine"
# disable profile?
xen_active=1
bhyve_active=1
# Available in ClonOS?
clonos_active=0
# VNC
vm_vnc_port="0"
vm_efi="uefi"
vm_package="small1"
# is template for vm_obtain
is_template=1
is_cloud=1
# Not sure if these matter if it is not downloadable -karen
sha256sum=""
iso_img_dist_size="0"
imgsize_min="0" # 5g min
# enable virtio RNG interface?
virtio_rnd="1"
## cloud-init specific settings ##
ci_template="alpine"
ci_user_pw_root='*'
ci_user_add='alpine'
ci_user_gecos_ubuntu='alpine user'
ci_user_home_ubuntu='/home/alpine'
ci_user_shell_ubuntu='/bin/bash'
ci_user_member_groups_ubuntu='root'
# YES I'M LEAVING IT BLANK -karen
ci_user_pw_ubuntu_crypt=''
ci_user_pubkey_ubuntu=".ssh/authorized_keys"
default_ci_ip4_addr="DHCP" # can be IP, e.g: 192.168.0.100
default_ci_gw4="auto" # can be IP, e.g: 192.168.0.1
ci_nameserver_address="8.8.8.8"
ci_nameserver_search="my.domain"
# apply master_prestart.d/cloud_init_set_netname.sh
ci_adjust_inteface_helper=0
ci_interface="eth0"
## cloud-init specific settings end of ##

And then I create the new VM using that profile.

Screenshot of CBSD Linux profile selections
Screenshot of CBSD interface configuring our cloud Alpine VM
root@nucklehead:~ # cbsd bstart alpine2
cloud-init: enabled
vm_iso_path: cloud-AlpineLinux-3.12.1-x86_64-standard
Original size: 4g, real referenced size/data: 625m
Converting /usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw -> /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw: 4g…
WIP: [0%…10%…22%…33%…40%…46%…55%…73%…78%…92%…99%…99%…100%]
1024+1 records in
1024+1 records out
4295098368 bytes transferred in 19.129860 secs (224523251 bytes/sec)
cloud init image initialization..
Clone cloud image into first/system vm disk (dd method)
WIP: [0%…7%…24%…33%…35%…47%…65%…68%…76%…95%…95%…99%…100%]
1024+1 records in
1024+1 records out
4295229440 bytes transferred in 26.628993 secs (161298979 bytes/sec)
Eject cloud source: media mode=detach name=cloud-AlpineLinux-3.12.1-x86_64-standard path=/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw type=iso jname=alpine2
UPDATE media SET jname='-' WHERE jname="alpine2" AND name="cloud-AlpineLinux-3.12.1-x86_64-standard" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw"
vm_iso_path: changed
Detach to: alpine2
All CD/ISO ejected: alpine2
VRDP is enabled. VNC bind/port: 192.168.0.11:5904
For attach VM console, use: vncviewer 192.168.0.11:5904
Resolution: 1024×768.
em0
bhyve renice: 1
Waiting for PID……….
PID: 0
Wed Nov 18 11:52:17 PST 2020
cmd: env LIB9P_LOGGING=/usr/cbsd/jails-system/alpine2/cbsd_lib9p.log /usr/bin/nice -n 1 /usr/sbin/bhyve 5 bhyve_flags -c 1 -m 1073741824 -H -A -U 63194ce3-29d7-11eb-9bb0-b8aeede991dd -s 0,hostbridge -s 3,virtio-blk,/usr/cbsd/vm/alpine2/dsk1.vhd,sectorsize=512/4096 -s 4,ahci-cd,/usr/cbsd/jails-system/alpine2/seed.iso,ro -s 5,virtio-net,tap2,mtu=1500,mac=00:a0:98:23:d0:61 -s 6,virtio-rnd -s 7,fbuf,tcp=192.168.0.11:5904,w=1024,h=768 -s 30,xhci,tablet -s 31,lpc -l com1,stdio -l bootrom,/usr/local/cbsd/upgrade/patch/efi.fd alpine2
-----
Usage: bhyve [-abehuwxACDHPSWY]
[-c [[cpus=]numcpus][,sockets=n][,cores=n][,threads=n]]
[-g <gdb port>] [-l <lpc>]
[-m mem] [-p vcpu:hostcpu] [-s <pci>] [-U uuid] <vm>
[ option list omitted ]
Please use for debug: /usr/local/cbsd/share/bhyverun.sh -c /usr/cbsd/jails-system/alpine2/bhyve.conf
root@nucklehead:~ #

The good news: CBSD did indeed import the disk file. The bad news: bhyve couldn’t start the VM.

root@nucklehead:~ # ls -l /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud*
crw-r----- 1 root operator 0x58 Nov 18 11:51 /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw
crw-r----- 1 root operator 0x5f Nov 18 11:51 /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.rawp1
crw-r----- 1 root operator 0x81 Nov 18 11:51 /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.rawp2
crw-r----- 1 root operator 0x86 Nov 18 11:51 /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.rawp3
crw-r----- 1 root operator 0x84 Nov 12 20:30 /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-Debian-x86-10.4.0.raw
crw-r----- 1 root operator 0x82 Nov 12 20:30 /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-Ubuntu-x86-20.04.1.raw
root@nucklehead:~ #

Ok, that’s a little weird… There are four raw device files instead of the one the other cloud images have.

root@nucklehead:~ # file /usr/cbsd/vm/alpine2/dsk1.vhd
/usr/cbsd/vm/alpine2/dsk1.vhd: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 8388863 sectors, extended partition table (last)
root@nucklehead:~ # file /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.*
/dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw: character special (0/88)
/dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.rawp1: character special (0/95)
/dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.rawp2: character special (0/129)
/dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.rawp3: character special (0/134)
root@nucklehead:~ # gpart show /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw
=> 2048 8386783 zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw GPT (4.0G) [CORRUPT]
2048 1048576 1 efi (512M)
1050624 2097152 2 linux-swap (1.0G)
3147776 5241055 3 linux-data (2.5G)
root@nucklehead:~ #

OOOOOOOOOkay. I copied alpine1‘s entire virtual disk, which includes the partition table, EFI partition, and swap partition, in addition to the system partition. I’m not sure how CBSD’s import ended up breaking out the partitions, but either way, we only care about the system data, not EFI or swap. We’ll need to extract the root partition into its own raw file.

Great Image Bake-Off, Round 2

I need to get the contents of the linux-data partition out of the vhd file. There are a few ways to do this, but the simplest is to create a memory disk so we can access each partition through a device file. First, we need to get the vhd contents into a regular file, because mdconfig cannot work from the character special device in the ZFS /dev/zvol tree.

root@nucklehead:~ # gpart show /dev/zvol/zroot/ROOT/default/alpine1/dsk1.vhd
gpart: No such geom: /dev/zvol/zroot/ROOT/default/alpine1/dsk1.vhd.
root@nucklehead:~ # file /dev/zvol/zroot/ROOT/default/alpine1/dsk1.vhd
/dev/zvol/zroot/ROOT/default/alpine1/dsk1.vhd: character special (0/145)
root@nucklehead:~ # dd bs=4m if=/dev/zvol/zroot/ROOT/default/alpine1/dsk1.vhd of=alpine1.vhd
1024+1 records in
1024+1 records out
4295098368 bytes transferred in 8.716671 secs (492745275 bytes/sec)
root@nucklehead:~ # mdconfig alpine1.vhd
md0
root@nucklehead:~ # gpart show md0
=> 2048 8386783 md0 GPT (4.0G)
2048 1048576 1 efi (512M)
1050624 2097152 2 linux-swap (1.0G)
3147776 5241055 3 linux-data (2.5G)
root@nucklehead:~ # ls -l /dev/md0*
crw-r----- 1 root operator 0x9c Nov 18 12:39 /dev/md0
crw-r----- 1 root operator 0x9d Nov 18 12:39 /dev/md0p1
crw-r----- 1 root operator 0x9e Nov 18 12:39 /dev/md0p2
crw-r----- 1 root operator 0xa6 Nov 18 12:39 /dev/md0p3

mdconfig created /dev/md0 for the entire virtual hard disk, and because the vhd had a partition table, the md driver (I assume) also created device files for each partition. I just want the third partition, so I need to read from /dev/md0p3.

root@nucklehead:~ # dd bs=4m if=/dev/md0p3 of=linux-data
639+1 records in
639+1 records out
2683420160 bytes transferred in 7.071850 secs (379450963 bytes/sec)
root@nucklehead:~ # file linux-data
linux-data: Linux rev 1.0 ext4 filesystem data, UUID=21091f75-b77d-4eb0-a8a8-b4d15fbe57fc (extents) (64bit) (large files) (huge files)
root@nucklehead:~ #
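
One bit of cleanup worth doing at this point (not shown in my session) is detaching the memory disk now that we have the partition contents:

# detach memory disk unit 0 created by mdconfig above
mdconfig -d -u 0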

Cloud Seeding, Round 2

We now have alpine1‘s root file system in raw format. I copy that file to cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw and try to create my VM again.

root@nucklehead:~ # cbsd bstart alpine2
cloud-init: enabled
vm_iso_path: cloud-AlpineLinux-3.12.1-x86_64-standard
cloud init image initialization..
Clone cloud image into first/system vm disk (zfs clone method)
/sbin/zfs get -Ht snapshot userrefs zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw@boot-alpine2
Eject cloud source: media mode=detach name=cloud-AlpineLinux-3.12.1-x86_64-standard path=/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw type=iso jname=alpine2
UPDATE media SET jname='-' WHERE jname="alpine2" AND name="cloud-AlpineLinux-3.12.1-x86_64-standard" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw"
vm_iso_path: changed
Detach to: alpine2
All CD/ISO ejected: alpine2
VRDP is enabled. VNC bind/port: 192.168.0.11:5904
For attach VM console, use: vncviewer 192.168.0.11:5904
Resolution: 1024×768.
em0
bhyve renice: 1
Waiting for PID.
PID: 6761
root@nucklehead:~ #

It started without error! CBSD once again automatically handles importing the raw file I just dropped into the src/iso directory. Now we need to see, via VNC, whether it actually booted. I open the VNC client and… it times out trying to connect. I go back to the terminal window with the ssh session to my NUC and… that’s hanging. So, I connect my HDMI capture dongle to the NUC and see that FreeBSD has panicked.

Screenshot of FreeBSD console showing a kernel panic

But when the NUC rebooted, it brought up the alpine2 VM without any obvious issues, and I could connect to the console. I don’t know why.

root@nucklehead:~ # cbsd bls
JNAME JID VM_RAM VM_CURMEM VM_CPUS PCPU VM_OS_TYPE IP4_ADDR STATUS VNC
alpine2 11850 1024 0 1 0 linux 10.0.0.2 On 192.168.0.11:5904
arch1 0 1024 0 1 0 linux DHCP Off 192.168.0.10:5901
arch2 0 1024 0 1 0 linux DHCP Off 192.168.0.10:5902
debian1 7948 1024 0 1 0 linux 10.0.0.1 On 192.168.0.11:5903
freebsd1 4537 1024 0 1 0 freebsd DHCP On 192.168.0.11:5900
root@nucklehead:~ #
Screenshot of VNC client showing the new VM successfully booted but did not get reconfigured

However, the new alpine2 VM still thinks it’s alpine1 and did not apply any of the other cloud-init configurations. And even if cloud-init had run successfully, I couldn’t really count it as much of a win if causing the host to panic is part of the deal.

But I’ll worry about the panic problem later. First, I want to diagnose the cloud-init issue. If I run openrc --service cloud-init start in my VM, cloud-init runs and successfully reconfigures the hostname to alpine2. It also rewrites /etc/network/interfaces with the static IP address I had assigned, and… ok, it did not use my password hash for root. Meh, I’ll still consider that a win.

Screenshot of console for alpine2 VM after running cloud-init manually
Screenshot showing rc-status output with cloud-init set to manual

Ok, I’ll need to go back to my source VM and run rc-update add cloud-init default to run cloud-init at boot, then make a new raw image. But I should also figure out what may have caused the FreeBSD kernel panic.
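
In the source VM, that amounts to something like the following (the package’s post-install message also points at setup-cloud-init, which should enable the required OpenRC services for you):

# run cloud-init on every boot by adding it to the default runlevel
rc-update add cloud-init default
# alternatively, let the Alpine helper wire up the services
setup-cloud-init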

After I rebooted the NUC, I saw similar GPT-related messages for the other cloud raw ZFS volumes in the dmesg log.

GEOM: zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw: corrupt or invalid GPT detected.
GEOM: zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw: GPT rejected -- may not be recoverable.

Ok, fine, I should check how the disks in the CBSD-distributed raw cloud images are laid out.

root@nucklehead:~ # dd bs=4m if=/dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-Debian-x86-10.4.0.raw of=debian.raw
1024+1 records in
1024+1 records out
4295098368 bytes transferred in 10.888479 secs (394462666 bytes/sec)
root@nucklehead:~ # file debian.raw
debian.raw: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,1), end-CHS (0x3ff,254,63), startsector 1, 8388607 sectors, extended partition table (last)
root@nucklehead:~ # mdconfig debian.raw
md0
root@nucklehead:~ # ls -l /dev/md0*
crw-r----- 1 root operator 0x91 Nov 18 14:05 /dev/md0
crw-r----- 1 root operator 0x92 Nov 18 14:05 /dev/md0p1
crw-r----- 1 root operator 0x93 Nov 18 14:05 /dev/md0p2
crw-r----- 1 root operator 0x94 Nov 18 14:05 /dev/md0p3
crw-r----- 1 root operator 0x95 Nov 18 14:05 /dev/md0p4
root@nucklehead:~ # gpart show /dev/md0
=> 34 8388541 md0 GPT (4.0G) [CORRUPT]
34 2014 - free - (1.0M)
2048 1048576 1 efi (512M)
1050624 702464 2 linux-data (343M)
1753088 124928 3 linux-swap (61M)
1878016 6508544 4 linux-data (3.1G)
8386560 2015 - free - (1.0M)
root@nucklehead:~ # dd bs=4m if=/dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-Ubuntu-x86-20.04.1.raw of=ubuntu.raw
563+1 records in
563+1 records out
2361524224 bytes transferred in 8.146899 secs (289867855 bytes/sec)
root@nucklehead:~ # file ubuntu.raw
ubuntu.raw: DOS/MBR boot sector, extended partition table (last)
root@nucklehead:~ # mdconfig ubuntu.raw
md1
root@nucklehead:~ # ls -l /dev/md1*
crw-r----- 1 root operator 0xae Nov 18 14:08 /dev/md1
crw-r----- 1 root operator 0xaf Nov 18 14:08 /dev/md1p1
crw-r----- 1 root operator 0xb0 Nov 18 14:08 /dev/md1p14
crw-r----- 1 root operator 0xb1 Nov 18 14:08 /dev/md1p15
root@nucklehead:~ # gpart show /dev/md1
=> 34 4612029 md1 GPT (2.2G) [CORRUPT]
34 2014 - free - (1.0M)
2048 8192 14 bios-boot (4.0M)
10240 217088 15 efi (106M)
227328 4384735 1 linux-data (2.1G)
root@nucklehead:~ #

Ok, the CBSD-distributed raw cloud files are in fact full disk images, including the partition table and other partitions, with no consistency in the number, order, or even types of partitions. So apparently I was overthinking things when I decided I needed to extract the root data partition.

I update the alpine1 image in a non-cloud VM to set cloud-init to run at boot, and then it’s time to try again to create a raw image that won’t cause a kernel panic.

Great Image Bake-Off Finals

I’m not sure what else to try to get a safe image, or even whether I missed something the first time I copied the source ZFS volume contents to a raw file. So, I figure I may as well just try that again with my updated image. I remove the artifacts of the previous attempt and try again.

root@nucklehead:/usr/cbsd/src/iso # rm cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw
root@nucklehead:/usr/cbsd/src/iso # zfs destroy zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw
root@nucklehead:~ # dd bs=4m if=/dev/zvol/zroot/ROOT/default/alpine2/dsk1.vhd of=alpine-update.vhd
1024+1 records in
1024+1 records out
4295098368 bytes transferred in 8.278776 secs (518808354 bytes/sec)
root@nucklehead:~ # cp alpine-update.vhd /usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw

Then I run cbsd bconstruct-tui again with the same settings (except for details like hostname and IP addresses) and start the VM.

root@nucklehead:~ # cbsd bstart alpine3
cloud-init: enabled
vm_iso_path: cloud-AlpineLinux-3.12.1-x86_64-standard
Original size: 4g, real referenced size/data: 625m
Converting /usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw -> /dev/zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw: 4g…
WIP: [0%…10%…21%…29%…43%…49%…58%…71%…76%…92%…99%…99%…100%]
1024+1 records in
1024+1 records out
4295098368 bytes transferred in 15.947733 secs (269323442 bytes/sec)
cloud init image initialization..
Clone cloud image into first/system vm disk (zfs clone method)
/sbin/zfs get -Ht snapshot userrefs zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw@boot-alpine3
Eject cloud source: media mode=detach name=cloud-AlpineLinux-3.12.1-x86_64-standard path=/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw type=iso jname=alpine3
UPDATE media SET jname='-' WHERE jname="alpine3" AND name="cloud-AlpineLinux-3.12.1-x86_64-standard" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw"
vm_iso_path: changed
Detach to: alpine3
All CD/ISO ejected: alpine3
VRDP is enabled. VNC bind/port: 192.168.0.11:5905
For attach VM console, use: vncviewer 192.168.0.11:5905
Resolution: 1024×768.
em0
bhyve renice: 1
Waiting for PID.
PID: 43810
root@nucklehead:~ #
Screenshot of VM console showing a successful boot and cloud-init run

It starts, it boots, cloud-init runs, and as a bonus, no FreeBSD kernel panic! (No, my password hash for the root user did not get applied this time either, and I think /etc/network/interfaces gets written with the correct information because I used the CentOS template, but Alpine does not use the same network configuration method. But do you really want this post to get longer so you can watch me debug that?)


That took a lot of trial and error, and that’s even after I omitted a few dead ends and lots and lots of web searches.

In the next part, I will actually live up to the original premise of this series and start doing real Kubernetty things that involve FreeBSD.

