In the previous post in this series, I created a custom VM configuration so I could create Alpine Linux VMs in CBSD. That experiment went well. Next up was creating a cloud image for Alpine to allow completely automated configuration of the target VM. However, that plan hit some roadblocks and required a deep dive into a new rabbit hole, documented in this post.
Great Image Bake-Off
I’m going to try to use my existing Alpine VM to install cloud-init from the edge branch (I have no idea whether it’s compatible, but I guess we will find out). The Alpine package tool, apk, doesn’t seem to support specifying packages for a different branch than the one installed, so I uncomment the edge repositories in the apk configuration.
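For reference, the change inside the VM amounts to something like this (the mirror URL and exact repository list will vary, and apk’s tagged-repository pinning would be a less blunt alternative; this is just a sketch of the approach I took):

# /etc/apk/repositories -- uncomment (or add) the edge entries
http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community

# then pull cloud-init from edge
apk update
apk add cloud-init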
I can’t easily test whether it will work in the existing VM without rebooting it and then cleaning up the markers cloud-init leaves to maintain state across reboots (so it doesn’t bootstrap more than once). I will just have to test in the new VM. (I also needed to run rc-update add cloud-init default in the VM before I shut it down, but more on that later.)
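(For what it’s worth, if I did want to re-run cloud-init in an already-bootstrapped VM, resetting its state should look roughly like this, assuming a reasonably recent cloud-init:)

# wipe the per-instance markers so the next boot looks like a first boot
cloud-init clean --logs
# or, on versions without the clean subcommand, remove the state by hand
rm -rf /var/lib/cloud/instances /var/lib/cloud/instance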
I can’t find any specific docs in CBSD on how they generate their cloud images, or even what the specific format is, although this doc implies that it’s a ZFS volume.
So, I look at the raw images in /usr/cbsd/src/iso.
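The copy itself is nothing fancy, something along these lines on the host (the block size is incidental; the zvol path is alpine1’s virtual disk, and the file name follows CBSD’s cbsd-cloud-*.raw convention):

# copy alpine1's entire virtual disk into a raw file in CBSD's image directory
dd if=/dev/zvol/zroot/ROOT/default/alpine1/dsk1.vhd \
   of=/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw bs=1m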
I left the raw file in ~cbsd/src/iso to see if CBSD would import it into ZFS automatically when I run cbsd bconstruct-tui.
Cloud Seeding
Again, I have no idea if the above steps actually created a bootable image which CBSD will accept, but before I can try, I have to create the cloud VM profile’s configuration.
I have no idea what variables or custom parameters Alpine uses or supports for cloud-init, and since I’ve been combing through CBSD and other docs all day, I would just as soon skip digging into cloud-init. I copied over the CBSD configuration file for a cloud Ubuntu VM profile in ~cbsd/etc/defaults and edited that.
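In shell terms it was roughly the following; the Ubuntu profile’s filename here is illustrative, since the exact name depends on the CBSD version:

cd /usr/cbsd/etc/defaults
ls vm-linux-cloud-*.conf    # see which cloud profiles ship with this CBSD version
# copy whichever cloud Ubuntu profile is present to the new Alpine profile name
cp vm-linux-cloud-Ubuntu-18.04-amd64.conf \
   vm-linux-cloud-AlpineLinux-3.12.1-x86_64-standard.conf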
Also, I could not find it documented, but the cloud-init template files used to populate VMs are stored in /usr/local/cbsd/modules/bsdconf.d/cloud-tpl. The Debian cloud VM I made earlier used the centos7 templates, so I will try those with Alpine.
Here’s the VM profile I create as /usr/cbsd/etc/defaults/vm-linux-cloud-AlpineLinux-3.12.1-x86_64-standard.conf:
# don't remove this line:
vm_profile="AlpineLinux-3.12.1-x86_64-standard"
vm_os_type="linux"
# this is one-string additional info strings in dialogue menu
4295229440 bytes transferred in 26.628993 secs (161298979 bytes/sec)
Eject cloud source: media mode=detach name=cloud-AlpineLinux-3.12.1-x86_64-standard path=/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw type=iso jname=alpine2
UPDATE media SET jname='-' WHERE jname="alpine2" AND name="cloud-AlpineLinux-3.12.1-x86_64-standard" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw"
vm_iso_path: changed
Detach to: alpine2
All CD/ISO ejected: alpine2
VRDP is enabled. VNC bind/port: 192.168.0.11:5904
For attach VM console, use: vncviewer 192.168.0.11:5904
OOOOOOOOOkay. I copied alpine1’s entire virtual disk, which includes the partition table, EFI partition, and swap partition, in addition to the system partition. I’m not sure how CBSD’s import ended up breaking out the partitions, but either way, we only care about the system data, not EFI or swap. We’ll need to extract the root partition into its own raw file.
Great Image Bake-Off, Round 2
I need to get the contents of the linux-data partition out of the vhd file. There are a few ways to do this, but the simplest is to create a memory disk so we can access each partition through a device file. First, we need to get the vhd contents into a regular file, because mdconfig cannot work from the character special device in the ZFS /dev/zvol tree.
root@nucklehead:~ # gpart show /dev/zvol/zroot/ROOT/default/alpine1/dsk1.vhd
gpart: No such geom: /dev/zvol/zroot/ROOT/default/alpine1/dsk1.vhd.
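So, the workaround: get the data into an ordinary file, attach that file as a memory disk, and let GEOM break out the partitions. Roughly this (the /tmp file name is just for illustration):

# 1. copy the zvol's contents into a regular file that mdconfig can attach
dd if=/dev/zvol/zroot/ROOT/default/alpine1/dsk1.vhd of=/tmp/alpine1-disk.raw bs=1m
# 2. attach the file as a memory disk; mdconfig prints the md unit it creates
mdconfig -a -t vnode -f /tmp/alpine1-disk.raw
# 3. now the partition table is visible
gpart show md0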
mdconfig created /dev/md0 for the entire virtual hard disk, and because the vhd had a partition table, GEOM also created a device file for each partition. I just want the third partition, so I need to read from /dev/md0p3.
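Pulling that partition out into its own file and cleaning up is then just (again, the output file name is arbitrary):

# copy only the third (root) partition into its own raw file
dd if=/dev/md0p3 of=/tmp/alpine1-rootfs.raw bs=1m
# detach the memory disk once we're done with it
mdconfig -d -u 0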
We now have alpine1’s root file system in raw format. I copy that file to cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw and try to create my VM again.
Clone cloud image into first/system vm disk (zfs clone method)
/sbin/zfs get -Ht snapshot userrefs zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw@boot-alpine2
Eject cloud source: media mode=detach name=cloud-AlpineLinux-3.12.1-x86_64-standard path=/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw type=iso jname=alpine2
UPDATE media SET jname='-' WHERE jname="alpine2" AND name="cloud-AlpineLinux-3.12.1-x86_64-standard" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw"
vm_iso_path: changed
Detach to: alpine2
All CD/ISO ejected: alpine2
VRDP is enabled. VNC bind/port: 192.168.0.11:5904
For attach VM console, use: vncviewer 192.168.0.11:5904
It starts without error! CBSD once again automatically handles importing the raw file I just dropped into the src/iso directory. Now we need to see, via VNC, whether it actually booted. I open the VNC client and… it times out trying to connect. I go back to the terminal window with the SSH session to my NUC and… that’s hanging. So, I connect my HDMI capture dongle to the NUC and see that FreeBSD has panicked.
But when the NUC rebooted, it brought up the alpine2 VM without any obvious issues, and I could connect to the console. I don’t know why.
root@nucklehead:~ # cbsd bls
JNAME JID VM_RAM VM_CURMEM VM_CPUS PCPU VM_OS_TYPE IP4_ADDR STATUS VNC
alpine2 11850 1024 0 1 0 linux 10.0.0.2 On 192.168.0.11:5904
arch1 0 1024 0 1 0 linux DHCP Off 192.168.0.10:5901
arch2 0 1024 0 1 0 linux DHCP Off 192.168.0.10:5902
debian1 7948 1024 0 1 0 linux 10.0.0.1 On 192.168.0.11:5903
freebsd1 4537 1024 0 1 0 freebsd DHCP On 192.168.0.11:5900
However, the new alpine2 VM still thinks it’s alpine1 and did not apply any of the other cloud-init configurations. And even if cloud-init had run successfully, I couldn’t really count it as much of a win if causing the host to panic is part of the deal.
But I’ll worry about the panic problem later. First, I want to diagnose the cloud-init issue. If I run openrc --service cloud-init start in my VM, cloud-init runs and successfully reconfigures the hostname to alpine2. It also rewrites /etc/network/interfaces with the static IP address I had assigned, and… ok, it did not use my password hash for root. Meh, I’ll still consider that a win.
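A couple of quick checks inside the VM make it easier to see what cloud-init actually did; assuming the edge package ships the usual cloud-init CLI and log locations, something like:

# overall result of the most recent cloud-init run
cloud-init status --long
# the log shows which modules ran and which ones complained
grep -iE 'warn|error|fail' /var/log/cloud-init.log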
Ok, I’ll need to go back to my source VM and run rc-update add cloud-init default to run cloud-init at boot, then make a new raw image. But I should also figure out what may have caused the FreeBSD kernel panic.
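(Before moving on to the panic, one thing I’m not sure about: the Alpine package may split cloud-init into several OpenRC services, in which case enabling just the main one might not be enough. Worth checking before baking the image:)

# see which cloud-init stages the package installed as OpenRC services
ls /etc/init.d/ | grep cloud
# enable the main service; if the listing shows separate cloud-init-local,
# cloud-config, or cloud-final scripts, rc-update add those as well
rc-update add cloud-init default
rc-update show | grep cloud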
After I rebooted the NUC, I saw GPT-related messages in the dmesg log for my Alpine image, along with similar messages for the other cloud raw ZFS volumes:
GEOM: zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw: corrupt or invalid GPT detected.
GEOM: zvol/zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw: GPT rejected -- may not be recoverable.
Ok, the CBSD-distributed raw cloud files are in fact full disk images, including a partition table and other partitions, with no consistency in the number, order, or even types of partitions. So apparently I was overthinking things when I decided I needed to extract the root data partition.
I update the alpine1 image in a non-cloud VM to set cloud-init to run at boot, and then it’s time to try again to create a raw image that won’t cause a kernel panic.
Great Image Bake-Off Finals
I’m not sure what else to try to get a safe image, or whether I missed something the first time I copied the source ZFS volume contents to a raw file. So I figure I may as well try that again with the updated alpine1 image. I remove the artifacts of the previous attempt and try again.
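The cleanup and re-copy look roughly like this on the host (the exact zfs destroy invocation depends on whether clones of the imported volume are still hanging around):

# drop the raw file from the previous attempt and the volume CBSD imported from it
rm /usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw
zfs destroy -r zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw
# then repeat the same whole-disk dd from the first round, against the updated alpine1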
4295098368 bytes transferred in 15.947733 secs (269323442 bytes/sec)
cloud init image initialization..
Clone cloud image into first/system vm disk (zfs clone method)
/sbin/zfs get -Ht snapshot userrefs zroot/ROOT/default/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw@boot-alpine3
Eject cloud source: media mode=detach name=cloud-AlpineLinux-3.12.1-x86_64-standard path=/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw type=iso jname=alpine3
UPDATE media SET jname='-' WHERE jname="alpine3" AND name="cloud-AlpineLinux-3.12.1-x86_64-standard" AND path="/usr/cbsd/src/iso/cbsd-cloud-cloud-AlpineLinux-3.12.1-x86_64-standard.raw"
vm_iso_path: changed
Detach to: alpine3
All CD/ISO ejected: alpine3
VRDP is enabled. VNC bind/port: 192.168.0.11:5905
For attach VM console, use: vncviewer 192.168.0.11:5905
It starts, it boots, cloud-init runs, and as a bonus, no FreeBSD kernel panic! (No, my password hash for the root user did not get applied this time either, and I think /etc/network/interfaces gets written with the correct information because I used the CentOS template, but Alpine does not use the same network configuration method. But do you really want this post to get longer so you can watch me debug that?)
That took a lot of trial and error, and that’s after omitting a few dead ends and lots and lots of web searches from this post.
In the next part, I will actually live up to the original premise of this series and start doing real Kubernetty things that involve FreeBSD.