Adventures in Freebernetes: Tripping to the Finish Line

Part 12 of experiments in FreeBSD and Kubernetes: Completing and testing the Kubernetes Cluster

See all posts in this series


Recap

In the last post, I bootstrapped my cluster’s control plane, both the etcd cluster and Kubernetes components, following the tutorial in Kubernetes the Hard Way. In this post, I will bootstrap the worker nodes, but first, I need to fix a few issues.

A few details for reference:

  • My hypervisor is named nucklehead (it’s an Intel NUC) and is running FreeBSD 13.0-CURRENT
  • My home network, including the NUC, is in the 192.168.0.0/16 space
  • The Kubernetes cluster will exist in the 10.0.0.0/8 block, which exists solely on my FreeBSD host.
    • The controllers and workers are in the 10.10.0.0/16 block.
    • The internal service network is in the 10.20.0.0/24 (changed from 10.50.0.0/24) block.
    • The cluster pod network is in the 10.100.0.0/16 block.
  • The cluster VMs are all in the something.local domain.
  • The kubernetes.something.local endpoint for kube-apiserver has the virtual IP address 10.50.0.1, which gets round-robin load-balanced across all three controllers by ipfw on the hypervisor.
  • Yes, I am just hanging out in a root shell on the hypervisor.

Fixes

I Need This in a Larger Size

First off, the disks on the controllers filled up. The bulk of the usage came from etcd data in /var/lib/etcd. As I noted, I had only allocated 10 GB to each VM's disk, because I don’t have an infinite amount of storage on the NUC. However, I still have enough space on the hypervisor’s disk to grow each controller’s disk, especially because their virtual disks are ZFS clones of a snapshot of the base image. ZFS clones use copy-on-write (COW): they only consume additional space for blocks that change on the cloned volume.

Fortunately, as I also noted, using ZFS volumes and CBSD makes it pretty easy to increase the size of a guest VM’s virtual disk:

  • Stop the VM
  • Use the cbsd bhyve-dsk command to grow the VM’s virtual disk
  • Run gpart against the volume’s device on the hypervisor to grow the partition. Note this is only safe if it’s the last partition on the disk. Doing it from the hypervisor means we don’t have to rewrite the partition table of the live VM, which would mean either booting a rescue CD (which I’m too lazy to figure out) or modifying a mounted disk’s partition table from inside the running VM, which can be done but is a bit scary.
  • Restart the VM
  • Log in to the VM and run resize2fs to grow the filesystem into the resized partition.
root@nucklehead:~ # cbsd bstop controller-0
Send SIGTERM to controller-0. Soft timeout is 30 sec. 0 seconds left […………………………]
bstop done in 5 seconds
root@nucklehead:~ # cbsd bhyve-dsk controller-0 mode=list
JNAME DSK_CONTROLLER DSK_PATH DSK_SIZE DSK_SECTORSIZE BOOTABLE
ubuntu-base virtio-blk dsk1.vhd 10g 512/4096 true
controller-0 virtio-blk dsk1.vhd 10g 512/4096 true
controller-1 virtio-blk dsk1.vhd 10g 512/4096 true
controller-2 virtio-blk dsk1.vhd 30g 512/4096 true
worker-0 virtio-blk dsk1.vhd 10g 512/4096 true
worker-1 virtio-blk dsk1.vhd 10g 512/4096 true
worker-2 virtio-blk dsk1.vhd 10g 512/4096 true
root@nucklehead:~ # cbsd bhyve-dsk mode=modify jname=controller-0 dsk_controller=virtio-blk dsk_path=dsk1.vhd dsk_size=30g
resize zroot/ROOT/default/controller-0/dsk1.vhd up to 32212287488
modify_dsk_size: volume size increased by: 20g
dsk_size: changed
root@nucklehead:~ # zfs get volsize zroot/ROOT/default/controller-0/dsk1.vhd
NAME PROPERTY VALUE SOURCE
zroot/ROOT/default/controller-0/dsk1.vhd volsize 30.0G local
root@nucklehead:~ # gpart list | grep dsk1.vhd
Geom name: zvol/zroot/ROOT/default/controller-0/dsk1.vhd
1. Name: zvol/zroot/ROOT/default/controller-0/dsk1.vhdp1
2. Name: zvol/zroot/ROOT/default/controller-0/dsk1.vhdp2
1. Name: zvol/zroot/ROOT/default/controller-0/dsk1.vhd
root@nucklehead:~ # gpart show zvol/zroot/ROOT/default/controller-0/dsk1.vhd
=> 40 62914544 zvol/zroot/ROOT/default/controller-0/dsk1.vhd GPT (30G)
40 2008 - free - (1.0M)
2048 1048576 1 efi (512M)
1050624 19921119 2 linux-data (9.5G)
20971743 41942841 - free - (20G)
root@nucklehead:~ # gpart resize -i 2 zvol/zroot/ROOT/default/controller-0/dsk1.vhd
zvol/zroot/ROOT/default/controller-0/dsk1.vhdp2 resized
root@nucklehead:~ # gpart show zvol/zroot/ROOT/default/controller-0/dsk1.vhd
=> 40 62914544 zvol/zroot/ROOT/default/controller-0/dsk1.vhd GPT (30G)
40 2008 - free - (1.0M)
2048 1048576 1 efi (512M)
1050624 61863936 2 linux-data (29G)
62914560 24 - free - (12K)
root@nucklehead:~ # cbsd bstart controller-0
cloud-init: enabled
VRDP is enabled. VNC bind/port: 192.168.0.9:5901
For attach VM console, use: vncviewer 192.168.0.9:5901
Resolution: 1024×768.
em0
bhyve renice: 1
Execute master script: cloud_init_set_netname.sh
:: /usr/cbsd/jails-system/controller-0/master_prestart.d/cloud_init_set_netname.sh
Waiting for PID.
PID: 78041
CBSD setup: bhyve ipfw counters num: 99/102
root@nucklehead:~ # ssh -i ~cbsd/.ssh/id_rsa ubuntu@controller-0
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-54-generic x86_64)
[…]
ubuntu@controller-0:~$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda2 9738656 9722272 0 100% /
ubuntu@controller-0:~$ sudo resize2fs /dev/vda2
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/vda2 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 4
The filesystem on /dev/vda2 is now 7732992 (4k) blocks long.
ubuntu@controller-0:~$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda2 30380996 9730476 19297372 34% /
ubuntu@controller-0:~$

I increase the virtual disks on all three controllers and everything is up and running again, except etcd on one controller, which won’t restart because of a corrupted data file. I read more docs and figure out I need to stop etcd on that host, remove the bad member from the cluster, delete the contents of /var/lib/etcd, then re-add the member to the cluster. etcdctl outputs several lines that need to be added to the errant member’s /etc/etcd/etcd.conf before restarting etcd on that host. (I forgot to grab the shell output.) etcd came up, joined the existing cluster, and started streaming a data snapshot from an existing member.
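Since I lost the transcript, here is a rough sketch of the recovery, following etcd’s standard member-replacement procedure rather than my exact commands. The member ID, controller name, and peer IP are placeholders; the certificate paths match the etcdctl invocations elsewhere in this post.

# On a healthy controller: find, then remove, the corrupted member
# (the ID comes from `member list`; shown here as a placeholder).
sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
sudo ETCDCTL_API=3 etcdctl member remove <member-id> \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

# On the broken controller: stop etcd and wipe its data directory.
sudo systemctl stop etcd
sudo rm -rf /var/lib/etcd/*

# Back on a healthy controller: re-add the member. This prints the
# ETCD_NAME / ETCD_INITIAL_CLUSTER / ETCD_INITIAL_CLUSTER_STATE values
# to copy into the rejoining member's etcd configuration.
sudo ETCDCTL_API=3 etcdctl member add <controller-name> \
  --peer-urls=https://<controller-ip>:2380 \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

# Finally, restart etcd on the rejoining member.
sudo systemctl start etcd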

Except kube-apiserver keeps writing its own self-signed TLS certificate to /var/run/kubernetes, and that certificate’s SAN (Subject Alternative Name) list contains only the primary IP address on the primary interface plus the gateway IP address, while connections to etcd and the other Kubernetes control plane services go over the loopback interface.

Space

I check the documentation: kube-apiserver should only generate its own certificate if it isn’t given a certificate and key via the --tls-cert-file and --tls-private-key-file options at start time. They’re both present and correctly set in /etc/systemd/system/kube-apiserver.service, so I am confused. Then, while checking the etcd cluster, I happen to run ps on one of the controllers, and the command’s arguments look odd. As in, they end with a trailing \. I look at /etc/systemd/system/kube-apiserver.service and, sure enough, there’s a trailing space after the end-of-line continuation \ on one of the argument lines. The ‘\ ’ gets interpreted not as an end-of-line continuation but as a literal string, so the options on the following lines never reach kube-apiserver.

systemd supports a list of start commands, so it started kube-apiserver without error and only logged an error when it couldn’t run the trailing options as an actual command. But since the service process itself started without issue, I didn’t notice. Removing the trailing space from the file fixed the issue.
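If you want to catch this kind of thing before it bites, one low-tech check (a sketch; adjust the paths to taste) is to grep the unit files for lines that end in a backslash followed by whitespace:

# Any hit is a "\ " that systemd will not treat as a line continuation.
grep -nE '\\ +$' /etc/systemd/system/kube-*.service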

ubuntu@controller-1:~$ ps -efl | grep kube-apiserver
4 S root 1797 1 12 80 0 - 141067 - 03:00 ? 00:03:27 /usr/local/bin/kube-apiserver --advertise-address=10.10.0.11 --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/audit.log --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --client-ca-file=/var/lib/kubernetes/ca.pem --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --etcd-cafile=/var/lib/kubernetes/ca.pem --etcd-certfile=/var/lib/kubernetes/kubernetes.pem --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem --etcd-servers=https://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379 \
0 S ubuntu 4479 4403 0 80 0 - 1608 pipe_w 03:28 pts/0 00:00:00 grep --color=auto kube-apiserver
ubuntu@controller-1:~$ sudo vi /etc/systemd/system/kube-apiserver.service
ubuntu@controller-1:~$ sudo systemctl daemon-reload
ubuntu@controller-1:~$ sudo systemctl restart kube-apiserver
ubuntu@controller-1:~$ ps -efl | grep kube-apiserver
4 S root 4568 1 73 80 0 - 141467 - 03:32 ? 00:00:07 /usr/local/bin/kube-apiserver --advertise-address=10.10.0.11 --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/audit.log --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --client-ca-file=/var/lib/kubernetes/ca.pem --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --etcd-cafile=/var/lib/kubernetes/ca.pem --etcd-certfile=/var/lib/kubernetes/kubernetes.pem --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem --etcd-servers=https://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379 --event-ttl=1h --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem --kubelet-https=true --runtime-config=api/all=true --service-account-key-file=/var/lib/kubernetes/service-account.pem --service-cluster-ip-range=10.50.0.0/24 --service-node-port-range=30000-32767 --tls-cert-file=/var/lib/kubernetes/kubernetes.pem --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem --v=2
0 S ubuntu 4592 4403 0 80 0 - 1608 pipe_w 03:33 pts/0 00:00:00 grep --color=auto kube-apiserver
ubuntu@controller-1:~$

Now that these issues are taken care of, the control plane hosts should be stable and reliable, at least for more than 24 hours.

Bootstrapping the Kubernetes Worker Nodes

Most of this section is straightforward, other than updating IP addresses and ranges.

When I created the worker VMs, I added, through some hackery of CBSD’s cloud-init data handling, a pod_cidr field to the instance metadata to configure each worker with its unique slice of the pod network. cloud-init puts the metadata in /run/cloud-init/instance-data.json. We need this value now to configure the CNI (Container Network Interface) plugin.

[Screenshot: tmux showing the pod_cidr value on each worker node (I still haven’t found a better terminal type).]
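For anyone following along, here’s roughly what that looks like on each worker. The jq path into instance-data.json is a guess at how CBSD’s metadata ends up exposed (inspect the file with jq '.' first, and this assumes jq is installed); the bridge config itself is the tutorial’s 10-bridge.conf with the per-worker subnet substituted in.

# Pull this worker's pod CIDR out of the cloud-init metadata.
# (The exact JSON path may differ depending on the datasource.)
POD_CIDR=$(jq -r '.ds.meta_data.pod_cidr' /run/cloud-init/instance-data.json)

# Write the CNI bridge config, as in Kubernetes the Hard Way.
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF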

After finishing the configuration, everything looks as expected.

root@nucklehead:~ # ssh -i ~cbsd/.ssh/id_rsa ubuntu@controller-0 "kubectl get nodes --kubeconfig admin.kubeconfig"
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 3m38s v1.18.6
worker-1 Ready <none> 3m38s v1.18.6
worker-2 Ready <none> 3m38s v1.18.6
root@nucklehead:~ #

Configuring kubectl for Remote Access

This section only requires setting the KUBERNETES_PUBLIC_ADDRESS variable to my VIP for the kube API endpoint.
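For reference, that boils down to the tutorial’s kubectl config commands with my VIP dropped in. This is a sketch, run from the directory holding the admin certificates, and it assumes kube-apiserver is on its default secure port 6443.

# kubernetes.something.local resolves to the ipfw-balanced VIP.
KUBERNETES_PUBLIC_ADDRESS=10.50.0.1

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true

kubectl config set-context kubernetes-the-hard-way \
  --cluster=kubernetes-the-hard-way \
  --user=admin

kubectl config use-context kubernetes-the-hard-way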

Provisioning Pod Network Routes

This part gets a little more complicated. The tutorial relies on Google Compute Engine’s inter-VM networking and routing abilities. However, since all the inter-VM traffic for this cluster goes through the FreeBSD bridge1 interface, where ipfw is already doing all kinds of heavy lifting, I can use ipfw to handle the routing for the pod network as well.

root@nucklehead:~ # for i in 0 1 2; do
ipfw add 25${i} fwd 10.10.0.2${i} ip from any to 10.100.${i}.0/24 keep-state
done
00250 fwd 10.10.0.20 ip from any to 10.100.0.0/24 keep-state :default
00251 fwd 10.10.0.21 ip from any to 10.100.1.0/24 keep-state :default
00252 fwd 10.10.0.22 ip from any to 10.100.2.0/24 keep-state :default
root@nucklehead:~ #

Deploying the DNS Cluster Add-on

In this section, since my IP blocks differ from the tutorial’s, I had to download and edit the YAML for the DNS deployment.
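Concretely, the edit just points the manifest’s clusterIP at my DNS service address instead of the tutorial’s. Something like this sketch, assuming the manifest has been saved locally as coredns.yaml and that my DNS Service IP at this point was still 10.50.0.10, as the nslookup output below shows:

# The tutorial's CoreDNS manifest hard-codes clusterIP 10.32.0.10;
# swap in the address from my service network and apply it.
sed -i 's/10\.32\.0\.10/10.50.0.10/' coredns.yaml
grep clusterIP coredns.yaml   # sanity check before applying
kubectl apply -f coredns.yaml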

At this point I end up massaging my cluster network a bit: I change the cluster network’s netmask from /8 to /16 and add the alias 10.10.0.1 to the bridge interface on the FreeBSD hypervisor, so the 10.10.0.0/16 VM network has a gateway again now that it no longer shares a CIDR block with the old gateway, 10.0.0.1.
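The hypervisor half of that change is a one-liner; a sketch, assuming bridge1 is the bridge the cluster VMs are attached to:

# Add a gateway address for the 10.10.0.0/16 VM network on the bridge.
ifconfig bridge1 alias 10.10.0.1/16
ifconfig bridge1 | grep 'inet '   # confirm the alias took

# To survive a reboot, something like this would go in /etc/rc.conf:
# ifconfig_bridge1_alias0="inet 10.10.0.1/16"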

napalm@nucklehead:~ $ kubectl exec -it busybox -- nslookup kubernetes
Server: 10.50.0.10
Address 1: 10.50.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
napalm@nucklehead:~ $

Oddly, though, my cluster ended up assigning 10.0.0.1 to the cluster’s internal Kubernetes API proxy Service, even though that address is outside the configured 10.50.0.0/24 Service network. I wonder whether that happened because 10.50.0.1 was already a routable IP address within the cluster, since I had configured it as the external Kubernetes API endpoint and as a virtual IP on the controllers.

Either way, I need to move the “public” API endpoint address out of the Service CIDR block. It’s arguably easier to change the Service CIDR for the existing cluster, since it’s only set as arguments to kube-apiserver and kube-controller-manager. I will also need to regenerate the kubernetes.pem used by kube-apiserver to add the new internal Service IP address to the certificate’s list of accepted server names.
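Regenerating that certificate follows the same cfssl step as the original provisioning. This is a sketch run from the directory holding the CA material; the hostname list is reconstructed from my cluster layout (controller IPs, the external VIP, the new internal Service IP, loopback, and the usual in-cluster names), so double-check it against your own.

# Re-issue the API server certificate with the new internal service IP
# (10.20.0.1) alongside the controllers, the external VIP, loopback,
# and the in-cluster kubernetes names.
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=10.20.0.1,10.10.0.10,10.10.0.11,10.10.0.12,10.50.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,kubernetes.something.local \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

# Then copy the new kubernetes.pem / kubernetes-key.pem to each
# controller and restart kube-apiserver (and etcd, which uses the
# same certificate in this cluster).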

napalm@nucklehead:~ $ kubectl get service -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2d12h
napalm@nucklehead:~ $ kubectl delete svc/kubernetes
service "kubernetes" deleted
napalm@nucklehead:~ $ kubectl get service -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.50.0.1 <none> 443/TCP 4s
napalm@nucklehead:~ $
[ update systemd unit files and restart kube-apiserver and kube-controller-manager on the controllers ]
napalm@nucklehead:~ $ kubectl delete svc/kubernetes
service "kubernetes" deleted
napalm@nucklehead:~ $ kubectl get service -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.20.0.1 <none> 443/TCP 1s
napalm@nucklehead:~ $

I was so focused on getting the updated certificate files copied around and restarting the dependent services that I initially forgot to update the --service-cluster-ip-range option for kube-apiserver and kube-controller-manager. I’m still not completely sure why the service was originally given the cluster IP 10.0.0.1; I could have tested that by deleting the kubernetes Service before making any changes, but I didn’t think of it until afterward. Once everything was updated, the Service was recreated with the 10.20.0.1 address, as expected.
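For the record, it’s the same flag name in both unit files, and a quick grep confirms both ended up pointing at the new block (assuming the controller-manager unit lives next to the apiserver’s, as in the tutorial):

grep -H 'service-cluster-ip-range' \
  /etc/systemd/system/kube-apiserver.service \
  /etc/systemd/system/kube-controller-manager.service
# Both should now show --service-cluster-ip-range=10.20.0.0/24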

napalm@nucklehead:~ $ kubectl exec -ti busybox -- nslookup kubernetes
Server: 10.20.0.10
Address 1: 10.20.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.20.0.1 kubernetes.default.svc.cluster.local
napalm@nucklehead:~ $

Smoke Test

Now it’s time to smoke test the cluster. The data encryption and deployment tests work fine without modification.

root@nucklehead:~ # kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"
secret/kubernetes-the-hard-way created
root@nucklehead:~ # ssh -i ~cbsd/.ssh/id_rsa ubuntu@controller-0 "sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem \
/registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a b9 4f 7a c8 d3 b0 fb |:v1:key1:.Oz….|
00000050 f9 a8 e5 9f c1 ab 96 d5 09 13 5e 3f 4f 95 2c 44 |……….^?O.,D|
00000060 64 52 7d ef 46 18 45 08 61 b1 4c 0a 4f 9d f7 46 |dR}.F.E.a.L.O..F|
00000070 79 90 7f 5d e3 56 0e 8c 9c ab 7f a8 26 57 5e 0b |y..].V……&W^.|
00000080 0f 94 92 55 ec 9a 5c 97 5a c9 71 d5 79 91 01 a4 |…U..\.Z.q.y…|
00000090 24 b9 64 89 d2 bf 9c 0a 7c e3 88 1a dc ec 46 f2 |$.d…..|…..F.|
000000a0 c5 ef 98 fc 00 a0 35 8c cf 2c 79 8f 07 67 f6 e0 |……5..,y..g..|
000000b0 21 64 09 42 48 c1 5a de f1 00 53 c1 20 86 4b 01 |!d.BH.Z…S. .K.|
000000c0 fc 1c 25 a5 e9 a7 03 4e 2e 53 f8 cb 38 7a fb bd |..%….N.S..8z..|
000000d0 6a 89 98 e5 49 04 d7 55 41 7a 84 0f 68 36 ac d6 |j…I..UAz..h6..|
000000e0 db a5 fc 4e 81 df 0a c3 d8 a0 73 82 22 92 ba a3 |…N……s."…|
000000f0 f8 38 80 e0 eb 37 e1 96 a3 24 b4 4e 2c 9e 56 60 |.8…7…$.N,.V`|
00000100 86 da 59 d2 29 bb af de 86 a2 a4 f8 a5 b2 d7 19 |..Y.)………..|
00000110 d3 db 21 4a ad ad 72 c7 86 de 71 f6 29 a8 61 f0 |..!J..r…q.).a.|
00000120 be 80 44 de 6d 65 95 9b b9 e1 5b 5d 03 3e 6f 8f |..D.me….[].>o.|
00000130 e2 c0 31 08 0e 73 93 bc fd 24 66 5c 61 f6 76 a8 |..1..s…$f\a.v.|
00000140 8a 51 85 68 bc fb a3 ad fa 74 ef be 6a f6 14 85 |.Q.h…..t..j…|
00000150 5d 2e cf ad 41 a4 b6 1b 72 0a |]…A…r.|
0000015a
root@nucklehead:~ #
root@nucklehead:~ # kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
root@nucklehead:~ # kubectl get pods -l app=nginx
NAME READY STATUS RESTARTS AGE
nginx-f89759699-qqzvg 1/1 Running 0 3m28s
root@nucklehead:~ # POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
root@nucklehead:~ # kubectl port-forward $POD_NAME 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
^Z[1] + Suspended kubectl port-forward ${POD_NAME} 8080:80
root@nucklehead:~ # bg
[1] kubectl port-forward ${POD_NAME} 8080:80
root@nucklehead:~ # curl --head http://127.0.0.1:8080
Handling connection for 8080
HTTP/1.1 200 OK
Server: nginx/1.19.5
Date: Tue, 01 Dec 2020 00:41:17 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 24 Nov 2020 13:02:03 GMT
Connection: keep-alive
ETag: "5fbd044b-264"
Accept-Ranges: bytes
root@nucklehead:~ # fg
kubectl port-forward ${POD_NAME} 8080:80
^Croot@nucklehead:~ #
root@nucklehead:~ # kubectl logs $POD_NAME
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
127.0.0.1 - - [01/Dec/2020:00:41:17 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.73.0" "-"
root@nucklehead:~ # kubectl exec -ti $POD_NAME -- nginx -v
nginx version: nginx/1.19.5
root@nucklehead:~ #

For the NodePort Service, I just need to connect directly to a worker IP from the hypervisor, because the hypervisor has a direct route to the cluster network.

root@nucklehead:~ # kubectl expose deployment nginx --port 80 --type NodePort
service/nginx exposed
root@nucklehead:~ # NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
root@nucklehead:~ # curl -I http://10.10.0.20:${NODE_PORT}/
HTTP/1.1 200 OK
Server: nginx/1.19.5
Date: Tue, 01 Dec 2020 01:30:19 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 24 Nov 2020 13:02:03 GMT
Connection: keep-alive
ETag: "5fbd044b-264"
Accept-Ranges: bytes
root@nucklehead:~ #

And that’s it!


This series of posts shows one way you can run a Kubernetes cluster on FreeBSD: using bhyve virtual machines (managed with CBSD) to provide the traditional, supported Linux environment that Kubernetes expects. My next post will look at some potential alternatives in various stages of development and support.
