Page 4: Provisioning a CA and Generating TLS Certificates
You can follow all the original steps in this section, with a few exceptions.
Note: You can use any common names, locations, etc., in the JSON certificate signing request (CSR) files, as long as they are consistent and match your CA (certificate authority).
The Kubelet Client Certificates section:
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "At Home",
      "O": "system:nodes",
      "OU": "Kubernetes The Harder Way",
      "ST": "California"
    }
  ]
}
EOF

INTERNAL_IP=$(host ${instance} | awk '{print $4}')

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${instance},${INTERNAL_IP} \
  -profile=kubernetes \
  ${instance}-csr.json | cfssljson -bare ${instance}
done
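A quick way to spot-check each generated certificate is to read its subject back with openssl. The snippet below is not part of the original steps; it demonstrates the check on a throwaway self-signed certificate so it runs anywhere, but against the real files you would point the final command at worker-0.pem (and its siblings).

```shell
# Demo on a throwaway certificate (no cfssl required); for the real
# check, run the last command on worker-0.pem instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/node-demo-key.pem -out /tmp/node-demo.pem \
  -subj "/C=US/ST=California/L=At Home/O=system:nodes/OU=Kubernetes The Harder Way/CN=system:node:worker-0"

# The subject must carry O=system:nodes and CN=system:node:<hostname>;
# the API server's Node authorizer relies on exactly these values to
# authorize kubelet requests.
openssl x509 -in /tmp/node-demo.pem -noout -subject
```

If the CN or O fields come out wrong, the kubelet will be able to connect but not to act as a node.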
The Kubernetes API Server Certificate section:
KUBERNETES_PUBLIC_ADDRESS=10.240.0.2

KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local,kubernetes.hardk8s.local

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "At Home",
      "O": "Kubernetes",
      "OU": "Kubernetes The Harder Way",
      "ST": "California"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
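Because the API server is reached by so many names and addresses, it is worth confirming the subject alternative names (SANs) afterwards. This snippet is an extra check, not one of the original steps; it demonstrates the inspection on a throwaway certificate (using openssl 1.1.1+'s -addext/-ext options) so it runs without cfssl. For the real check, run the last command against kubernetes.pem and confirm every controller IP, 10.32.0.1, 127.0.0.1, the public address, and the cluster hostnames are listed.

```shell
# Throwaway certificate with a SAN list shaped like the API server's
# (a small illustrative subset, not the full list from the tutorial).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver-demo-key.pem -out /tmp/apiserver-demo.pem \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:10.32.0.1,IP:127.0.0.1,DNS:kubernetes.default.svc"

# Print the SAN extension; any name missing here causes TLS verification
# failures when a client reaches the API server by that name or IP.
openssl x509 -in /tmp/apiserver-demo.pem -noout -ext subjectAltName
```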
Distribute the Client and Server Certificates section:
for instance in worker-0 worker-1 worker-2; do
  scp -i ~cbsd/.ssh/id_rsa -oStrictHostKeyChecking=no ca.pem ${instance}-key.pem ${instance}.pem ubuntu@${instance}:~/
done

for instance in controller-0 controller-1 controller-2; do
  scp -i ~cbsd/.ssh/id_rsa -oStrictHostKeyChecking=no ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem ubuntu@${instance}:~/
done
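Before shipping the files out, it can save a debugging round-trip to confirm that each certificate actually matches its private key. This check is an addition of mine, not from the original post; it compares public-key digests, demonstrated here on a throwaway pair so the snippet is self-contained (for the real files, substitute e.g. worker-0.pem and worker-0-key.pem).

```shell
# Generate a throwaway key/certificate pair to demonstrate the check.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/pair-demo-key.pem -out /tmp/pair-demo.pem -subj "/CN=pair-demo"

# Extract the public key from each side and compare digests; a mismatch
# means the certificate was not issued for that private key.
cert_pub=$(openssl x509 -in /tmp/pair-demo.pem -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in /tmp/pair-demo-key.pem -pubout | openssl sha256)
[ "$cert_pub" = "$key_pub" ] && echo "certificate and key match"
```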
What a fantastic and interesting job you’ve done! I will definitely try!
Question: as far as I understand, you are not using any K8s CNI plugin (Calico, Flannel, …). How does your cluster work with multiple nodes (pod IP addresses, connectivity)?
It is actually using a CNI plugin (https://github.com/containernetworking/plugins) although it just creates a basic bridge for the container network. Most CNI plugins should work fine on this cluster, which does actually have three worker nodes, and I’ve tested pod connectivity between nodes. A simple test for full CNI functionality would be to install Calico and test a NetworkPolicy.