
I did some research on enabling the Harbor Container Registry with self-signed certificates. This write-up covers my findings on the limitations of the current vCenter-integrated Tanzu service and offers a workaround.
Limitations
In contrast to the multi-cloud TKG implementation, the Tanzu integrated service is not customizable yet. The nice approach documented here is not an option for the latest version. I did a lot of testing with my own certificate authority and have two open issues so far:
- The HAProxy (two-NIC approach) does not use the certificates entered in the OVA properties. I had to manually replace ca.crt, server.crt and server.key in /etc/haproxy.
- Signing the certificate of the management cluster as described in this blog post works nicely, but I could not deploy workload clusters while these certificates were active.
Meanwhile, I learned how to access the management cluster in the vCenter-integrated service as well, and I want to invest some more time in debugging these issues. For now, I am working with a self-signed certificate on both HAProxy and the management cluster.
Adding your own CAs to Tanzu Cluster
After searching the web for reasonable solutions, I finally found this approach, which worked well for distributing the certificates. A DaemonSet is the perfect solution for Kubernetes, because if a node gets replaced or added, the DaemonSet will automatically run on the new node. I optimized it slightly: some tdnf calls are not really needed ( tdnf update, tdnf install -y ca-certificates ).
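The pattern boils down to a DaemonSet that mounts the node's trust directory via hostPath and copies the CA into it. Below is a minimal sketch of that idea; the ConfigMap name, image, and the Photon OS trust path are my assumptions, not the original manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ca-trust-updater
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: ca-trust-updater
  template:
    metadata:
      labels:
        app: ca-trust-updater
    spec:
      containers:
      - name: updater
        image: photon:3.0
        command: ["/bin/sh", "-c"]
        # Copy the CA into the node's trust directory, then idle so the
        # pod stays Running and gets scheduled onto newly added nodes.
        args:
        - cp /custom-ca/ca.crt /host-certs/niceneasy-ca.pem && sleep infinity
        volumeMounts:
        - name: custom-ca
          mountPath: /custom-ca
        - name: host-certs
          mountPath: /host-certs
      volumes:
      - name: custom-ca
        configMap:
          name: custom-ca         # holds ca.crt; create it beforehand
      - name: host-certs
        hostPath:
          path: /etc/ssl/certs    # assumed trust directory on Photon OS nodes
```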
BUT: since containerd does not pick up new certificates dynamically, you have to log in to each node and restart containerd. It would be possible to gain access from the container to the underlying node, but security-wise this is a nightmare…
How do you log in to the workload cluster? (In my setup, niceneasy = management cluster, management = workload cluster.)
```shell
daniele@ubuntu-dt:~/dev/tkg-extensions/monitoring/prometheus/base-files$ kubectx niceneasy
Switched to context "niceneasy".
daniele@ubuntu-dt:~/dev/tkg-extensions/monitoring/prometheus/base-files$ kubectl get secrets
NAME                      TYPE     DATA   AGE
...
management-ssh-password   Opaque   1      20h
...
daniele@ubuntu-dt:~/dev/tkg-extensions/monitoring/prometheus/base-files$ kubectl get secret management-ssh-password -o jsonpath='{.data.ssh-passwordkey}' | base64 -d
/ibatflDiwWdmvIyVMR8bqzppG4c18R0w3C4Ww2ixXs=
daniele@ubuntu-dt:~/dev/tkg-extensions/monitoring/prometheus/base-files$ kubectx management
Switched to context "management".
daniele@ubuntu-dt:~/dev/tkg-extensions/monitoring/prometheus/base-files$ kubectl get nodes -o wide
NAME                                       STATUS   ROLES    AGE   VERSION             INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                 KERNEL-VERSION        CONTAINER-RUNTIME
management-control-plane-x4h55             Ready    master   20h   v1.18.10+vmware.1   10.0.10.6     <none>        VMware Photon OS/Linux   4.19.154-3.ph3-esx    containerd://1.3.4
management-workers-fd557-f4fd84dc6-fqtlt   Ready    <none>   20h   v1.18.10+vmware.1   10.0.10.7     <none>        VMware Photon OS/Linux   4.19.154-3.ph3-esx    containerd://1.3.4
management-workers-fd557-f4fd84dc6-qmz64   Ready    <none>   20h   v1.18.10+vmware.1   10.0.10.9    <none>        VMware Photon OS/Linux   4.19.154-3.ph3-esx    containerd://1.3.4
management-workers-fd557-f4fd84dc6-rfgg5   Ready    <none>   20h   v1.18.10+vmware.1   10.0.10.8     <none>        VMware Photon OS/Linux   4.19.154-3.ph3-esx    containerd://1.3.4

# for each node, with the same password:
ssh vmware-system-user@10.0.10.7
vmware-system-user@management-workers-fd557-f4fd84dc6-rfgg5 [ ~ ]$ sudo systemctl restart containerd
vmware-system-user@management-workers-fd557-f4fd84dc6-rfgg5 [ ~ ]$ exit
```
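The restart step can be scripted instead of done node by node. The helper below is a hypothetical sketch, not part of the TKG tooling; the `SSH_RUNNER` override exists only so the loop can be dry-run:

```shell
# restart_containerd: run "systemctl restart containerd" on every node IP
# passed as an argument. SSH_RUNNER defaults to ssh; override it (e.g. with
# "echo") for a dry run. This is an illustrative helper, not official tooling.
restart_containerd() {
  runner="${SSH_RUNNER:-ssh}"
  for ip in "$@"; do
    "$runner" "vmware-system-user@$ip" "sudo systemctl restart containerd"
  done
}

# Real usage against the workload cluster (sshpass could feed in the
# password extracted from the management-ssh-password secret):
# restart_containerd $(kubectl get nodes \
#   -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
```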
Using your CA with the TKG extensions
The first stop is the installation of cert-manager. I chose to create a CA issuer at cluster level – I only need one cluster-wide – so first I created a TLS secret within the cert-manager namespace:
```shell
kubens cert-manager   # switch namespace
kubectl create secret tls niceneasy-ca --cert=/home/daniele/CA/ca.niceneasy.ch.crt --key=/home/daniele/CA/ca.niceneasy.ch.key
```
The next step is to create a manifest for the ClusterIssuer:
```yaml
apiVersion: cert-manager.io/v1beta1
kind: ClusterIssuer
metadata:
  name: niceneasy-ca
spec:
  ca:
    secretName: niceneasy-ca
```
This uses an older API version of cert-manager; it might move from v1beta1 to v1 soon.
A ClusterIssuer can be referenced from any namespace; you just create a Certificate manifest and use it:
```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: prometheus-tls-cert
  namespace: tanzu-system-monitoring
spec:
  secretName: prometheus-tls
  duration: 87600h
  renewBefore: 360h
  organization:
  - Project Prometheus
  commonName: prometheus
  isCA: false
  keySize: 2048
  keyAlgorithm: rsa
  keyEncoding: pkcs1
  usages:
  - server auth
  - client auth
  dnsNames:
  - prometheus.tanzu.ne.local
  - notary.prometheus.tanzu.ne.local
  ipAddresses: []
  issuerRef:
    name: niceneasy-ca
    kind: ClusterIssuer
    group: cert-manager.io
```
Adapting the Services to use the CA
Apart from the adaptations I already documented in the previous parts of my series on TKG extensions, you now need to throw out the generation of self-signed certificates for each component. I did this by rendering the YAML into a file, copying all generated Certificate manifests and changing the issuer to my ClusterIssuer. I don't think I am allowed to put all the files in a public Git repo, so I exported a PDF with all changes from a pull request against the original.
Testing the Harbor Registry
If you have successfully deployed the registry and imported the CA into the OS of your workstation, you should see a nice effect:
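Before pointing anything at the registry, it is worth verifying locally that a server certificate actually chains to your CA; `openssl verify` does exactly that. The sketch below generates a throwaway CA and leaf certificate purely to demonstrate the check (all names are hypothetical; against the real setup you would verify Harbor's certificate with your own ca.crt):

```shell
#!/bin/sh
# Demo of the chain check using a throwaway CA (names are hypothetical).
set -e
tmp=$(mktemp -d)
# 1. Self-signed demo CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
# 2. Server key + CSR, signed by the demo CA.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=harbor.ne.local" -keyout "$tmp/server.key" -out "$tmp/server.csr" 2>/dev/null
openssl x509 -req -in "$tmp/server.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -days 1 -out "$tmp/server.crt" 2>/dev/null
# 3. The actual check you would run against the real Harbor certificate:
result=$(openssl verify -CAfile "$tmp/ca.crt" "$tmp/server.crt")
echo "$result"
```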

Now let's build the latest version of the kube-state-metrics component deployed with Prometheus. You will need Git and Docker for the full process.
Check out the Git repo, switch to tag v2.0.0-beta and start the build in the repository root with make container.
```shell
daniele@ubuntu-dt:~/dev/kube-state-metrics$ make container
docker build --pull -t gcr.io/k8s-staging-kube-state-metrics/kube-state-metrics-amd64:v2.0.0-beta --build-arg GOVERSION=1.15.3 --build-arg GOARCH=amd64 .
Sending build context to Docker daemon  58.03MB
Step 1/12 : ARG GOVERSION=1.15
Step 2/12 : FROM golang:${GOVERSION} as builder
1.15.3: Pulling from library/golang
Digest: sha256:1ba0da74b20aad52b091877b0e0ece503c563f39e37aa6b0e46777c4d820a2ae
Status: Image is up to date for golang:1.15.3
 ---> 4a581cd6feb1
Step 3/12 : ARG GOARCH
 ---> Using cache
 ---> 03ddeef2da52
Step 4/12 : ENV GOARCH=${GOARCH}
 ---> Using cache
 ---> c771d63d96f0
Step 5/12 : WORKDIR /go/src/k8s.io/kube-state-metrics/
 ---> Using cache
 ---> baeb20636586
Step 6/12 : COPY . /go/src/k8s.io/kube-state-metrics/
 ---> 93b1f7fcd116
Step 7/12 : RUN make build-local
 ---> Running in 0dcb6e9ad209
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -ldflags "-s -w -X k8s.io/kube-state-metrics/v2/pkg/version.Release=v2.0.0-beta -X k8s.io/kube-state-metrics/v2/pkg/version.Commit=baacc7bf -X k8s.io/kube-state-metrics/v2/pkg/version.BuildDate=2021-01-26T17:57:37Z" -o kube-state-metrics
Removing intermediate container 0dcb6e9ad209
 ---> dc4fb7906cea
Step 8/12 : FROM gcr.io/distroless/static:latest
latest: Pulling from distroless/static
Digest: sha256:b579e0c8d9ef85e3d2bb184af02cd0ea9d90b6d3b02adaa8bacea1cb346fab51
Status: Image is up to date for gcr.io/distroless/static:latest
 ---> 7b715d870c4e
Step 9/12 : COPY --from=builder /go/src/k8s.io/kube-state-metrics/kube-state-metrics /
 ---> 7e9baa39d6a3
Step 10/12 : USER nobody
 ---> Running in 54322a871871
Removing intermediate container 54322a871871
 ---> bcd9a42a1d6d
Step 11/12 : ENTRYPOINT ["/kube-state-metrics", "--port=8080", "--telemetry-port=8081"]
 ---> Running in ca41d235d3b9
Removing intermediate container ca41d235d3b9
 ---> a31872f3d726
Step 12/12 : EXPOSE 8080 8081
 ---> Running in 9e7e94a034cd
Removing intermediate container 9e7e94a034cd
 ---> 0649e55685d9
Successfully built 0649e55685d9
Successfully tagged gcr.io/k8s-staging-kube-state-metrics/kube-state-metrics-amd64:v2.0.0-beta
```
Now you have a new image on your Docker host. Let's tag it for the new repository:
```shell
daniele@ubuntu-dt:~/dev/kube-state-metrics$ docker tag 0649e55685d9 harbor.ne.local/library/kube-state-metrics-amd64:v2.0.0-beta
daniele@ubuntu-dt:~/dev/kube-state-metrics$ docker login harbor.ne.local
Username: daniele
Password:
WARNING! Your password will be stored unencrypted in /home/daniele/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
daniele@ubuntu-dt:~/dev/kube-state-metrics$ docker push harbor.ne.local/library/kube-state-metrics-amd64:v2.0.0-beta
The push refers to repository [harbor.ne.local/library/kube-state-metrics-amd64]
33ffa9084a05: Pushed
728501c5607d: Layer already exists
v2.0.0-beta: digest: sha256:bd0991b944af386f3940b83a4b183dc327627affa733552466f4932bfccc5158 size: 738
```
I took the ID of the newly created image and tagged it with the FQDN of my repo ( harbor.ne.local ) – /library is the name of the project already created in Harbor. Then just log in with the FQDN. If you have imported the CA into your system, this should work without warnings – otherwise you'll get a certificate error here. If the image is pushed successfully, you should see it in the project library:

Now you have to enable Kubernetes to log in to your registry. This is done by creating a secret. During the CLI login, a configuration file containing your basic auth was created. Let's use this to create an image pull secret:
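For reference, the payload kubectl packs into that secret is small: a JSON map from the registry host to a base64-encoded user:password pair. The function below is a hypothetical sketch of what the .dockerconfigjson file essentially contains (credentials purely illustrative; the real file from docker login may carry extra fields):

```shell
# make_docker_config: build a minimal .dockerconfigjson payload.
# Arguments: registry, user, password (illustrative values only).
make_docker_config() {
  registry="$1"; user="$2"; password="$3"
  # The "auth" field is base64("user:password").
  auth=$(printf '%s:%s' "$user" "$password" | base64)
  printf '{"auths":{"%s":{"auth":"%s"}}}' "$registry" "$auth"
}

# Example: make_docker_config harbor.ne.local daniele 's3cret'
```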
```shell
daniele@ubuntu-dt:~/dev/kube-state-metrics$ kubens tanzu-system-monitoring
Context "management" modified.
Active namespace is "tanzu-system-monitoring".
daniele@ubuntu-dt:~/dev/kube-state-metrics$ kubectl create secret generic harbor-creds --from-file=.dockerconfigjson=/home/daniele/.docker/config.json --type=kubernetes.io/dockerconfigjson
secret/harbor-creds created
```
Now you have to add this to the deployment of kube-state-metrics:
```yaml
spec:
  serviceAccountName: prometheus-kube-state-metrics
  imagePullSecrets:
  - name: harbor-creds
  containers:
  - name: prometheus-kube-state-metrics
    image: "harbor.ne.local/library/kube-state-metrics-amd64:v2.0.0-beta"
    imagePullPolicy: "IfNotPresent"
```
Now just redeploy with kubectl apply -f 20-kube-state-metrics-deployment.yaml and Kubernetes should be able to work with this registry:
```shell
daniele@ubuntu-dt:~/dev/tkg-extensions/monitoring/prometheus/base-files$ kubectl describe pod prometheus-kube-state-metrics-7665dd94d6-zhmdk
...
Events:
  Type    Reason     Age        From                                               Message
  ----    ------     ----       ----                                               -------
  Normal  Scheduled  <unknown>  default-scheduler                                  Successfully assigned tanzu-system-monitoring/prometheus-kube-state-metrics-7665dd94d6-zhmdk to management-workers-fd557-f4fd84dc6-qmz64
  Normal  Pulling    25s        kubelet, management-workers-fd557-f4fd84dc6-qmz64  Pulling image "harbor.ne.local/library/kube-state-metrics-amd64:v2.0.0-beta"
  Normal  Pulled     24s        kubelet, management-workers-fd557-f4fd84dc6-qmz64  Successfully pulled image "harbor.ne.local/library/kube-state-metrics-amd64:v2.0.0-beta"
  Normal  Created    23s        kubelet, management-workers-fd557-f4fd84dc6-qmz64  Created container prometheus-kube-state-metrics
  Normal  Started    23s        kubelet, management-workers-fd557-f4fd84dc6-qmz64  Started container prometheus-kube-state-metrics
```