Kubernetes on Debian Buster

This is work in progress: set up GitLab in a Kubernetes (k8s) test cluster using the Helm chart:

Set up k8s using kubespray

Set up a new cluster with 2 nodes:


[ -d kubespray ] ||
	git clone https://github.com/kubernetes-sigs/kubespray.git ./kubespray
cd ./kubespray

[ -d inventory/univention ] ||
	cp -r inventory/sample inventory/univention

declare -a IPS=()  # fill in the IP addresses of the nodes
CONFIG_FILE=inventory/univention/hosts.yml \
	python3 contrib/inventory_builder/inventory.py "${IPS[@]}"

vi inventory/univention/group_vars/all/all.yml
vi inventory/univention/group_vars/k8s-cluster/k8s-cluster.yml
# kubeconfig_localhost: true
# kubectl_localhost: true
# kube_proxy_strict_arp: true
vi inventory/univention/group_vars/k8s-cluster/addons.yml
# helm_enabled: true
# metrics_server_enabled: true
# metrics_server_kubelet_insecure_tls: true
# metrics_server_kubelet_preferred_address_types: "InternalIP"

ansible-playbook \
	-i inventory/univention/hosts.yml \
	--become --become-user=root \
	cluster.yml
kubectl -n kube-system get deployments
cd ..

Configure k8s for GitLab

(This is from https://docs.gitlab.com/ee/user/project/clusters/):

Get the k8s API URL:

kubectl cluster-info |
	awk '/Kubernetes master.*http/ {print $NF}'
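
The awk filter above can be wrapped as a small reusable helper. A sketch, assuming the URL is the last whitespace-separated field of the matching line (newer kubectl prints "control plane" instead of "master"):

```shell
# Helper: print the API server URL from `kubectl cluster-info` output.
api_url() {
	awk '/Kubernetes (master|control plane).*http/ {print $NF}'
}
# Usage (requires a running cluster):
#   kubectl cluster-info | api_url
```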

Get the CA certificate:

kubectl get secrets
kubectl get secret default-token-<secret name> -o jsonpath="{['data']['ca\.crt']}" |
	base64 --decode
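
The two steps can be combined so the secret name does not have to be copied by hand. A sketch, assuming token secrets for the default service account follow the usual "default-token-…" naming pattern:

```shell
# Helper: pick the first "default-token-…" secret from `kubectl get secrets -o name`.
pick_default_token() {
	sed -ne 's!^secret/\(default-token-[a-z0-9]*\)$!\1!p' | head -n1
}
# Usage (requires a running cluster):
#   SECRET="$(kubectl get secrets -o name | pick_default_token)"
#   kubectl get secret "$SECRET" -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
```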

Create cluster-admin account:

  1. Create a file called gitlab-admin-service-account.yaml with contents:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: gitlab
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: gitlab-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: gitlab
        namespace: kube-system

(see contrib/misc/clusteradmin-rbac.yml)

  2. Apply the service account and cluster role binding to your cluster:

    kubectl apply -f gitlab-admin-service-account.yaml
  3. Retrieve the token for the gitlab-admin service account:

    kubectl -n kube-system describe secret \
     $(kubectl -n kube-system get secret | awk '/^gitlab/{print $1}') |
     sed -ne 's/^token: *//p'

Set up GitLab Runner

Create a customized GitLab Runner, which includes our custom SSL certificate used by our internal Docker registry. Due to GitLab Issue 3968 we also have to set up the same certificate for the Runner to be able to access the GitLab master instance.

[ -d gitlab-runner ] ||
	mkdir gitlab-runner
cd gitlab-runner

[ -s ucs-root-ca.crt ] ||
	wget --no-check-certificate https://nissedal.knut.univention.de/ucs-root-ca.crt
kubectl create secret generic ca --from-file=ucs-root-ca.crt

vi values.yaml
# gitlabUrl: https://git.knut.univention.de/
# runnerRegistrationToken: "XXXXXXXXXXXXXXXXXXXX"
# certsSecretName: ca
# rbac:
#   create: true
# runners:
#   image: docker-registry.knut.univention.de/phahn/ucs-minbase:latest
#   imagePullPolicy: "always"
#   locked: false
#   tags: "docker"
#   privileged: true
# envVars:
#   - name: CA_CERTIFICATES_PATH  # name lost here; assumed from the GitLab Runner docs
#     value: /home/gitlab-runner/.gitlab-runner/certs/ucs-root-ca.crt
#   - name: CONFIG_FILE
#     value: /home/gitlab-runner/.gitlab-runner/config.toml

helm repo add gitlab https://charts.gitlab.io
helm init --client-only
helm install --name gitlab-runner -f values.yaml gitlab/gitlab-runner
helm status gitlab-runner

# NAMESPACE='kube-system'
helm list
helm upgrade -f values.yaml gitlab-runner gitlab/gitlab-runner --version 0.14.0
helm upgrade -f values.yaml gitlab-runner ~/REPOS/VIRT/gitlab-runner

cd ..


Missing Service account

ERROR: Job failed (system failure): pods is forbidden: User "system:serviceaccount:default:default" cannot create resource "pods" in API group "" in the namespace "default"

Enable rbac.create=true in values.yaml so Helm creates the role automatically.

Docker image not pulled

ERROR: Job failed: image pull failed: Back-off pulling image "docker-registry.knut.univention.de/phahn/ucs-minbase:latest"

The SSL CA certificate is missing on the host system, where dockerd tries to pull the image.

cd /usr/local/share/ca-certificates
[ -s ucs-root-ca.crt ] ||
	wget --no-check-certificate https://nissedal.knut.univention.de/ucs-root-ca.crt
update-ca-certificates
systemctl restart docker.service
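
A defensive variant: before trusting the download, check that the file really is a PEM certificate. The helper below is a sketch, not part of the original setup; the CA only takes effect once update-ca-certificates has rebuilt the bundle and dockerd has been restarted.

```shell
# Sketch: sanity-check a downloaded CA file before adding it to the trust store.
is_pem_cert() {
	grep -q -- '-----BEGIN CERTIFICATE-----' "$1"
}
# Usage (as root on the Docker host):
#   is_pem_cert /usr/local/share/ca-certificates/ucs-root-ca.crt &&
#       update-ca-certificates &&
#       systemctl restart docker.service
```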


You need a bearer token, which you can retrieve via kubectl:

(see contrib/misc/clusteradmin-rbac.yml)

kubectl get clusterrolebindings
# ...
# tiller                                                 110d
# tiller-admin                                           110d
kubectl describe serviceaccount tiller -n kube-system
# Mountable secrets:   tiller-token-wshmm
# Tokens:              tiller-token-wshmm
kubectl describe secret tiller-token-wshmm -n kube-system
# token:      ...
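
The three commands above can be chained. A sketch that parses the "Tokens:" line of the service-account description and then the "token:" line of the secret (field names as shown in the output above):

```shell
# Helpers: extract the token secret name and the bearer token value
# from `kubectl describe` output.
token_secret_name() {
	sed -ne 's/^Tokens: *//p' | head -n1
}
bearer_token() {
	sed -ne 's/^token: *//p'
}
# Usage (requires a running cluster with tiller deployed):
#   S="$(kubectl describe serviceaccount tiller -n kube-system | token_secret_name)"
#   kubectl describe secret "$S" -n kube-system | bearer_token
```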

Load balancer

By default k8s does not provide a load balancer implementation: many cloud providers offer that service out of the box. If k8s runs on bare-metal servers or in your own virtual machines, you must provide it yourself. One option is MetalLB.

ansible-playbook \
	-i inventory/univention/hosts.yml \
	--become --become-user=root \
	-e metallb.ip_range= \
	contrib/metallb/metallb.yml

Infinite firewall rule spamming

Because of Issue 82361 k8s keeps adding new firewall rules to the DROP chain, which slows down the system. For Debian Buster the iptables program must be switched back to the legacy version:

update-alternatives --set iptables /usr/sbin/iptables-legacy
iptables -F DROP
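
To confirm the switch took effect, the selected alternative can be read back. A sketch parsing the "Value:" line of update-alternatives --query output (field name as on Debian):

```shell
# Helper: print the currently selected alternative from
# `update-alternatives --query <name>` output.
selected_alternative() {
	sed -ne 's/^Value: *//p'
}
# Usage:
#   update-alternatives --query iptables | selected_alternative
#   # should print /usr/sbin/iptables-legacy after the switch above
```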

Single node cluster and upgrades

kubespray fails to upgrade a single-node cluster as it drains and cordons the single node. Essential services like CoreDNS are then no longer running and the update fails in roles/kubernetes/master/tasks/kubeadm-upgrade.yml. You need to explicitly disable that on the command line:

ansible-playbook -b -i inventory/univention/hosts.yml upgrade-cluster.yml --skip-tags pre-upgrade,post-upgrade # -D -e kube_version=v1.23.7

Also see kubeadm upgrade.

  1. Update inventory/univention/group_vars/k8s-cluster/k8s-cluster.yml:

     kube_version: v1.23.7

Multiple versions

Check kubelet_checksums around roles/download/defaults/main.yml:186 to see which versions are supported by kubespray.
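
A quick way to list the candidate versions without opening the file. This assumes the checksum map keys look like "v1.23.7: <sha256>"; the exact layout of kubespray's defaults file may differ between releases:

```shell
# Sketch: extract version-like YAML keys (e.g. "v1.23.7:") from the
# kubelet_checksums section of kubespray's download defaults.
list_versions() {
	sed -ne 's/^ *\(v[0-9][0-9.]*\): .*/\1/p' | sort -u
}
# Usage:
#   list_versions <roles/download/defaults/main.yml
```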

Written on August 28, 2019