K8s on Debian 12

Install Debian 12

or install Debian 11.7 and upgrade to 12

Setup

3 Nodes

192.168.100.151     k8s-master
192.168.100.152     k8s-worker1
192.168.100.153     k8s-worker2
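kubeadm is pointed at the name k8s-master further down, so each of these names must resolve on every node. If there is no DNS for them, a sketch of the usual /etc/hosts approach (addresses taken from the table above):

```shell
# On every node: map the cluster names to their IPs (skip if DNS already resolves them)
cat << EOF >> /etc/hosts
192.168.100.151 k8s-master
192.168.100.152 k8s-worker1
192.168.100.153 k8s-worker2
EOF
```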

Locale

export LC_CTYPE=en_US.UTF-8
export LC_ALL=en_US.UTF-8

Kubernetes

https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
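The curl above only downloads the binary into the current directory; the linked kubectl docs also show verifying the checksum and installing it into the PATH. This is optional here, since kubelet/kubeadm/kubectl are installed via apt further down:

```shell
# Optional: verify the binary against its published checksum
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check

# Install into the PATH
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```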

Swap Off

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
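The sed expression comments out every fstab line containing " swap " so swap stays off after a reboot. A dry run on a scratch file (with a hypothetical sample entry, not your real /etc/fstab) shows the effect:

```shell
# Dry run of the fstab edit on a scratch file
demo=/tmp/fstab-demo
printf '/dev/sda2 none swap sw 0 0\n' > "$demo"
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$demo"
cat "$demo"   # prints "#/dev/sda2 none swap sw 0 0"
```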

Install FW

Allow SSH before enabling the firewall, or a remote session can lock itself out:

apt-get install ufw
ufw allow 22/tcp
ufw enable

Master

ufw allow 22/tcp
ufw allow 6443/tcp
ufw allow 2379/tcp
ufw allow 2380/tcp
ufw allow 10250/tcp
ufw allow 10251/tcp
ufw allow 10252/tcp
ufw allow 10255/tcp
ufw reload

Worker

ufw allow 22/tcp
ufw allow 10250/tcp
ufw allow 30000:32767/tcp
ufw reload

Containerd Prerequisites

cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

cat << EOF > /etc/sysctl.d/99-kubernetes-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system

Install Containerd

apt update
apt -y install containerd

Adapt Containerd to Kubernetes

containerd config default > /etc/containerd/config.toml 2>/dev/null

Update config.toml

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

Restart Containerd

systemctl enable containerd
systemctl restart containerd

Add Kubernetes Repository

apt install gnupg gnupg2 curl software-properties-common -y
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /etc/apt/trusted.gpg.d/google.gpg
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

Install Kubeadm, Kubelet, Kubectl

apt update
apt install kubelet kubeadm kubectl -y
apt-mark hold kubelet kubeadm kubectl

Kube Init on MASTER

kubeadm init --control-plane-endpoint=k8s-master
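kubeadm init ends by printing a kubeadm join command for the workers; if that output is gone, a fresh one can be generated on the master (the token and hash it prints are newly created):

```shell
# On the master: print a fresh join command, then run its output on each worker
kubeadm token create --print-join-command
```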

Downgrade to 1.26

Version 1.27 does not seem production-ready yet, so you may have to downgrade :(

Uninstall 1.27

Unhold the packages, uninstall them, and clean up:

apt-mark unhold kubelet kubeadm kubectl
dpkg --remove kubelet kubeadm kubectl
apt autoremove

Show Package Versions

List all available versions and pick the latest 1.26 release:

apt-cache showpkg kubelet

Install v1.26.4

Reinstall and pin the version:

v="1.26.4-00"
apt install kubelet=${v} kubeadm=${v} kubectl=${v} -y
apt-mark hold kubelet kubeadm kubectl

Install Calico Pod Network Addon

Init New Cluster
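If the 1.27 cluster from the first kubeadm init is still present, reset it before initializing again, otherwise init fails on the existing state (this wipes the old cluster):

```shell
# Tear down the previous cluster state on the node
sudo kubeadm reset -f
```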

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Configure kubectl for your User

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the Tigera Calico operator

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/tigera-operator.yaml

Install Calico

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/custom-resources.yaml

Check Status

watch kubectl get pods -n calico-system

result

Every 2.0s: kubectl get pods -n calico-system

NAME                                       READY   STATUS              RESTARTS      AGE
calico-kube-controllers-66bb548454-c9h9h   0/1     ContainerCreating   0             105s
calico-node-vmrls                          0/1     Running             0             105s
calico-node-xtfds                          0/1     PodInitializing     0             105s
calico-typha-bb96cdfbc-4hlpg               0/1     CrashLoopBackOff    1 (12s ago)   105s
csi-node-driver-8djtz                      0/2     ContainerCreating   0             105s
csi-node-driver-fwqxx                      2/2     Running             2 (4s ago)    105s

Remove Taints

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-
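Removing these taints lets normal pods schedule on the control-plane node as well, which suits a small 3-node setup. Whether any taints remain can be checked with:

```shell
# Should show "Taints: <none>" for every node once removed
kubectl describe nodes | grep -i taints
```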

Update FW

ufw allow 179/tcp
ufw allow 4789/udp
ufw allow 51820/udp
ufw allow 51821/udp
ufw reload

Get Nodes

kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
worker1     Ready    <none>          21m   v1.26.4
worker2     Ready    <none>          14m   v1.26.4
master      Ready    control-plane   22m   v1.26.4

Troubles

Crashing Pods

kubectl get pods -A

result

NAMESPACE         NAME                                       READY   STATUS                  RESTARTS         AGE
calico-system     calico-kube-controllers-66bb548454-c9h9h   0/1     CrashLoopBackOff        4 (97s ago)      12m
calico-system     calico-node-vmrls                          0/1     Running                 1 (8m50s ago)    12m
calico-system     calico-node-xtfds                          0/1     Init:CrashLoopBackOff   7 (4m14s ago)    12m
calico-system     calico-typha-bb96cdfbc-4hlpg               1/1     Running                 6 (3m18s ago)    12m
calico-system     csi-node-driver-8djtz                      2/2     Running                 2 (28s ago)      12m
calico-system     csi-node-driver-fwqxx                      2/2     Running                 7 (9m21s ago)    12m
kube-system       coredns-5d78c9869d-bwjpj                   0/1     Running                 0                15m
kube-system       coredns-5d78c9869d-zh2rt                   0/1     CrashLoopBackOff        2 (23s ago)      15m
kube-system       etcd-k8s-01                                1/1     Running                 48 (111s ago)    16m
kube-system       kube-apiserver-k8s-01                      1/1     Running                 42 (2m49s ago)   16m
kube-system       kube-controller-manager-k8s-01             0/1     CrashLoopBackOff        8 (52s ago)      16m
kube-system       kube-proxy-5bp2t                           0/1     CrashLoopBackOff        3 (14s ago)      12m
kube-system       kube-proxy-grhdw                           0/1     CrashLoopBackOff        5 (18s ago)      15m
kube-system       kube-scheduler-k8s-01                      1/1     Running                 48 (4m14s ago)   16m
tigera-operator   tigera-operator-58f95869d6-nm6lq           0/1     Error                   8 (67s ago)      13m
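For pods stuck in CrashLoopBackOff like the ones above, the usual starting points are the pod events, the previous container's logs, and the kubelet journal (pod names taken from the listing above):

```shell
# Events for a crashing pod
kubectl -n calico-system describe pod calico-kube-controllers-66bb548454-c9h9h
# Logs of the previous (crashed) container instance
kubectl -n kube-system logs kube-proxy-5bp2t --previous
# Kubelet logs on the affected node
journalctl -u kubelet --since "15 min ago"
```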
