Deploy Kubernetes Cluster on CentOS Stream with Containerd
If you want to build a fault-tolerant, auto-scaling cloud infrastructure, then Kubernetes is undoubtedly your best choice.
It’s an open-source platform for running containerized applications inside a self-healing, distributed cluster. The true beauty of Kubernetes lies in its simplicity; you can create a Kubernetes cluster for hundreds of applications, distributed across tens of servers, using a few YAML configuration files.
Kubernetes does all the heavy lifting itself. Once you tell it how you want your infrastructure to function, it continuously works to keep the actual state of the cluster matching that desired state. It can restart crashed pods, shift workloads from a dead node to a healthy one, and scale applications by spawning new pods as necessary.
In the following article, we will share a complete guide to installing Kubernetes across two CentOS Stream 8 machines.
What is a Kubernetes cluster?
A Kubernetes cluster is made up of at least two nodes: one master and one worker. Production clusters usually run several masters and several worker nodes.
As the name indicates, the master is the overseer of all administration and management needs of the cluster. It’s responsible for scheduling, maintaining cluster state, replacing crashed pods, and distributing workload across worker nodes.
The worker nodes are the true warriors of a Kube cluster. They house the pods that run the user applications. A Kubernetes worker node has the following components:
kube-proxy
A network proxy that maintains network rules on each node, enabling communication to the pods from inside or outside the cluster.
kubelet
Think of kubelet as the agent that starts the pods, manages their lifecycle, and reports their state to the master node.
Container Runtime
The piece of software responsible for spinning up containers, and allowing them to interface with the operating system.
It’s worth noting that Docker was the primary container runtime until its support was deprecated in Kubernetes version 1.20. In version 1.24, the Kubernetes team officially removed the dockershim component from the kubelet.
It’s recommended to use either containerd or CRI-O as the runtime for Kubernetes.
However, if you still prefer to use the Docker Engine as your runtime, you can set up cri-dockerd, which provides an interface between Docker and the Kubernetes Container Runtime Interface (CRI).
In this article, we will be using containerd as the container runtime.
Step 1. Install containerd
Before we start installing Kubernetes components, we need to install containerd on both machines.
Follow these steps:
Configure prerequisites
Load the two required kernel modules and add a configuration file so that they are loaded automatically at boot time.
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
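You can optionally confirm that both modules are loaded before moving on:
lsmod | grep -e overlay -e br_netfilter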
Set the sysctl networking parameters required by Kubernetes.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
Apply the above settings without restarting.
sudo sysctl --system
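Optionally, verify that the new values took effect (each should report 1):
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables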
Install containerd
Add the official Docker repository.
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
Update your system and install the containerd package.
sudo dnf update
sudo dnf install -y containerd
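Depending on the package version, containerd may not be started automatically after installation; it does no harm to enable and start it explicitly:
sudo systemctl enable --now containerd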
Create a configuration directory for containerd and generate the default configuration file.
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
Set cgroupdriver to systemd
kubelet requires the cgroup driver to be set to systemd. To change it, edit the following file:
sudo vi /etc/containerd/config.toml
Find the following section:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
Change the value of SystemdCgroup to true. Once you are done, the section in your file should match the following:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] BinaryName = "" CriuImagePath = "" CriuPath = "" CriuWorkPath = "" IoGid = 0 IoUid = 0 NoNewKeyring = false NoPivotRoot = false Root = "" ShimCgroup = "" SystemdCgroup = true
Restart containerd
To apply the changes made in the last step, restart containerd.
sudo systemctl restart containerd
Verify that containerd is running using this command:
ps -ef | grep containerd
If it’s indeed running, you should see output similar to this:
root 63087 1 0 13:16 ? 00:00:00 /usr/bin/containerd
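You can also ask systemd directly for the service status:
sudo systemctl status containerd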
Step 2. Install Kubernetes
At this point, we are ready to install Kubernetes on our machines. Repeat all of the following steps on both machines. Let’s begin.
Install curl
sudo dnf install curl
Add the Kubernetes repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install Kubernetes packages
Update your machines and then install the kubelet, kubeadm, and kubectl packages.
sudo dnf update
sudo dnf install -y kubelet kubeadm kubectl
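The upstream kubeadm installation guide also sets SELinux to permissive mode on CentOS/RHEL hosts so that containers can access the host filesystem; if you run into SELinux-related errors later, you can do the same on both machines:
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config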
Set hostnames for master and worker
Set hostnames on both machines.
On the master node:
sudo hostnamectl set-hostname "master-node"
exec bash
And on the worker node:
sudo hostnamectl set-hostname "worker-node"
exec bash
Make sure the correct hostnames are entered in the /etc/hosts file of both nodes. Remember to replace the IPs below with those of your own machines.
cat <<EOF | sudo tee -a /etc/hosts
160.129.148.40 master-node
153.139.228.122 worker-node
EOF
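A quick sanity check is to ping each node by hostname from the other machine:
ping -c 2 master-node
ping -c 2 worker-node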
Configure firewalls
Add the following firewall rules on the master node:
sudo ufw allow 6443/tcp
sudo ufw allow 2379/tcp
sudo ufw allow 2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10251/tcp
sudo ufw allow 10252/tcp
sudo ufw allow 10255/tcp
sudo ufw reload
Add these rules to the worker node:
sudo ufw allow 10251/tcp
sudo ufw allow 10255/tcp
sudo ufw reload
If you don’t have ufw installed, you can install and enable it using these commands:
sudo dnf install epel-release -y
sudo dnf install ufw -y
sudo ufw enable
If you wish, you can also add similar firewall rules using a different tool.
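For example, on a stock CentOS Stream install that ships with firewalld rather than ufw, roughly equivalent rules for the master node would look like this:
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250-10252/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --reload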
Turn off swap
Turn swap off for both machines.
sudo swapoff -a
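Note that swapoff only disables swap until the next reboot; to make the change permanent, also comment out the swap entry in /etc/fstab, for example:
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab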
Enable kubelet
Enable the kubelet service on both machines.
sudo systemctl enable kubelet
Step 3. Deploy the cluster
Initialise cluster
We are finally ready to initialise our cluster. Execute this command on the master node:
sudo kubeadm init
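Since we will use Flannel as the pod network in a later step, you can optionally pass Flannel’s default pod CIDR to kubeadm at this point; if you omit it, make sure the CIDR in your Flannel manifest matches your cluster configuration:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16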
Wait a few minutes for it to finish. A successful initialisation will yield an output similar to this:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 102.130.122.165:6443 --token uh9zuw.gy0m40a90sd4o3kl \
    --discovery-token-ca-cert-hash sha256:24490dd585768bc80eb9943432d6beadb3df40c9865e9cff03659943b57585b2
Copy the kubeadm join command from the end of the output and save it in a safe place. We will use this command later to allow the worker node to join the cluster.
If you forget to copy the command, or can’t find it anymore, you can regenerate it by using the following command:
sudo kubeadm token create --print-join-command
Create and claim directory
As indicated by the above output, we need to create a kubeconfig directory and take ownership of it. Run these commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Deploy pod network to cluster
Next up, we need to deploy a pod network to our cluster.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Expect an output like this:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Verify that the master node is ready now:
kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
master-node   Ready    control-plane   2m50s   v1.24.1
At this stage, it’s also recommended to check whether all the pods are running properly.
kubectl get pods --all-namespaces
You should get an output like this:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-5r6zx 0/1 Running 0 22m
kube-system coredns-64897985d-zplbs 0/1 Running 0 22m
kube-system etcd-master-node 1/1 Running 0 22m
kube-system kube-apiserver-master-node 1/1 Running 0 22m
kube-system kube-controller-manager-master-node 1/1 Running 0 22m
kube-system kube-flannel-ds-brncs 0/1 Running 0 22m
kube-system kube-flannel-ds-vwjgc 0/1 Running 0 22m
kube-system kube-proxy-bvstw 1/1 Running 0 22m
kube-system kube-proxy-dnzmw 1/1 Running 0 20m
kube-system kube-scheduler-master-node 1/1 Running 0 22m
Add worker node
Now is the time to move to our worker node. Run your own kubeadm join command from Step 3 on the worker node to make it join the cluster.
sudo kubeadm join 102.130.122.165:6443 --token uh9zuw.gy0m40a90sd4o3kl \
    --discovery-token-ca-cert-hash sha256:24490dd585768bc80eb9943432d6beadb3df40c9865e9cff03659943b57585b2
Expect the output to have the following lines at the end:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Switch back to the master node and run this command to confirm that the worker has indeed joined the cluster:
kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
master-node   Ready    control-plane,master   3m40s   v1.24.1
worker-node   Ready    <none>                 83s     v1.24.1
Set the role for your worker node.
kubectl label node worker-node node-role.kubernetes.io/worker=worker
To verify that the role was set:
kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
master-node   Ready    control-plane,master   5m12s   v1.24.1
worker-node   Ready    worker                 2m55s   v1.24.1
That’s it! Our 1-master-1-worker Kubernetes cluster is ready!
To add more nodes, simply repeat this step on other machines.
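As a final sanity check, you can deploy a small test workload from the master node, for example an nginx Deployment exposed through a NodePort Service (the names here are just an example):
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx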