A Guide to Building a Kubernetes Cluster with Raspberry Pi’s

Alexander Sniffin
Jul 5

A few years ago, I set up a Kubernetes Cluster on Raspberry Pi’s. At the time, the ARM architecture of Raspberry Pi’s posed some challenges. Finding applications that supported ARM was a tough task which often led me to having to manually build my own applications and containers for anything I wanted to use.

However, since then, things have significantly improved! The advent of a new 64-bit Raspberry Pi OS and the growing popularity of ARM in the industry, largely due to its cost-effectiveness for cloud deployments, have made building a Raspberry Pi cluster much simpler. I decided to rebuild my cluster, updating it to a 64-bit OS and the latest versions of both Kubernetes and Docker.

I’ve put together a guide on how you can bootstrap your own Raspberry Pi Kubernetes cluster. I hope it proves useful in your journey of building a home cluster! 🚀


You’ll need some hardware for setting up the cluster; this includes:

  • Some Raspberry Pis (I used the 4 Model B)
  • An SD card per Pi
  • Ethernet Cables
  • A router and/or network switch
  • USB hub
  • (optional) A case

This guide was written for Kubernetes 1.26.6, Docker 24.0.2 and Raspberry Pi OS Lite (64-bit) Bullseye.

OS Setup

For the first step, we’ll need to set up the OS on all of the Pis. Without an OS on the SD card, the Raspberry Pi has nothing to boot.

Download the Raspberry Pi Imager, a handy application for downloading OS images and flashing them to SD cards. For this guide we will use the 64-bit headless version of Raspberry Pi OS (based on Debian).

This will work with the latest Raspberry Pi models, but still check compatibility before you flash your SD cards.

Raspberry Pi Imager

Choose your SD card and begin flashing it with the OS. Repeat this for each SD card until they’re all complete.

Enable SSH and Create a Default User

You’ll need to enable SSH, as it’ll allow you to remotely configure each Pi.

Create an empty file named ssh (without any extension) in the boot partition of the SD card to enable SSH.

For setting up the user to log in with, create a file called userconf in the same boot partition of the SD card. This file should contain a single line of text in the form {name}:{encrypted-password}. I used node for my login user, but use whatever you like.

To generate the encrypted-password, run the following command with OpenSSL:

echo '{password}' | openssl passwd -6 -stdin
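Putting the two steps together, here’s a minimal sketch that hashes the password and writes the single-line userconf file. The boot-partition mount point is an assumption (a temp directory is used below for demonstration), so point BOOT at wherever the SD card’s boot partition mounts on your machine:

```shell
# BOOT is an assumption -- set it to the SD card's boot partition
# (a temp directory is used here purely for demonstration).
BOOT="${BOOT:-$(mktemp -d)}"
# Hash the password with SHA-512 crypt; replace 'raspberry' with your real password.
HASH="$(echo 'raspberry' | openssl passwd -6 -stdin)"
# userconf is a single line of {name}:{encrypted-password}.
echo "node:${HASH}" > "${BOOT}/userconf"
cat "${BOOT}/userconf"
```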

Save the file and eject the SD card. Insert the SD card into the Raspberry Pi and power it on. Make sure it’s connected to your router or network switch on your private network.

First Boot and Initial Configuration

You’ll need to get the IP for your Raspberry Pi; you can find it by checking your router. I use OpenWrt, and from my DHCP settings I assign each Pi a static IP that’s easy for me to remember.

DHCP Settings

SSH into your first node; this will be your master node, which runs the control plane of your cluster. Once we’ve connected to the Pi, we can start setting it up!

Add your user to the sudo group with the following command.

sudo usermod -aG sudo node

Now let’s update raspi-config to automatically log in with the node user.

sudo raspi-config
Raspberry Pi Config Menu

Navigate to “System Options” → “Boot / Auto Login” and choose “Console Autologin”.

Docker & Kubernetes Initial Set Up

By default the cgroup memory option is disabled; we’ll need to enable it for Docker to be able to limit memory usage. Open /boot/cmdline.txt and append cgroup_enable=memory cgroup_memory=1 to the end of the existing line.
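Note that cmdline.txt must stay a single line, so the flags are appended to the existing line rather than added on a new one. A hedged sketch using sed, demonstrated on a scratch copy (on the Pi, run the same sed against /boot/cmdline.txt with sudo):

```shell
# Demonstration on a scratch copy; on the Pi the target is /boot/cmdline.txt (with sudo).
CMDLINE=/tmp/cmdline.txt
echo 'console=serial0,115200 console=tty1 root=PARTUUID=00000000-02 rootfstype=ext4 fsck.repair=yes rootwait' > "$CMDLINE"
# Append the cgroup flags to the end of the one and only boot line.
sed -i '$ s/$/ cgroup_enable=memory cgroup_memory=1/' "$CMDLINE"
cat "$CMDLINE"
```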

Now let’s update our apt repositories to include the Kubernetes repository.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt upgrade -y

Docker install:

curl -sSL https://get.docker.com | sh
sudo usermod -aG docker node

As of Kubernetes 1.20, dockershim is deprecated (it was removed entirely in 1.24). There is an open-source CRI implementation from Mirantis, called cri-dockerd, that we can use instead. To install cri-dockerd and set up its service, run the following commands:

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4.arm64.tgz
tar -xvzf cri-dockerd-0.3.4.arm64.tgz
sudo mv cri-dockerd /usr/bin/cri-dockerd
sudo chmod +x /usr/bin/cri-dockerd
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
sudo mv cri-docker.service /etc/systemd/system/
sudo mv cri-docker.socket /etc/systemd/system/
sudo systemctl enable cri-docker.service
sudo systemctl enable cri-docker.socket
sudo systemctl start cri-docker.service
sudo systemctl start cri-docker.socket

It’s recommended to disable swap on our nodes; by default, the kubelet won’t run with swap enabled.

sudo dphys-swapfile swapoff && sudo dphys-swapfile uninstall && sudo systemctl disable dphys-swapfile
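You can confirm swap is off with free (part of the standard procps tools); after the commands above and a reboot, the Swap row should read zeros:

```shell
# Print the header and the Swap row; on a node with swap disabled, total/used/free are all 0B.
free -h | awk 'NR==1 || /^Swap/'
```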

Finally, let’s install Kubernetes!

sudo apt install -y kubelet=1.26.6-00 kubeadm=1.26.6-00 kubectl=1.26.6-00
sudo apt-mark hold kubelet kubeadm kubectl

For this guide, I’ve tested everything on 1.26.6; versions before 1.24 won’t work correctly with these steps. We mark the packages as held to prevent them from being upgraded unintentionally.

Alternatively, k3s from Rancher Labs would be a good lightweight option. Its advantages include a small binary size and very low resource requirements, and it’s optimized for ARM. I haven’t tested it for this guide, but I imagine the setup would be similar from this point on.

Time to initialize our cluster. To do this, we’ll create a file called kubeadm-config.yaml with our InitConfiguration and ClusterConfiguration settings.

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    token: {token}
    usages:
      - signing
      - authentication
kind: InitConfiguration
localAPIEndpoint:
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  name: node-0
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "" # --pod-network-cidr
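The {token} placeholder must be a valid bootstrap token of the form [a-z0-9]{6}.[a-z0-9]{16}. kubeadm can create one for you with kubeadm token generate; as a sketch, you can also roll one from /dev/urandom:

```shell
# Generate a kubeadm-style bootstrap token: 6 characters, a dot, then 16 characters, all [a-z0-9].
TOKEN="$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6).$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)"
echo "$TOKEN"
```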

This file holds our master node’s configuration; note that criSocket points at cri-dockerd, and podSubnet sets the Pod network CIDR we’ll need later.

To initialize our control plane on this node, run the following.

sudo kubeadm init --config kubeadm-config.yaml

This will output the command for joining new nodes to the cluster, as well as instructions for setting up your kube-config.

Set up your kube-config following the instructions from the output, then copy both the kube-config and the join command to your workstation; we’ll need them later!

Cluster Networking

Now we need to set up networking in our cluster. For Pods to be able to communicate with each other across our nodes, a network plugin (also referred to as a CNI or Container Network Interface) is needed.

The network plugin provides networking capabilities to the Pods, such as IP address assignment, DNS resolution and network isolation.

We’ll use Flannel to do this.

Flannel runs a small, single binary agent called flanneld on each host, and is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms including VXLAN and various cloud integrations.

Run the following from the master node.

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
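One thing to watch: the Flannel manifest defaults to the 10.244.0.0/16 Pod network, so the podSubnet you set in kubeadm-config.yaml has to match it (or you must edit the manifest). For reference, the relevant fragment of the kube-flannel ConfigMap looks like this:

```yaml
# Fragment of the kube-flannel-cfg ConfigMap in kube-flannel.yml
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```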

That’s it! Our master node is complete, and we can begin adding new nodes to the cluster. Remember the join command output earlier? We’ll need that now.

Adding New Nodes To The Cluster

Adding a new node to the cluster is fairly simple. If you’re adding a lot of nodes, you’ll probably want to multiplex your session commands using a tool like tmux.

Complete the “First Boot and Initial Configuration” steps, then work through “Docker & Kubernetes Initial Set Up”, stopping after the step where you install the Kubernetes components. At that point, run the kubeadm join command from before, making sure to include the cri-socket and node-name options.

sudo kubeadm join --token {token} --discovery-token-ca-cert-hash {hash} --cri-socket unix:///var/run/cri-dockerd.sock --node-name {name}
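If you’ve lost the original output, the master can regenerate a complete join command with kubeadm token create --print-join-command. The {hash} value itself is just a SHA-256 over the cluster CA’s public key; the standard openssl recipe is sketched below against a throwaway self-signed certificate (on the master, point it at /etc/kubernetes/pki/ca.crt instead):

```shell
# Create a throwaway certificate purely for demonstration;
# on the master the input would be /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=kubernetes" -days 1 2>/dev/null
# Hash the DER-encoded public key -- the same value kubeadm prints as sha256:<hex>.
HASH="sha256:$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')"
echo "$HASH"
```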

Now monitor your cluster from your master node and ensure all nodes join the cluster.

watch kubectl get nodes

NAME     STATUS   ROLES           AGE   VERSION
node-0   Ready    control-plane   20h   v1.26.6
node-1   Ready    <none>          19h   v1.26.6
node-2   Ready    <none>          19h   v1.26.6

Your cluster is now ready for use! You’ll probably want to access it from your workstation rather than through SSH, though. From your computer, you can now set up the kube-config from before.

The default kube-config grants admin privileges and shouldn’t be shared with other people.

First, export the config location in your shell profile.

export KUBECONFIG=~/.kube/config

Set the context:

kubectl config use-context kubernetes-admin@kubernetes

You should be able to now access your cluster remotely.

> kubectl cluster-info
Kubernetes control plane is running at
CoreDNS is running at

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Cluster Tools


Now to upgrade our cluster from vanilla to awesome. Let’s set up some commonly used tools that’ll let us easily deploy new applications and monitor our cluster. For this, I’ll go over installing ArgoCD, Prometheus and Grafana! Three open-source projects that’ll take our cluster to the next level.

Before continuing, I recommend creating a remote git repository for tracking all of our configuration changes for these tools. This is particularly useful with ArgoCD, as we’ll add each tool and any additional applications through it for deployment.

ArgoCD


For each tool, we’ll use Helm as our resource templater. Install the latest version (or at least Helm v3), and let’s add the ArgoCD repository.

helm repo add argo https://argoproj.github.io/argo-helm

Create a values file.

serviceType: NodePort
httpNodePort: 30080
httpsNodePort: 30443

This file can be used to override any of the settings from the chart. In this case, I’m changing the Service to run as a NodePort instead of a ClusterIP. This will expose the specified ports from the cluster so that we can access it from our private network without using a reverse proxy. Note that NodePorts must fall within the cluster’s service node port range, 30000–32767 by default.

Install the service.

helm install argocd -n argocd --create-namespace -f values.yaml argo/argo-cd

You’ll then want to grab the default password for the admin user.

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

You should be able to access ArgoCD from any of your nodes by going to NodeIP:httpsNodePort in your browser. Because I use OpenWrt, I’m able to set a hostname entry for my cluster and can access the login page at https://cluster.home:30443.

DNS Entry

Log in to ArgoCD; we’ll come back to it shortly.

ArgoCD Login

Prometheus


We’ll use Prometheus as our time series metrics server for gathering information about our cluster.

Before we can install it, we should set up a persistent volume for Prometheus to store its data. As this is just a home cluster, I opted to use a spare USB drive, but you can attach and use whatever you want.

Here are the steps I used to set up the volume on my master node. Create the path for our volume and a backup of our fstab, since we’ll be making changes that could break the boot volume if we make a mistake.

sudo mkdir /mnt/usb
sudo cp /etc/fstab /etc/fstab.bak

Attach the device, then append the following line to /etc/fstab.

/dev/sda1 /mnt/usb vfat defaults,uid=youruid,gid=yourgid,dmask=002,fmask=113 0 0

Now mount the device with our user and group settings for our node user.

sudo mount -o uid=youruid,gid=yourgid,dmask=002,fmask=113 /dev/sdX1 /mnt/usb
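The youruid/yourgid values in the fstab entry and mount options are the numeric IDs of your login user, and id(1) prints them. This is shown for the current user; on the Pi you’d query the node user (id -u node, id -g node):

```shell
# Look up the numeric uid/gid to plug into the fstab entry and the mount options.
uid="$(id -u)"
gid="$(id -g)"
echo "uid=${uid} gid=${gid}"
```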

We’ll now want to create a Kubernetes resource file with our PersistentVolume and PersistentVolumeClaim.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-usb-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: {size of device}Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/usb"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-usb-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {size of device}Gi

If you’re using a git repo, place these files in a new Helm chart under the templates directory. Follow the next steps to proceed.

Let’s set up our chart with Prometheus.

helm create prometheus

Add the Prometheus subchart as a dependency in the Chart.yaml.

dependencies:
  - name: prometheus
    version: 22.7.0
    repository: https://prometheus-community.github.io/helm-charts

We can now set up configuration to use the new PV and PVC, fix some permissions and make sure we only deploy the server to our master node.

prometheus:
  alertmanager:
    enabled: false
  prometheus-pushgateway:
    enabled: false
  configmapReload:
    prometheus:
      enabled: false
  server:
    nodeSelector:
      kubernetes.io/hostname: {master node}
    securityContext:
      runAsUser: {userid}
      runAsNonRoot: true
      runAsGroup: {groupid}
      fsGroup: {fsid}
    persistentVolume:
      enabled: true
      existingClaim: "prometheus-usb-pvc"
      volumeName: "prometheus-usb-pv"

This also disables some extra services: the alertmanager, pushgateway and configmap-reload. These can be enabled later if needed; Alertmanager in particular is useful for getting notifications when things are behaving abnormally.

Back in ArgoCD, let’s create a “New App”. Name it Prometheus, add your git repo as the source, and select the chart’s path. You’ll want to do the same later for Grafana, so keep the two in separate paths.

ArgoCD New App

Select the values file to apply the custom settings we’ve created, then create the App. If you specified manual syncing, you’ll need to sync it yourself; this is nice for when you do upgrades and want to release deliberately, but automatic syncing is useful for CD and probably the best option for home projects.

Prometheus Deployment

Grafana


Similarly to Prometheus, we should start by creating a new Helm chart in our git repo.

helm create grafana

Then add the helm repo.

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

Add the Grafana subchart as a dependency in the Chart.yaml.

dependencies:
  - name: grafana
    version: 6.57.4
    repository: https://grafana.github.io/helm-charts

Add a values.yaml file.

grafana:
  service:
    enabled: true
    type: NodePort
    nodePort: 30180

Then, the same as before, add Grafana through ArgoCD. Sync it, and you should now have both running.


Before you can use Grafana you’ll need to get the admin password.

kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Login with the admin user and password from the output.


Now to add the Prometheus data source. From inside the cluster, the Prometheus service can be reached at http://prometheus-server.monitoring.svc.cluster.local, where monitoring is the namespace you deployed it in. Go to “Administration” → “Data sources” → “Add new data source”, add the URL, and use “Save & Test” to verify.

Add Prometheus Data Source

If we want a simple dashboard to view the state of our cluster, we can use one of the dashboards provided by Grafana Labs. It should give us a simple view of the resources being used in our cluster.

Final Thoughts



One of the advantages of doing this vs. running a standalone server is that you can now easily add old computers to your cluster and not worry about what runs where.

Raspberry Pis are a good low-cost option. For this cluster, I spent about $10 to $20 per year on energy costs.

Writing custom apps for your cluster is as easy as creating a Dockerfile and a simple Helm chart, then adding it to ArgoCD! 💪

That’s all, enjoy your Raspberry Pi cluster! Thanks for reading!
