While my NUC homelab handled what I considered “production” workloads using Docker and Compose, I wanted to play around with and learn Kubernetes. I understood most of the bird’s-eye-view concepts like pods, deployments, and ingress, but without much hands-on experience, most of it was fleeting. So I decided to put the spare RAM and CPU on my desktop to good use by installing a complete K3s cluster. A good part of my research was covered concisely in an excellent guide by Tom.
Installing a cluster needs a platform, either bare metal or virtualized. I use Windows Subsystem for Linux (WSL), so I already had Hyper-V enabled. If you do not, follow the official instructions to enable it. The next task was to install multiple VMs with some OS to serve as a platform for the cluster. Ubuntu Server provides a clean, minimal base, so I could do it the traditional way: download the ISO, install n VMs, and be done. But that gets tedious when I have to start clean after messing things up while learning. This is where Multipass by Canonical comes in.
Multipass provides a command-line interface to launch, manage, and generally fiddle about with instances of Linux. It can spin up cloud instances of Ubuntu taking full advantage of cloud-init, not unlike how you spin up containers. The result: no more manually loading an ISO into the hypervisor, selecting parameters, sitting through the install process, and answering questions about what your name is and where you are. Multipass supports Hyper-V and VirtualBox on Windows; I will be using Hyper-V because of WSL.
A lot of things need to work well together for Multipass to function correctly, and Hyper-V is a bit of a hassle in this regard. It took me multiple installation attempts, along with information from GitHub issues like this one, to fix some of the Windows issues preventing the Multipass daemon from starting up.
Cloud-init is the industry-standard, multi-distribution method for cross-platform cloud instance initialization. Cloud instances initialize from a disk image and instance data containing cloud metadata, optional user data, and vendor data. Multipass includes enough data for the `multipass` CLI to exec into the instance from the command prompt, but if I want to SSH in directly, I need to add my key as a cloud-init configuration.
I created a file called `multipass.yml` with the below contents to add the SSH key. On Windows, the key would be in `C:\Users\<user>\.ssh\id_[rsa/ed25519].pub`. You can find instructions online for enabling the SSH client on Windows if yours does not have it already. Feel free to replace my username and key, and add multiple keys if you have them, but note that the VM IPs are only accessible from your machine and not from the network.
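As a sketch, a minimal `multipass.yml` could look like the following; the `youruser` name and the key are placeholders for your own values, not what I actually used:

```shell
# Write a minimal cloud-init file that creates a user with passwordless sudo
# and an authorized SSH key. "youruser" and the key below are placeholders.
cat > multipass.yml <<'EOF'
users:
  - default
  - name: youruser
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5... youruser@desktop
EOF
```

Paste the contents of your `.pub` file as the `ssh_authorized_keys` entry; multiple keys are just additional list items.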
Launch the VMs
With the cloud-init file handy, I proceeded to spin up the VMs using the below commands. You can mess around with the CPU, memory, and disk allocated, and choose to deploy more VMs or multiple nodes for the control plane as needed.
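A sketch of the launch commands, assuming one control-plane node and two workers; the names and sizes are my choices, and recent Multipass releases use `--memory` where older ones used `--mem`:

```shell
# One control-plane node and two workers, each initialized with multipass.yml.
multipass launch --name k3s-master  --cpus 2 --memory 2G --disk 10G --cloud-init multipass.yml
multipass launch --name k3s-worker1 --cpus 1 --memory 1G --disk 10G --cloud-init multipass.yml
multipass launch --name k3s-worker2 --cpus 1 --memory 1G --disk 10G --cloud-init multipass.yml
```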
`multipass list` then gives the status as below.
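The output looks roughly like this (node names and IPs are illustrative; yours will differ):

```shell
multipass list
# Name           State    IPv4            Image
# k3s-master     Running  172.17.231.10   Ubuntu 20.04 LTS
# k3s-worker1    Running  172.17.231.11   Ubuntu 20.04 LTS
# k3s-worker2    Running  172.17.231.12   Ubuntu 20.04 LTS
```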
At this point, I can SSH into the VMs using `ssh <user>@<vm-ip>`, but I do not need to do that.
k3sup is a lightweight utility to get from zero to `KUBECONFIG` with k3s on any local or remote VM. All you need is SSH access and the `k3sup` binary to get `kubectl` access immediately. This SSH requirement is why we added the cloud-init configuration while deploying our VMs.
As for installation, `k3sup` is distributed as a single binary, so I promptly dropped it into my scripts folder, which is on my `PATH`.
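For reference, one way to fetch it; the install script below is for Linux/macOS/WSL, while on Windows you would grab `k3sup.exe` from the GitHub releases page and drop it into a folder on `PATH`:

```shell
# The official install script downloads the k3sup binary to the current directory.
curl -sLS https://get.k3sup.dev | sh
# Move it somewhere on PATH.
sudo install k3sup /usr/local/bin/
```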
I loved the simplicity and speed with which I was able to get the cluster up and running, without the need for Ansible and hosts files and such. I ran the below commands, which took five minutes at most. The first command installs and initiates the k3s control plane on the master, whereas the second and third join the workers to the cluster.
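A sketch of those three commands, assuming placeholder IPs and the `youruser` account from the cloud-init file; substitute the IPs from `multipass list` and your own username:

```shell
MASTER_IP=172.17.231.10   # placeholders; use your VMs' actual IPs
WORKER1_IP=172.17.231.11
WORKER2_IP=172.17.231.12

# Install the k3s control plane on the master and fetch the kubeconfig locally.
k3sup install --ip "$MASTER_IP" --user youruser
# Join both workers to the cluster.
k3sup join --ip "$WORKER1_IP" --server-ip "$MASTER_IP" --user youruser
k3sup join --ip "$WORKER2_IP" --server-ip "$MASTER_IP" --user youruser
```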
If you want a multi-master cluster with embedded etcd, it is as simple as running the below commands for the cluster masters. Note the extra `--cluster` parameter provided for the first master node and the extra `--server` for joining additional master nodes. The GitHub README has detailed command-line options for `k3sup`.
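Sketched with placeholder IPs and username, the multi-master variant looks like this:

```shell
# First master: --cluster initializes k3s with embedded etcd.
k3sup install --ip "$MASTER1_IP" --user youruser --cluster
# Additional masters: --server joins them as control-plane nodes, not agents.
k3sup join --ip "$MASTER2_IP" --server-ip "$MASTER1_IP" --user youruser --server
k3sup join --ip "$MASTER3_IP" --server-ip "$MASTER1_IP" --user youruser --server
```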
Once the commands finish, k3sup generates a `kubeconfig` file in the current directory. I copied that file over to `C:\Users\<user>\.kube\config` and proceeded to use the usual `kubectl` commands to talk to the cluster.
At this point, I downloaded `kubectl` from here and proceeded to run commands against my brand-new Kubernetes cluster! 🎉
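A quick sanity check, for example:

```shell
# All three nodes should show up as Ready.
kubectl get nodes -o wide
# And the k3s system pods should be Running.
kubectl get pods -A
```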
Rancher is a lot of things: they have their own Kubernetes engine called RKE, and the lightweight k3s distribution we used for the cluster above is also from them. They also have a cluster management platform which we can deploy on our existing Kubernetes cluster, as shown in their documentation. For the sake of completeness, I will summarize below what I ran.
Helm charts help us define, install, and upgrade even the most complex Kubernetes applications with ease, and many Kubernetes applications are packaged as Helm charts. Rancher provides a Helm chart for its installation as well, so I proceeded to install Helm to make use of it.
Download the latest release from GitHub, extract it, and drop it into a directory that is on your `PATH`, and we are off to the races.
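For example, on Linux/WSL it might look like the following; pick the latest version (and the windows-amd64 zip on Windows) from the Helm releases page, as `v3.14.0` here is only an example:

```shell
# Download a Helm release tarball from the official distribution host,
# extract it, and move the binary onto PATH.
curl -fsSL -o helm.tar.gz https://get.helm.sh/helm-v3.14.0-linux-amd64.tar.gz
tar -xzf helm.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version
```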
Let us proceed to install Rancher using the official Helm charts. Rancher also needs `cert-manager`, a Kubernetes add-on that automates the management and issuance of TLS certificates from various issuing sources. I installed both of them as shown below.
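A sketch of the commands, adapted from the Rancher and cert-manager docs; chart versions move quickly (newer cert-manager charts use `crds.enabled` in place of `installCRDs`), so treat this as an outline rather than the exact commands I ran:

```shell
# Add the Rancher and Jetstack (cert-manager) chart repositories.
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install cert-manager along with its CRDs.
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# Install Rancher; set the hostname you want to use for the UI.
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.adyanth.lan

# Watch the rollout status of both.
kubectl -n cert-manager rollout status deploy/cert-manager
kubectl -n cattle-system rollout status deploy/rancher
```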
Substitute the hostname you need in the Rancher install command.
Once both of them show the rollout status as complete, Rancher is installed in our cluster. To access it, we need to use the hostname specified. Since this is a learning environment, I added the below entries to my hosts file at `C:\Windows\System32\drivers\etc\hosts`, where the IP is the IP address of the master VM, as shown in the `multipass list` output.
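The entry is a single line mapping the master VM's IPv4 address to the chosen hostname, something like (the IP is a placeholder):

```
172.17.231.10  rancher.adyanth.lan
```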
I opened a browser, pointed it to `rancher.adyanth.lan`, and voilà: I was asked to create an admin account and dropped into the Rancher UI showing all three nodes!
In my next post, I discuss how I moved a Docker application to run on a multi-node Kubernetes cluster. 🚀