How to set up a local Kubernetes cluster and deploy a self-made microservice in less than 10 minutes

Nowadays, many server applications are no longer installed and run directly on physical hosts or virtual machines. Instead, application code is often built into container images and run in so-called pods in a Kubernetes cluster. Kubernetes provides a standardized way to orchestrate applications and works the same way no matter where the cluster is running.

Kubernetes clusters are often hosted and managed by cloud providers. They can also be deployed on-premises though, either managed by a service provider or set up with tools like Kubespray. Most clusters are long-lived and consist of multiple nodes for redundancy.

However, sometimes a cluster that can be created and discarded quickly and easily on a single computer can be very useful:

  • A developer might want to create a cluster on their development machine for playing around with Kubernetes, exploring the newest tools, or testing their newly developed code and Kubernetes resources. If they used a cluster that is shared with others or even with production workloads instead, they might get in the way of others or, worse, break things that must not break.
  • Another use case is automated testing of application deployments in Kubernetes, or testing of applications that interact with Kubernetes resources themselves. This works best in a dedicated cluster that is set up in a clean state just for this purpose, and thrown away after the tests.

There are a few solutions for setting up a local Kubernetes cluster. I chose k3d here because it is pretty easy to use - it is a single binary that requires only Docker and runs the cluster nodes as containers. Unlike kind, which I also like a lot and which follows the same approach, k3d has an ingress controller built in. This makes it easier to access the applications that run in the cluster and allows testing ingress resources. The same is possible with kind, but only after an ingress controller has been installed.

This post shows how to set up a cluster with k3d, and deploy a self-made application. It does not assume any prior Kubernetes knowledge. If you have some Kubernetes experience, and you are interested in playing around with k3d, feel free to skip most of the text and just look at the commands 🙂 You can also download the Jupyter notebook that this post is based on, and play around with it.

I use Linux to work with k3d and Kubernetes in general, but most of what I do should work pretty much the same way on a Mac. Trying it on Windows might require more modifications though.

Installing k3d and kubectl¶

We will need not only k3d, but also kubectl, which is the command-line interface for interacting with Kubernetes clusters.

There is a range of installation options. I prefer using arkade, which makes it easy to install many Kubernetes-related tools.

We will use the arkade installation script as recommended in its README, but run it without root permissions:

In [1]:
curl -sLS https://get.arkade.dev | sh
x86_64
Downloading package https://github.com/alexellis/arkade/releases/download/0.8.28/arkade as /home/frank/code/github/freininghaus/freininghaus.github.io/posts/2022-07-12-local-k8s-cluster-with-k3d/arkade
Download complete.

============================================================
  The script was run as a user who is unable to write
  to /usr/local/bin. To complete the installation the
  following commands may need to be run manually.
============================================================

  sudo cp arkade /usr/local/bin/arkade
  sudo ln -sf /usr/local/bin/arkade /usr/local/bin/ark

We could now move the arkade binary to a directory in our PATH, or re-run the download script with root permissions. But we can also run the binary from the current directory and use it to install the tools that we need:

In [2]:
./arkade get kubectl k3d jq > /dev/null
44.73 MiB / 44.73 MiB [------------------------------------------------] 100.00%
2022/07/12 13:56:22 Copying /tmp/kubectl to /home/frank/.arkade/bin/kubectl
2022/07/12 13:56:23 Looking up version for k3d
2022/07/12 13:56:23 Found: v5.4.4
16.77 MiB / 16.77 MiB [------------------------------------------------] 100.00%
2022/07/12 13:56:24 Looking up version for k3d
2022/07/12 13:56:24 Found: v5.4.4
2022/07/12 13:56:24 Copying /tmp/k3d-linux-amd64 to /home/frank/.arkade/bin/k3d
2022/07/12 13:56:24 Looking up version for jq
2022/07/12 13:56:24 Found: jq-1.6
3.77 MiB / 3.77 MiB [--------------------------------------------------] 100.00%
2022/07/12 13:56:24 Looking up version for jq
2022/07/12 13:56:24 Found: jq-1.6
2022/07/12 13:56:24 Copying /tmp/jq-linux64 to /home/frank/.arkade/bin/jq

We will add the directory where arkade puts the downloaded tools to the PATH for convenience:

In [3]:
export PATH=$PATH:$HOME/.arkade/bin/

Creating a local Kubernetes cluster with k3d¶

Now that we have downloaded k3d, we can use it to create our first local Kubernetes cluster. We will give it the name test-cluster and forward port 80 on the host to port 80 in the container which matches the node filter "loadbalancer". This will enable easy HTTP/HTTPS access to apps running in the cluster via ingress routes, as we will see in the next post.

In [4]:
k3d cluster create test-cluster -p "80:80@loadbalancer"
INFO[0000] portmapping '80:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-test-cluster'           
INFO[0000] Created image volume k3d-test-cluster-images 
INFO[0000] Starting new tools node...                   
INFO[0000] Starting Node 'k3d-test-cluster-tools'       
INFO[0001] Creating node 'k3d-test-cluster-server-0'    
INFO[0001] Creating LoadBalancer 'k3d-test-cluster-serverlb' 
INFO[0001] Using the k3d-tools node to gather environment information 
INFO[0002] HostIP: using network gateway 172.31.0.1 address 
INFO[0002] Starting cluster 'test-cluster'              
INFO[0002] Starting servers...                          
INFO[0002] Starting Node 'k3d-test-cluster-server-0'    
INFO[0009] All agents already running.                  
INFO[0009] Starting helpers...                          
INFO[0009] Starting Node 'k3d-test-cluster-serverlb'    
INFO[0017] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... 
INFO[0019] Cluster 'test-cluster' created successfully! 
INFO[0019] You can now use it like this:                
kubectl cluster-info
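
As an optional extra check, k3d itself can list the clusters it manages, together with their server and agent counts:

# Optional: show all clusters managed by k3d on this machine
k3d cluster list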

We can verify that our cluster is running and can be accessed via kubectl:

In [5]:
kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:39309
CoreDNS is running at https://0.0.0.0:39309/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:39309/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The cluster has a single node:1

In [6]:
kubectl get nodes
NAME                        STATUS     ROLES                  AGE   VERSION
k3d-test-cluster-server-0   NotReady   control-plane,master   6s    v1.23.8+k3s1

Note that the status of the node is NotReady. We can either wait for a few seconds, or use kubectl wait, which runs until the given condition is met:

In [7]:
kubectl wait --for=condition=ready node k3d-test-cluster-server-0
node/k3d-test-cluster-server-0 condition met
In [8]:
kubectl get nodes
NAME                        STATUS   ROLES                  AGE   VERSION
k3d-test-cluster-server-0   Ready    control-plane,master   9s    v1.23.8+k3s1

Create a simple web service¶

Let's build a simple application that we will deploy and test in our cluster. We will do it in Python with FastAPI (but we could just as well use any other programming language and framework, of course).

The app accepts GET requests at the endpoint /greet/{name}, where {name} is a placeholder for an arbitrary name. It responds with a small JSON object that contains a greeting and also the host name. The latter will become interesting later on, when multiple instances of the application are running.

In [9]:
# I like to use https://pygments.org/ for syntax highlighting
alias cat=pygmentize

cat hello-server/hello-app.py
import fastapi
import socket

app = fastapi.FastAPI()


@app.get("/greet/{name}")
def greet(name: str):
    return {
        "data": {
            "message": f"Hello {name}!"
        },
        "info": {
            "hostname": socket.gethostname()
        }
    }
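
If you would like to try the app outside of any container first, you can start it with uvicorn directly and send a request to it. This is an optional aside and assumes that fastapi and uvicorn are installed on your machine:

# Optional local smoke test (assumes fastapi and uvicorn are installed on the host)
cd hello-server
uvicorn hello-app:app --port 8000 &   # serve the app on the default port 8000
sleep 2                               # give the server a moment to start
curl -s localhost:8000/greet/Frank
kill $!                               # stop the background server again
cd ..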

Build Docker image¶

The Dockerfile is quite simple. We just have to copy the Python file into a Python base image and install the dependencies fastapi and uvicorn. The latter is the web server that we will use.

In [10]:
cat hello-server/Dockerfile
FROM python:3.10-slim

COPY hello-app.py ./

RUN pip install fastapi uvicorn[standard] >/dev/null 2>&1

CMD uvicorn hello-app:app --host 0.0.0.0
In [11]:
docker build hello-server/ -t my-hello-server
Sending build context to Docker daemon  6.656kB
Step 1/4 : FROM python:3.10-slim
 ---> 24aa51b1b3e9
Step 2/4 : COPY hello-app.py ./
 ---> 11c437b4a570
Step 3/4 : RUN pip install fastapi uvicorn[standard] >/dev/null 2>&1
 ---> Running in f79cefe640e2
Removing intermediate container f79cefe640e2
 ---> 5b57681458aa
Step 4/4 : CMD uvicorn hello-app:app --host 0.0.0.0
 ---> Running in 51940227c7ac
Removing intermediate container 51940227c7ac
 ---> c1b8d149b4a9
Successfully built c1b8d149b4a9
Successfully tagged my-hello-server:latest
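
Before importing the image into the cluster, we can optionally check that the container works on its own. uvicorn listens on its default port 8000, so we forward that port to the host and send a request (the container name hello-test is arbitrary):

# Optional check: run the freshly built image locally and send a request to it
docker run --rm -d -p 8000:8000 --name hello-test my-hello-server
sleep 2                               # give uvicorn a moment to start
curl -s localhost:8000/greet/Frank
docker stop hello-test                # --rm removes the stopped container again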

Import the Docker image into the Kubernetes cluster¶

To run our service in the Kubernetes cluster, the nodes in the cluster need access to the image. This can be achieved in several ways: we could either

  • push the image to a public registry,
  • push the image to a private registry, and configure the cluster such that it has access to this registry, or
  • load the image onto the nodes in the cluster directly.

Usually, the third option is the easiest when we run tests and experiments on the local development machine, so we will use it here.2

In [12]:
k3d image import my-hello-server --cluster test-cluster
INFO[0000] Importing image(s) into cluster 'test-cluster' 
INFO[0000] Starting new tools node...                   
INFO[0000] Starting Node 'k3d-test-cluster-tools'       
INFO[0001] Saving 1 image(s) from runtime...            
INFO[0005] Importing images into nodes...               
INFO[0005] Importing images from tarball '/k3d/images/k3d-test-cluster-images-20220712134140.tar' into node 'k3d-test-cluster-server-0'... 
INFO[0031] Removing the tarball(s) from image volume... 
INFO[0032] Removing k3d-tools node...                   
INFO[0032] Successfully imported image(s)               
INFO[0032] Successfully imported 1 image(s) into 1 cluster(s) 
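
If you want to double-check that the image has really reached the node, you can look inside the node container. This is a hedged aside: it assumes that the crictl tool, which ships with k3s, is available in the k3d node image.

# Optional check, assuming crictl is available in the node container (it ships with k3s)
docker exec k3d-test-cluster-server-0 crictl images | grep hello-server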

Create a namespace for the application¶

Before deploying our application, we will create a new namespace. It will contain all Kubernetes resources that belong to, or interact with, our application.

In [13]:
kubectl create namespace test
namespace/test created

While not strictly necessary, putting each independent application into its own namespace is a good practice. Advantages of this approach include:

  • If anything is seriously wrong with the Kubernetes resources of an application, its namespace can be deleted, and one can start from scratch without affecting other applications running in the Kubernetes cluster.
  • One can create a service account that has only the permissions to view, modify, or create resources in one namespace. Such an account can be used for automated interactions with the cluster, e.g., in continuous integration pipelines, without risking accidental effects on applications running in other namespaces (see the sketch after this list).
  • Resource quotas can be assigned to namespaces to limit the amount of resources that the applications in a namespace can use.
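
To make the last two points a bit more concrete, here is a minimal sketch using kubectl's imperative commands. The names ci-deployer, deployer, ci-deployer-binding, and test-quota, as well as the concrete limits, are made up for illustration:

# Sketch only, with hypothetical names and limits:
# a service account that may only manage deployments and pods in the "test" namespace
kubectl -n test create serviceaccount ci-deployer
kubectl -n test create role deployer --verb=get,list,watch,create,update,patch --resource=deployments,pods
kubectl -n test create rolebinding ci-deployer-binding --role=deployer --serviceaccount=test:ci-deployer

# a quota that caps what all pods in the namespace may use in total
kubectl -n test create quota test-quota --hard=cpu=2,memory=2Gi,pods=10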

Now we would like to deploy our application in the new namespace.

In Kubernetes, applications run in pods¶

In Kubernetes, the "smallest deployable units of computing that you can create and manage" are pods. A pod is essentially a set of one or more containers which can share some resources, like network and storage. In our case, a single container, which runs the image that we have just built and uploaded to the cluster, is sufficient.

Kubernetes resources are usually defined in YAML files, although it is possible to create some types of resources with kubectl create directly on the command line.3 A pod that runs our application, and that lives in the namespace test, could be defined like this:

In [14]:
cat k8s-resources/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: test
  name: hello
  labels:
    app: hello
spec:
  containers:
  - name: hello-server
    image: my-hello-server
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8000

Each Kubernetes resource definition has a number of top-level keys:

  • The apiVersion tells which version of the Kubernetes API the resource type belongs to.
  • The kind tells what kind of resource is defined.
  • The metadata includes the name of the resource, the namespace that it belongs to, and labels. Labels can be used for multiple purposes, some of which we will see later.
  • The spec defines the properties of the resource. In the case of a pod, this includes the containers that the pod consists of. Note that we have to set the imagePullPolicy to IfNotPresent here. Otherwise, Kubernetes would try to pull the image, which does not work unless it has been pushed to a registry that the cluster can access.
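
As an aside, if you are ever unsure what a particular field in a resource definition means, kubectl can print the built-in API documentation for it:

# Show the API documentation for a single field, e.g. the image pull policy of a container
kubectl explain pod.spec.containers.imagePullPolicy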

We could now create the pod in the cluster with kubectl apply -f k8s-resources/pod.yaml. However, the more common approach is to create Kubernetes resources which control the creation of pods, such as, e.g., deployments, jobs, or daemon sets.
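
If you want to see the plain pod in action anyway, you could create it, inspect it, and delete it again before moving on. This is an optional detour; the rest of the post only uses the deployment:

# Optional detour: create the standalone pod, look at it, and remove it again
kubectl apply -f k8s-resources/pod.yaml
kubectl -n test get pod hello
kubectl -n test delete pod hello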

Define a deployment that controls the pods running an application¶

Here we will use a deployment, which ensures that a certain number of pods of a given type are running in the cluster.4 Having more than one pod of a certain type, e.g., a web service that answers incoming requests, can have a number of advantages:

  • Distributing the incoming traffic over multiple pods can be beneficial because a single pod might not be able to handle a sufficient number of simultaneous requests.
  • It helps to improve the reliability of the system: if a single pod terminates for whatever reason, the other pods can take over request handling immediately.

If a pod terminates, e.g., because of a crash in the application or because the node on which it is running is shut down, a new pod is created automatically, possibly on another node. Moreover, deployments can do other useful things concerning the life cycle of pods. For example, updates of the image version or other parts of the pod spec can be done with zero downtime. The deployment ensures that pods are created and destroyed dynamically at a controlled rate.

Note that this may not work well if a pod needs a lot of internal state to do its work. Kubernetes works best with stateless applications.5
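
As a rough sketch, once the deployment that we define below exists, such a zero-downtime image update could be triggered and observed like this (the tag v2 is hypothetical - we only build latest in this post):

# Hypothetical rolling update: switch the hello-server container to a new image tag
kubectl -n test set image deployment/hello hello-server=my-hello-server:v2
kubectl -n test rollout status deployment/hello   # wait until the new pods are up
kubectl -n test rollout undo deployment/hello     # roll back if something went wrong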

Let's see what a deployment looks like:

In [15]:
cat k8s-resources/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: test
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello-server
        image: my-hello-server
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000

The spec of the deployment describes the pods that it should control:

  • replicas: 3 tells Kubernetes that three pods should be running at all times.
  • The template describes what each pod should look like. Note that this object looks a lot like the definition of the plain pod that we saw earlier.
  • The selector describes how the deployment finds the pods that it controls. The matchLabels are compared with the labels in the metadata of all pods for this purpose.

To create the deployment in the cluster, we use this command:6

In [16]:
kubectl apply -f k8s-resources/deployment.yaml
deployment.apps/hello created

If we look at the list of pods in our namespace now, we get this result:

In [17]:
kubectl -n test get pod -o wide
NAME                     READY   STATUS              RESTARTS   AGE   IP       NODE                        NOMINATED NODE   READINESS GATES
hello-6ddc49b4d4-vzzc8   0/1     ContainerCreating   0          0s    <none>   k3d-test-cluster-server-0   <none>           <none>
hello-6ddc49b4d4-5vdwc   0/1     ContainerCreating   0          0s    <none>   k3d-test-cluster-server-0   <none>           <none>
hello-6ddc49b4d4-dlm7w   0/1     ContainerCreating   0          0s    <none>   k3d-test-cluster-server-0   <none>           <none>

There are three pods because we set the number of replicas to three in the definition of the deployment. Moreover, all pods have the status ContainerCreating, so they are not active yet.

Now we can either wait for a few seconds, or use kubectl rollout status to wait until all pods are running:

In [18]:
kubectl -n test rollout status deployment/hello
Waiting for deployment "hello" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "hello" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "hello" rollout to finish: 2 of 3 updated replicas are available...
deployment "hello" successfully rolled out

Now all pods are running. We can also see that each pod got its own IP address:

In [19]:
kubectl -n test get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE                        NOMINATED NODE   READINESS GATES
hello-6ddc49b4d4-5vdwc   1/1     Running   0          3s    10.42.0.9    k3d-test-cluster-server-0   <none>           <none>
hello-6ddc49b4d4-vzzc8   1/1     Running   0          3s    10.42.0.11   k3d-test-cluster-server-0   <none>           <none>
hello-6ddc49b4d4-dlm7w   1/1     Running   0          3s    10.42.0.10   k3d-test-cluster-server-0   <none>           <none>
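
Incidentally, the label selector from the deployment spec can also be used on the command line to list exactly the pods that the deployment manages:

# List only the pods that carry the label app=hello
kubectl -n test get pods -l app=hello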

Accessing the pods via HTTP within the Kubernetes cluster¶

The IP addresses which Kubernetes assigns to the pods are not reachable from outside the cluster. We will consider how to use a Kubernetes service and an ingress to make our application accessible for the outside world in the next post.

For the time being, though, we can use those IP addresses to connect to the pods from other pods in the cluster, as we will show next.

To get the IP address of one of the pods, we could look at the output of kubectl -n test get pod -o wide above. We could also parse the output with shell commands to assign the IP to a variable. However, kubectl also offers other output formats which are easier to process:

  • -o json outputs a JSON object that contains all information about pods.
  • -o jsonpath=... allows us to specify a path to the information we need inside the JSON object, and prints just that.

By looking for IP addresses in the JSON output (which I will not show here because it is rather lengthy), we conclude that the IP address of the first pod in the list can be obtained like this:

In [20]:
first_pod_ip=$(kubectl -n test get pods -o jsonpath='{.items[0].status.podIPs[].ip}')
echo $first_pod_ip
10.42.0.9
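
Since we installed jq earlier, the same value could also be extracted by piping the full JSON output through jq; both approaches should print the same address:

# Equivalent query with jq instead of the jsonpath output format
kubectl -n test get pods -o json | jq -r '.items[0].status.podIP'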

To verify that the application can be reached at this address from within the cluster, we create a new pod that serves as a client accessing our application:

In [21]:
kubectl run --rm --attach -q --image=busybox --restart Never my-test-pod -- wget $first_pod_ip:8000/greet/Frank -q -O - | jq .
{
  "data": {
    "message": "Hello Frank!"
  },
  "info": {
    "hostname": "hello-6ddc49b4d4-5vdwc"
  }
}

We have successfully made an HTTP request to our application! Note that the hostname in the output is indeed the name of the first pod, whose IP address we have used here.

The options to kubectl run have the following meaning:

  • --rm ensures that the pod is deleted after it exits, such that it no longer occupies resources in the cluster.
  • --attach attaches to the process in the pod, such that we can see its output in the terminal.
  • -q suppresses output from kubectl - we are only interested in output from wget.
  • --image=busybox sets the image that the single container in the pod will use. All we need is a way to make HTTP requests from the pod, so we will use the busybox image, which contains a variant of wget.
  • --restart Never prevents Kubernetes from restarting the pod after it terminates.7
  • my-test-pod is the name of the pod. This can be any name that is not taken yet in the namespace. Note that we don't use a namespace argument here, so our pod will be created in the default namespace.

Using pod IPs to access our application has a number of downsides. However, this post is already a bit long, so we will discuss them, and the solutions that Kubernetes provides for this issue, namely services and ingresses, in a future post.

Deleting the cluster¶

For the time being, we are done with our experiments. We could stop the cluster now with k3d cluster stop and restart it later with k3d cluster start. Everything in the cluster is easy to restore though, so we will delete it:

In [22]:
k3d cluster delete test-cluster
INFO[0000] Deleting cluster 'test-cluster'              
INFO[0002] Deleting cluster network 'k3d-test-cluster'  
INFO[0003] Deleting 2 attached volumes...               
WARN[0003] Failed to delete volume 'k3d-test-cluster-images' of cluster 'test-cluster': failed to find volume 'k3d-test-cluster-images': Error: No such volume: k3d-test-cluster-images -> Try to delete it manually 
INFO[0003] Removing cluster details from default kubeconfig... 
INFO[0003] Removing standalone kubeconfig file (if there is one)... 
INFO[0003] Successfully deleted cluster test-cluster!   

Note that all resources are removed despite the warning about the failed deletion of the Docker volume.

Summary¶

In this post, we created a local single-node Kubernetes cluster with k3d. Moreover, we deployed a small self-made application in the cluster, and connected to this application via HTTP from within the cluster.

In the next post, we will investigate how to access the application from outside the cluster.


  1. If we wanted more server or worker nodes, we could specify the desired numbers with the --servers and --agents options to k3d cluster create, respectively. ↩
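
For example, a cluster with three servers and two agents (hypothetical node counts) and the same port mapping as above could be created like this:

# hypothetical example: 3 server nodes and 2 agent nodes
k3d cluster create test-cluster --servers 3 --agents 2 -p "80:80@loadbalancer"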

  2. There is one thing to keep in mind though: by default, Kubernetes will try to pull the image from a registry even if it is available on the node running the application, and therefore, the application will fail to run. We will see in a minute how this can be prevented by setting a suitable imagePullPolicy for the pods using the image. ↩

  3. We have already created a Kubernetes resource on the command line with kubectl create: the namespace for our application (kubectl create namespace test). We could also have achieved this using a YAML document with the content below: ↩

apiVersion: v1
kind: Namespace
metadata:
  name: test
  4. To be precise, the deployment does not control the number of pods directly. It achieves this with a replica set, a lower-level object whose sole purpose is to control the number of pods. The deployment adds other useful functionality, such as updating image versions. ↩

  5. It is possible to work with stateful applications in Kubernetes though. Stateful sets can help with that. ↩

  6. Note that kubectl apply is not only useful for creating deployments and other resources. The same command can be used to make changes to a resource. A common example would be to modify a deployment such that the image version is updated. ↩

  7. Restarting terminated pods is the default behavior because most applications running in Kubernetes clusters are services which should always be up. ↩
