Kubernetes is a powerful container orchestration platform, but a full-sized distribution can be heavyweight for local work. This article will show you how to use K3s to run a Kubernetes cluster on your development machine.


K3s is a lightweight Kubernetes distribution ideal for development use. It’s now part of the Cloud Native Computing Foundation (CNCF) but was originally developed by Rancher.

K3s ships as a single binary with a file size under 50 MB. Despite its diminutive size, K3s includes everything you need to run a production-ready Kubernetes cluster. The project focuses on resource-constrained hardware where reliability and ease of maintenance are key concerns. While K3s is now commonly found at the edge on IoT devices, these qualities also make it a good contender for local use by developers.

Getting Started With K3s

Running the K3s binary will start a Kubernetes cluster on the host machine. The main K3s process starts and manages all the Kubernetes components, including the control plane’s API server, a Kubelet worker instance, and the containerd container runtime.

In practice you’ll usually want K3s to start automatically as a service. It’s recommended you use the official installation script to quickly get K3s running on your system. This will download the binary, move it into your path, and register a systemd or OpenRC service as appropriate for your system. K3s will be configured to automatically restart after its process crashes or your host reboots.
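
The script is published at https://get.k3s.io; piping it straight into your shell is the documented quick-start route (review the script first if you prefer):

curl -sfL https://get.k3s.io | sh -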

Confirm the installation succeeded by checking the status of the k3s service:
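
On a systemd-based system, that means querying systemctl (OpenRC users should check the service with their init system’s own tooling):

sudo systemctl status k3s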

You’re ready to start using your cluster if active (running) is displayed in green.

Interacting With Your Cluster

K3s bundles Kubectl if you install it using the provided script. It’s nested under the k3s command:
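
Any ordinary Kubectl command can be issued through this wrapper. For example, to list every pod in the cluster:

k3s kubectl get pods --all-namespaces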

You might receive an error that looks like this:
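
The exact wording varies between releases, but it generally complains that the generated kubeconfig file can’t be read:

WARN[0000] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions
error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied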

You can fix this by adjusting the file permissions of the referenced path:
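
Making the file world-readable is the simplest fix on a single-user development machine, though you should weigh the security implications on a shared host:

sudo chmod 644 /etc/rancher/k3s/k3s.yaml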

Now you should be able to run Kubectl commands without using sudo.

You can keep using a standalone Kubectl installation if you don’t want to rely on K3s’ integrated version. Use the KUBECONFIG environment variable or –kubeconfig flag to reference your K3s configuration file when running the bare kubectl command:
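
Either form works; these examples assume the default kubeconfig path K3s writes during installation:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get pods

kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get pods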

An Example Workload

You can test your cluster by adding a simple deployment:
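
A minimal test is an NGINX deployment exposed as a service; the nginx name used here is arbitrary:

k3s kubectl create deployment nginx --image nginx:latest
k3s kubectl expose deployment nginx --port 80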

Use Kubectl to discover the IP address of the service that’s been created:
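
Read the CLUSTER-IP column in the output, which will look something like this (your addresses will differ):

k3s kubectl get services

NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1     <none>        443/TCP   10m
nginx        ClusterIP   10.43.49.20   <none>        80/TCP    1m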

In this example, the NGINX service is accessible at 10.43.49.20. Visit this IP address in your web browser to see the default NGINX landing page.

Setting Kubernetes Options

You can set custom arguments for individual Kubernetes components when you run K3s. Values should be supplied as command-line flags to the K3s binary. Environment variables are also supported but the conversion from flag to variable name is not always consistent.

Here are some commonly used flags for configuring your installation:

--etcd-arg – Pass an argument through to Etcd.
--kube-apiserver-arg – Pass an argument through to the Kubernetes API server.
--kube-controller-manager-arg – Pass an argument through to the Kubernetes Controller Manager component.
--kubelet-arg – Pass an argument through to the Kubelet worker process.
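
As a sketch, the following starts a K3s server with a more verbose Kubelet log level and a longer API server request timeout; the specific values are illustrative:

k3s server --kubelet-arg v=5 --kube-apiserver-arg request-timeout=2m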

Many other options are available to customize the operation of K3s and your Kubernetes cluster. These include facilities for disabling bundled components such as the Traefik Ingress controller (--disable traefik) so you can replace them with alternative implementations.

Besides flags and variables, K3s also supports a YAML config file that’s much more maintainable. Deposit this at /etc/rancher/k3s/config.yaml to have K3s automatically use it each time it starts. The field names should be CLI arguments stripped of their -- prefix.
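
As an illustration, a config file equivalent to passing --write-kubeconfig-mode 644 and --disable traefik on the command line could look like this:

write-kubeconfig-mode: "644"
disable:
  - traefik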

Multi-Node Clusters

K3s has full support for multi-node clusters. You can add nodes to your cluster by setting the K3S_URL and K3S_TOKEN environment variables before you run the installation script.
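
For example, to join a worker to a server at 192.168.0.1 (replace the token placeholder with your cluster’s real value):

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.1:6443 K3S_TOKEN=<token> sh -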

This command will install K3s and configure it as a worker node that connects to the IP address 192.168.0.1. To find your token, copy the value of the /var/lib/rancher/k3s/server/node-token file from the machine that’s running your K3s server.

Using Images In Private Registries

K3s has good integrated support for images in private registries. You can provide a special config file to inject registry credentials into your cluster. These credentials will be read when the K3s server starts. It’ll automatically share them with your worker nodes.

Create an /etc/rancher/k3s/registries.yaml file with the following content:
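
A minimal file maps a registry name to one or more endpoints; example-registry.com here is a placeholder for your own registry:

mirrors:
  example-registry.com:
    endpoint:
      - "https://example-registry.com:5000"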

This will let your cluster pull images such as example-registry.com/example-image:latest from the server at example-registry.com:5000. You can specify multiple URLs under the endpoint field; they’ll be tried in the order written until a pull succeeds.

Supply user credentials for your registries using the following syntax:
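
Credentials live in a configs block keyed by the endpoint; the username and password values here are placeholders:

configs:
  "example-registry.com:5000":
    auth:
      username: example-user
      password: example-password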

Credentials are defined on a per-endpoint basis. Registries defined with multiple endpoints need individual entries in the config field for each one.

Endpoints that use SSL need to be assigned a TLS configuration too:
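
The TLS settings follow the same pattern, nested under the endpoint’s entry in configs; the file paths shown are placeholders:

configs:
  "example-registry.com:5000":
    tls:
      cert_file: /path/to/client.crt
      key_file: /path/to/client.key
      ca_file: /path/to/ca.crt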

Set the cert_file, key_file, and ca_file fields to reference the correct certificate files for your registry.

Upgrading Your Cluster

You can upgrade to new K3s releases by running the latest version of the installation script. This will automatically detect your existing cluster and migrate it to the new version.

If you customized your cluster by setting installer environment variables, repeat them when you run the upgrade command:
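
For instance, if you originally installed with extra server flags via INSTALL_K3S_EXEC, repeat that variable; the flag shown here is just an example:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -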

Multi-node clusters are upgraded using the same procedure. Upgrade each worker node individually once the server’s running the new release.

You can install a specific Kubernetes version by setting the INSTALL_K3S_VERSION variable before you run the script:
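
The value should match a published K3s release tag; the one shown here is illustrative:

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.24.4+k3s1" sh -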

The INSTALL_K3S_CHANNEL variable can select unstable versions and pre-release builds:
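
For example, to track the latest channel rather than stable:

curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=latest sh -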

K3s will default to running the newest stable Kubernetes release when these variables aren’t set.

Uninstalling K3s

As K3s is packaged as a self-contained binary, it’s easy to clean up if you want to stop using it. The install process provides an uninstall script that will remove system services, delete the binary, and clear all the data created by your cluster.
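
The server uninstall script is dropped onto your path during installation:

/usr/local/bin/k3s-uninstall.sh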

You should use the script at /usr/local/bin/k3s-agent-uninstall.sh instead when you’re decommissioning a K3s worker node.

Conclusion

K3s is a single-binary Kubernetes distribution which is light on system resources and easy to maintain. This doesn’t come at the expense of capabilities: K3s is billed as production-ready and has full support for Kubernetes API objects, persistent storage, and load-balanced networking.

K3s is a good alternative to other developer-oriented Kubernetes flavors such as Minikube and MicroK8s. You don’t need to run virtual machines, install other software, or perform any advanced configuration to set up your cluster. It’s particularly well-suited when you’re already running K3s in production, letting you iron out disparities between your environments.