
MicroK8s: self-host Kubernetes on the edge

Set up a MicroK8s node on a single VPS (updated)

Discover the cheapest and easiest way I've found to create personal Kubernetes clusters.

This blog post provides a step-by-step guide to setting up a private Kubernetes cluster on a Virtual Private Server (VPS), with the following features ready and configured:

  • VPS firewall set up to expose HTTP/HTTPS and the Kubeapi server
  • MetalLB configured for load balancing on the VPS public IP
  • Ingress and cert-manager set up for quick, automatic TLS management
  • Kubeapi server configured for restricted remote access

The prerequisites for this tutorial are a VPS with a public IP and a domain with configurable DNS.

Step 1: Create a base user

Connect to your VPS. In a terminal, create a user and add them to the sudoers.

First, change the root user password to something strong:

```
passwd
```

Then create the new user and add them to sudoers:

```
adduser <user-name>
usermod -aG sudo <user-name>
```

Step 2: Create SSH keys

On your local machine, create SSH keys to access the server. Set a secure password for the key.

```
ssh-keygen -b 2048 -t rsa
chmod 600 <key-file> <key-file>.pub
```

Copy the content of <key-file>.pub to your VPS at /home/<user-name>/.ssh/authorized_keys.
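If password authentication is still enabled at this point, ssh-copy-id can append the key for you instead of copying it by hand (placeholders as above):

```shell
# Run from your local machine while password login still works
ssh-copy-id -i <key-file>.pub <user-name>@<server-IP>
```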

Now edit the SSH daemon config to require the SSH key: open /etc/ssh/sshd_config and set:

```
PasswordAuthentication no  # was: yes
PubkeyAuthentication yes   # was: no
PermitRootLogin no         # was: yes
```

Then restart the SSH service:

```
service ssh restart
```
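Before restarting, it's worth validating the edited config so a typo doesn't lock you out of the server; sshd has a test mode for exactly this:

```shell
# Exits non-zero and prints the offending line if sshd_config is invalid
sudo sshd -t
```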

Note: these are minimal SSH setup steps; depending on the environment, extra measures should be taken to make SSH more robust against outside attacks.

Step 3: Set up and enable the firewall

Now disconnect from your server and reconnect as the new user:

ssh -i "<private-server-key>" <user-name>@<server-IP>

Set up the firewall to allow SSH, HTTP, and HTTPS connections, and also expose port 16443, which will be used to access the Kubeapi server.

```
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw allow 16443
sudo ufw enable
```

Step 4: Install and set up microk8s

```
sudo snap install microk8s --classic --channel=1.35 # Check for current version!
sudo microk8s enable rbac # optional; might influence some following steps
sudo microk8s enable ingress
sudo microk8s enable dns
sudo microk8s enable hostpath-storage
sudo microk8s enable cert-manager
sudo microk8s start
```
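The feature list at the top mentions MetalLB, which the commands above do not enable. On a single-node VPS, one option is to hand MetalLB a one-address pool consisting of the VPS public IP (replace the placeholder with your actual IP):

```shell
# Enable the MetalLB addon with a single-address range
sudo microk8s enable metallb:<VPS-IP>-<VPS-IP>
```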

If you want to use the current user to manage MicroK8s (not recommended for production clusters):

sudo usermod -a -G microk8s <your-user>

Now we create a default cluster issuer: letsencrypt-prod:

```
microk8s kubectl apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: <your email>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: lets-encrypt-private-key
    solvers:
    - http01:
        ingress:
          class: traefik
EOF
```

Note that, compared to the last guide, we are using 'traefik' as the default ingress class.
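To sketch how the issuer gets used later: an Ingress annotated with the cluster issuer makes cert-manager request and renew a certificate automatically. The app name, host, and service below are placeholders, not part of this guide's setup:

```yaml
# Hypothetical example; replace names and hosts with your own
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: traefik
  tls:
  - hosts:
    - app.<your-domain>
    secretName: my-app-tls   # cert-manager stores the issued cert here
  rules:
  - host: app.<your-domain>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```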

We also need to enable some additional firewall rules for the Calico interfaces (see this GitHub comment):

```
sudo ufw allow in on cali+
sudo ufw allow out on cali+
```

To check for general configuration issues, run microk8s inspect. It can surface easy-to-fix configuration warnings, such as:

To enable ip forwarding:

sudo iptables -P FORWARD ACCEPT
sudo apt-get install iptables-persistent
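You can check whether forwarding is already active by reading the kernel setting directly; iptables-persistent then keeps the rule across reboots:

```shell
# Prints 1 when IP forwarding is enabled, 0 when it is not
sysctl -n net.ipv4.ip_forward
```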

Now, to configure Kubeapi server access through the host DNS, edit /var/snap/microk8s/current/certs/csr.conf.template:

```
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = k8s.<your-domain>
```

Refresh the server certificate:

sudo microk8s refresh-certs --cert server.crt

Export the updated kubeconfig with microk8s config > kubeconfig.yaml. Open kubeconfig.yaml and edit the server: entry.

```
...
- cluster:
    certificate-authority-data: XXXXXXXXCENSOREDXXXXXXX
    server: <pub-IP>   # <---- put k8s.<domain> here
...
```

Now you can connect to the Kubernetes cluster from your local machine:

KUBECONFIG="./kubeconfig.yaml" kubectl get pods --all-namespaces

Step 5 (optional): Set up Tailscale

```
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --auth-key=<your-tailscale-auth-key>
```

  1. Create a Tailscale service for the cluster.
  2. Serve the new Kubernetes node:

```
sudo tailscale serve --bg --service=svc:<your-service-name> --tcp 16443 tcp://127.0.0.1:16443
Serve started and running in the background.
To disable the proxy, run: tailscale serve --service=svc:<your-service-name> --tcp=16443 off
To remove config for the service, run: tailscale serve clear svc:<your-service-name>
```

  3. Add the Tailscale service IP to /var/snap/microk8s/current/certs/csr.conf.template as well (then run sudo microk8s refresh-certs --cert server.crt again).
  4. (optional) Close the Kubernetes port on the public firewall: sudo ufw deny 16443

Some Tips

Chore: Check MicroK8s certificate expiry!

sudo microk8s refresh-certs -c

A bad example (the certificates have already expired):

```
sudo microk8s refresh-certs -c
The CA certificate will expire in 2859 days.
The server certificate will expire in -19 days.
The front proxy client certificate will expire in -19 days.
```

refresh them:

```
sudo microk8s refresh-certs -e server.crt
sudo microk8s refresh-certs -e front-proxy-client.crt
```

Enabling tab completion for microk8s

vim ~/.bash_aliases

```
# add the following
alias kubectl='microk8s kubectl'
export LC_ALL=en_US.utf-8
export LANG=en_US.utf-8
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
```

Patching External Traffic Policies

kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'

Testing the preservation of client IP addresses

kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
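Assuming source-ip-app is the echoserver deployment from the Kubernetes "Using Source IP" tutorial, you can verify the policy by curling the service's external IP and checking the client address it reports back:

```shell
# With externalTrafficPolicy: Local, client_address should be your real
# public IP rather than a cluster-internal address
curl -s http://<loadbalancer-external-ip> | grep client_address
```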

Conclusion

You now have a Kubernetes cluster with services that can be exposed via ingress, have a load balancer, and cert manager ready. You should be able to access these services through a host domain.

The possibilities are endless from here ;^)

You even have the option to connect this to multiple VPSs to extend your cluster. More on that in a future blog post.

