A production-style setup to publish private k3s services through a public microk8s edge cluster using the Tailscale Kubernetes Operator.
I use this pattern when I want to keep workloads private in a k3s cluster, but still expose selected services to the public internet through a separate public microk8s edge cluster.
This is a full, production-style walkthrough for exposing a private service across two clusters with the Tailscale Kubernetes Operator.
It is tailored to this exact topology:
- Private k3s cluster: kubeconfig-node-A.yaml
- Public microk8s edge cluster: kubeconfig-node-B.yaml
- Private service: <app-service> in k3s namespace <app-namespace>
- Public hostname: example.server.com
- TLS: letsencrypt-prod using ACME HTTP-01

Goal: serve the private service publicly at https://example.server.com.

Data path:
- Client requests https://example.server.com.
- The public microk8s ingress (IngressClass=public) receives the request.
- The ingress routes to the <app-service>-tailnet service in microk8s.
- The Tailscale egress proxy forwards over the tailnet to <app-hostname-node-a>.<tailnet>.ts.net.

This pattern works for many NAT/internal services, not only one specific app.
Prerequisites:
- example.server.com must point to the public microk8s ingress IP.
- cert-manager installed and healthy.
- ClusterIssuer named letsencrypt-prod configured for HTTP-01.

Check:
export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl get pods -n cert-manager
kubectl get clusterissuer letsencrypt-prod
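As an optional extra check (not part of the original prerequisites, and any DNS tool works), confirm the hostname already resolves to the public microk8s node before requesting certificates:

dig +short example.server.com
# should print the public IP the microk8s ingress listens on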
In this environment, the ingress class is public (not nginx).
Check:
export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl get ingressclass
kubectl -n ingress get pods
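If the public ingress class is missing, the usual cause on microk8s is that the ingress addon is not enabled. A minimal sketch, assuming stock microk8s (the addon registers the public ingress class):

# run on the microk8s host
microk8s enable ingress
# then re-check
kubectl get ingressclass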
The Tailscale Kubernetes Operator must be installed in both clusters.
Check:
export KUBECONFIG=./kubeconfig-node-A.yaml
kubectl get pods -n tailscale
export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl get pods -n tailscale
Both should show the operator pod running.
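If either cluster is missing the operator, the upstream Helm chart is the usual install path. A sketch, assuming an OAuth client created in the Tailscale admin console (the two values below are placeholders you must supply):

helm repo add tailscale https://pkgs.tailscale.com/helmcharts
helm repo update
helm upgrade --install tailscale-operator tailscale/tailscale-operator \
  --namespace tailscale --create-namespace \
  --set-string oauth.clientId=<oauth-client-id> \
  --set-string oauth.clientSecret=<oauth-client-secret>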
You can expose an existing ClusterIP service via annotations.
For service <app-service> in namespace <app-namespace>:
export KUBECONFIG=./kubeconfig-node-A.yaml
kubectl annotate service <app-service> -n <app-namespace> \
tailscale.com/expose=true \
tailscale.com/hostname=<app-hostname-node-a> \
--overwrite
Verify:
export KUBECONFIG=./kubeconfig-node-A.yaml
kubectl -n <app-namespace> get svc <app-service> -o yaml
kubectl -n tailscale get pods,svc,secrets
Expected:
- The service carries the annotation tailscale.com/expose: "true".
- A proxy pod appears in the tailscale namespace (for example ts-<app-service>-xxxxx-0).

Get the tailnet FQDN from the proxy secret:
export KUBECONFIG=./kubeconfig-node-A.yaml
kubectl -n tailscale get secret <proxy-secret-name> -o jsonpath='{.data.device_fqdn}' | base64 -d
Example value:
<app-hostname-node-a>.taild4f875.ts.net

Keep this FQDN. It is the inter-cluster target.
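Optionally, confirm the new tailnet device is up before touching the public cluster. From any machine on the same tailnet with the tailscale CLI installed (an assumption, not part of this setup):

tailscale status | grep <app-hostname-node-a>
tailscale ping <app-hostname-node-a>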
Use Tailscale cluster egress mode in microk8s. This is the critical part.
Do not use a plain ExternalName directly to <name>.ts.net without Tailscale egress annotations. The ingress controller may fail DNS resolution and return 504.
Create app-proxy.yaml in the public cluster context with this content:
apiVersion: v1
kind: Namespace
metadata:
  name: <app-namespace>
---
apiVersion: v1
kind: Service
metadata:
  name: <app-service>-tailnet
  namespace: <app-namespace>
  annotations:
    tailscale.com/tailnet-fqdn: <app-hostname-node-a>.taild4f875.ts.net
spec:
  type: ExternalName
  externalName: placeholder
  ports:
    - name: http
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <app-service>-public
  namespace: <app-namespace>
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: public
  tls:
    - hosts:
        - example.server.com
      secretName: example-server-com-tls
  rules:
    - host: example.server.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <app-service>-tailnet
                port:
                  number: 80
Apply:
export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl apply -f app-proxy.yaml
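A compact way to watch the operator take over the service after the apply (a sketch; plain kubectl, nothing extra assumed):

export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl -n <app-namespace> get svc <app-service>-tailnet -o jsonpath='{.spec.externalName}{"\n"}'
# re-run until this prints a ts-...tailscale.svc.cluster.local name instead of placeholder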
Why externalName: placeholder is correct:
The Tailscale operator rewrites spec.externalName to an internal service it creates in the tailscale namespace, so the placeholder value only exists until the operator reconciles the service.

Run these checks in order.
export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl -n <app-namespace> get svc <app-service>-tailnet -o yaml
Expected:
- spec.externalName changed from placeholder to something like ts-<app-service>-tailnet-xxxxx.tailscale.svc.cluster.local.
- status.conditions includes TailscaleProxyReady=True.

export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl -n tailscale get pods,svc,secrets
Expected:
- Pod ts-<app-service>-tailnet-xxxxx-0 is Running.

export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl -n <app-namespace> get ingress <app-service>-public -o wide
kubectl -n <app-namespace> get certificate,certificaterequest,order,challenge
Expected:
- The ingress shows class public.
- Certificate example-server-com-tls is Ready=True.

export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl -n <app-namespace> run curlcheck --image=curlimages/curl:8.8.0 --restart=Never --rm -i --command -- sh -c "curl -sS -o /dev/null -w '%{http_code}\n' http://<app-service>-tailnet.<app-namespace>.svc.cluster.local"
Expected output:
200

curl -I https://example.server.com
Expected:
- 200 (or the app-specific expected code) and a valid TLS chain.

504 Gateway Time-out
Most common causes:
- Wrong ingress class (nginx instead of public in this microk8s setup).
- Plain ExternalName -> <name>.ts.net without the tailscale.com/tailnet-fqdn egress annotation.

Check ingress logs:
export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl -n ingress logs <nginx-ingress-controller-pod> --tail=200
If you see repeated DNS errors for the tailnet hostname or for placeholder, your service wiring is incorrect or still converging.
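To separate a DNS problem from a Tailscale problem, resolve the backend name from inside the public cluster. A sketch following the same throwaway-pod pattern as the curl check above:

export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl -n <app-namespace> run dnscheck --image=busybox:1.36 --restart=Never --rm -i --command -- nslookup <app-service>-tailnet.<app-namespace>.svc.cluster.local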
Check:
export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl -n <app-namespace> get certificate,certificaterequest,order,challenge
kubectl -n <app-namespace> describe ingress <app-service>-public
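If the challenge never completes, probing the HTTP-01 path from outside helps tell reachability problems apart from issuer problems. The token path below is made up; a 404 from the ingress still proves the path is reachable, a timeout does not:

curl -i http://example.server.com/.well-known/acme-challenge/test-token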
Common causes:
- example.server.com DNS not pointing to the public ingress endpoint.
- Wrong cluster-issuer name.

Cause:
- spec.tls[].secretName mismatched with the cert-manager-produced secret.

Fix:
- Ensure Ingress.spec.tls.secretName equals the certificate secret name (for example example-server-com-tls).

Check:
export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl -n tailscale logs deploy/operator --tail=300
If logs show no egress reconciliation for your service, ensure:
- The service type is ExternalName.
- The annotation tailscale.com/tailnet-fqdn or tailscale.com/tailnet-ip is set.
- The service is in the expected namespace (<app-namespace>).

Recommendations:
- Use a unique Tailscale hostname per exposed service (<app1-hostname-node-a>, <app2-hostname-node-a>).
- Keep private k3s services ClusterIP only; avoid a public LB there.

To expose another private k3s service through the same public microk8s cluster:
- Annotate the k3s service with tailscale.com/expose=true and a unique hostname.
- Create an ExternalName service in microk8s with tailscale.com/tailnet-fqdn pointing to that hostname.
- Create an Ingress with ingressClassName: public.
- Add the annotation cert-manager.io/cluster-issuer: letsencrypt-prod.

This gives a repeatable, encrypted cross-cluster edge publishing model.
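A condensed sketch for a hypothetical second app; every name here (<app2-service>, <app2-namespace>, <app2-hostname-node-a>, app2.server.com) is a placeholder, not part of this setup:

# k3s side: expose the second service on the tailnet
export KUBECONFIG=./kubeconfig-node-A.yaml
kubectl annotate service <app2-service> -n <app2-namespace> \
  tailscale.com/expose=true \
  tailscale.com/hostname=<app2-hostname-node-a> \
  --overwrite

# microk8s side: egress service + public ingress
export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: <app2-namespace>
---
apiVersion: v1
kind: Service
metadata:
  name: <app2-service>-tailnet
  namespace: <app2-namespace>
  annotations:
    tailscale.com/tailnet-fqdn: <app2-hostname-node-a>.taild4f875.ts.net
spec:
  type: ExternalName
  externalName: placeholder
  ports:
    - name: http
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <app2-service>-public
  namespace: <app2-namespace>
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: public
  tls:
    - hosts:
        - app2.server.com
      secretName: app2-server-com-tls
  rules:
    - host: app2.server.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <app2-service>-tailnet
                port:
                  number: 80
EOF

DNS for app2.server.com must also point at the public ingress before the certificate can issue.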
Source cluster (k3s):
export KUBECONFIG=./kubeconfig-node-A.yaml
kubectl -n <app-namespace> get svc <app-service> -o yaml
kubectl -n tailscale get pods,svc,secrets
Public cluster (microk8s):
export KUBECONFIG=./kubeconfig-node-B.yaml
kubectl -n <app-namespace> get svc,ingress
kubectl -n <app-namespace> get certificate,certificaterequest,order,challenge
kubectl -n tailscale get pods,svc,secrets
kubectl -n tailscale logs deploy/operator --tail=200
kubectl -n ingress logs <nginx-ingress-controller-pod> --tail=200
Public endpoint test:
curl -I https://example.server.com
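For a closer look at the served certificate than curl -I gives (optional; any TLS inspection tool will do):

openssl s_client -connect example.server.com:443 -servername example.server.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates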