I needed a PostgreSQL setup on Kubernetes that is simple to stand up, end-to-end reproducible, and comes with a sensible user split: an owner role for the app plus a separate read-only role.
For this, the CloudNativePG Helm chart is one of the cleanest approaches I've found so far (especially after the Bitnami PostgreSQL Helm chart moved closed source).
So this is the exact setup I currently use as a base template. Everything below is copy/paste-ready and end-to-end reproducible.
Names used throughout this post:

- Namespace: app-db
- Cluster: app-postgres
- Database: app
- Owner user: appuser (password apppass)
- Read-only user: readonly (password readonlypass)
- Test table: public.items

First, point kubectl at your cluster:

export KUBECONFIG=./kubeconfig.yaml
kubectl get nodes
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update
helm upgrade --install cnpg cnpg/cloudnative-pg --namespace cnpg-system --create-namespace
kubectl get pods -n cnpg-system
Wait until the operator pod is Running before continuing.
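If you prefer not to poll by hand, you can wait on the operator Deployment directly. The Deployment name below assumes the Helm release name cnpg from the command above; double-check with kubectl get deploy -n cnpg-system:

# Block until the operator rollout completes (assumes release name "cnpg";
# verify the Deployment name with: kubectl get deploy -n cnpg-system)
kubectl -n cnpg-system rollout status deployment/cnpg-cloudnative-pg --timeout=300s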
mkdir -p cnpg-app-db
cat > cnpg-app-db/namespace.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: app-db
EOF
cat > cnpg-app-db/db-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: app-db-user
  namespace: app-db
type: kubernetes.io/basic-auth
stringData:
  username: appuser
  password: apppass
EOF
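The apppass value is obviously demo-only. A hedged variant if you want a random password instead of a hardcoded one (same Secret shape, generated via kubectl instead of a heredoc; openssl is assumed to be available):

# Generate the same basic-auth Secret with a random password instead of a
# hardcoded one. --dry-run=client -o yaml writes the manifest without applying it.
kubectl create secret generic app-db-user \
  --namespace app-db \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=appuser \
  --from-literal=password="$(openssl rand -base64 24)" \
  --dry-run=client -o yaml > cnpg-app-db/db-secret.yaml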
cat > cnpg-app-db/postgres-cluster.yaml <<'EOF'
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-postgres
  namespace: app-db
spec:
  instances: 2
  imageName: ghcr.io/cloudnative-pg/postgresql:17
  storage:
    size: 10Gi
  bootstrap:
    initdb:
      database: app
      owner: appuser
      secret:
        name: app-db-user
      postInitApplicationSQL:
        - CREATE USER readonly WITH PASSWORD 'readonlypass';
        - GRANT CONNECT ON DATABASE app TO readonly;
        - CREATE TABLE IF NOT EXISTS public.items (id SERIAL PRIMARY KEY, name TEXT NOT NULL, created_at TIMESTAMPTZ DEFAULT NOW());
        - ALTER TABLE public.items OWNER TO appuser;
        - GRANT ALL PRIVILEGES ON TABLE public.items TO appuser;
        - INSERT INTO public.items (name) VALUES ('seed-row-from-bootstrap');
        - GRANT USAGE ON SCHEMA public TO readonly;
        - GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
        - ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly;
EOF
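One PostgreSQL subtlety worth flagging (standard Postgres behavior, not CNPG-specific): ALTER DEFAULT PRIVILEGES only covers objects created by the role that ran it, which during bootstrap is the postgres superuser. Tables that appuser creates later won't automatically be readable by readonly unless you also set defaults for appuser, for example:

# Run once as the superuser inside the primary pod: make future tables
# created by appuser automatically SELECT-able by readonly.
kubectl exec -it -n app-db app-postgres-1 -- psql -U postgres -d app \
  -c "ALTER DEFAULT PRIVILEGES FOR ROLE appuser IN SCHEMA public GRANT SELECT ON TABLES TO readonly;"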
cat > cnpg-app-db/test_db_query.py <<'EOF'
#!/usr/bin/env python3
import argparse
import os
import sys
from urllib.parse import quote_plus

import psycopg


def build_dsn(args: argparse.Namespace) -> str:
    # An explicit DSN wins; otherwise assemble one from the individual parts.
    if args.dsn:
        return args.dsn
    user = quote_plus(args.user)
    password = quote_plus(args.password)
    host = args.host
    port = args.port
    dbname = args.database
    return f"postgresql://{user}:{password}@{host}:{port}/{dbname}"


def main() -> int:
    parser = argparse.ArgumentParser(description="Run a test query against CloudNativePG")
    parser.add_argument("--host", default=os.getenv("PGHOST", "127.0.0.1"), help="PostgreSQL host")
    parser.add_argument("--port", default=os.getenv("PGPORT", "5432"), help="PostgreSQL port")
    parser.add_argument("--database", default=os.getenv("PGDATABASE", "app"), help="Database name")
    parser.add_argument("--user", default=os.getenv("PGUSER", "appuser"), help="Database user")
    parser.add_argument(
        "--password",
        default=os.getenv("PGPASSWORD", "apppass"),
        help="Database password",
    )
    parser.add_argument(
        "--dsn",
        default=os.getenv("DATABASE_URL"),
        help="Full PostgreSQL DSN. Overrides host/user/password options when set.",
    )
    args = parser.parse_args()

    dsn = build_dsn(args)
    try:
        with psycopg.connect(dsn) as conn:
            with conn.cursor() as cur:
                # Sanity check: who are we, where are we, which server version?
                cur.execute("SELECT current_user, current_database(), version()")
                user_name, db_name, version = cur.fetchone()
                print(f"connected_as={user_name}")
                print(f"database={db_name}")
                print(f"version={version}")
                # Read back the bootstrap-seeded table.
                cur.execute("SELECT id, name, created_at FROM public.items ORDER BY id")
                rows = cur.fetchall()
                print(f"items_count={len(rows)}")
                for row in rows:
                    print(f"item id={row[0]} name={row[1]} created_at={row[2]}")
    except Exception as exc:
        print(f"connection/query failed: {exc}", file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
EOF
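Since the script falls back to DATABASE_URL, you can also drive it purely from the environment once a port-forward (shown below) is running:

# Same test, driven by DATABASE_URL instead of flags
# (assumes the rw port-forward from the section below is active).
export DATABASE_URL="postgresql://appuser:apppass@127.0.0.1:5432/app"
python cnpg-app-db/test_db_query.py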
If you've tested a few times and want a clean restart, run the block below. This recreates the setup from the manifests above.
kubectl delete cluster app-postgres -n app-db --ignore-not-found=true
kubectl delete pvc -n app-db --all --ignore-not-found=true
kubectl delete secret app-db-user -n app-db --ignore-not-found=true
kubectl apply -f cnpg-app-db/namespace.yaml
kubectl apply -f cnpg-app-db/db-secret.yaml
kubectl apply -f cnpg-app-db/postgres-cluster.yaml
kubectl wait --for=condition=Ready cluster/app-postgres -n app-db --timeout=600s
kubectl get cluster -n app-db
kubectl get pods -n app-db
kubectl get svc -n app-db
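If you have the cnpg kubectl plugin installed (optional, available via krew), its status view sums this up nicely:

# Optional: richer cluster status via the CNPG kubectl plugin.
kubectl cnpg status app-postgres -n app-db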
Expected result:
- app-postgres shows healthy and ready instances
- app-postgres-1 and app-postgres-2 are Running
- services app-postgres-rw, app-postgres-ro, and app-postgres-r exist

psql test in the pod:

kubectl exec -it -n app-db app-postgres-1 -- psql -U appuser -d app
Then run:
SELECT current_user, current_database();
SELECT * FROM public.items;
\q
Local Python test (psycopg):

python3 -m venv .venv
source .venv/bin/activate
pip install "psycopg[binary]"
Write test as the app user (appuser) via the RW service.

Terminal 1:
kubectl port-forward -n app-db svc/app-postgres-rw 5432:5432
Terminal 2:
source .venv/bin/activate
python cnpg-app-db/test_db_query.py --host 127.0.0.1 --port 5432 --database app --user appuser --password apppass
Read-only test (readonly) via the RO service.

Terminal 1:
kubectl port-forward -n app-db svc/app-postgres-ro 5433:5432
Terminal 2:
source .venv/bin/activate
python cnpg-app-db/test_db_query.py --host 127.0.0.1 --port 5433 --database app --user readonly --password readonlypass
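To double-check the grants themselves, try an INSERT as readonly. I'm deliberately pointing at the rw port-forward (5432, from the previous step) here, so a failure comes from missing privileges rather than from the replica being in read-only mode; a local psql client is assumed:

# Expected to FAIL with a "permission denied for table items" error:
# readonly has SELECT but no INSERT privilege on public.items.
PGPASSWORD=readonlypass psql -h 127.0.0.1 -p 5432 -U readonly -d app \
  -c "INSERT INTO public.items (name) VALUES ('should-fail');"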
Connection strings for reference, in-cluster:

postgresql://appuser:apppass@app-postgres-rw.app-db.svc.cluster.local:5432/app
postgresql://readonly:readonlypass@app-postgres-ro.app-db.svc.cluster.local:5432/app

And via local port-forward:

postgresql://appuser:apppass@127.0.0.1:5432/app
postgresql://readonly:readonlypass@127.0.0.1:5433/app
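For an in-cluster consumer, one hedged pattern is to put the rw DSN into its own Secret and inject it as DATABASE_URL (the Secret name app-dsn is just an example):

# Hypothetical example: store the in-cluster DSN in a Secret so application
# Deployments can consume it via a secretKeyRef as DATABASE_URL.
kubectl create secret generic app-dsn -n app-db \
  --from-literal=DATABASE_URL="postgresql://appuser:apppass@app-postgres-rw.app-db.svc.cluster.local:5432/app"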
Separating rw and ro service usage early helps avoid accidental write paths later in app code.

That's it. From here you have a reproducible CloudNativePG Postgres setup with:

- a two-instance cluster managed by the CNPG operator
- an owner role (appuser) and a read-only role (readonly)
- dedicated rw / ro / r services
- a seeded public.items table and a small Python smoke test
If you want to keep using your domain-based access (test-db.<your-domain>.com) with a stronger trust setup across clients, a DNS-01 cert-manager flow is a good optional add-on.
At minimum you need cert-manager with a DNS-01-capable ClusterIssuer.

I still keep this optional here, because CNPG already gives you working Postgres TLS out of the box.
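For completeness, a minimal sketch of what that could look like with cert-manager and a Cloudflare DNS-01 solver (the email, token Secret, and issuer name are placeholders, and your DNS provider block will differ):

cat > cnpg-app-db/clusterissuer.yaml <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01            # placeholder name
spec:
  acme:
    email: you@example.com           # placeholder
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token   # Secret you create separately
              key: api-token
EOF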
If you have the Tailscale operator on both clusters, you can expose this DB quickly via your tailnet and consume it from another cluster using a simple ExternalName service.
High-level mini flow:
- annotate the rw service with tailscale.com/expose=true
- the service becomes reachable at a tailnet hostname (e.g. test-db.<tailnet>.ts.net)
- in the consuming cluster, create an ExternalName service pointing at that hostname (see the sketch below)

Useful when you just want quick private cross-cluster connectivity without extra public routing work.
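A hedged sketch of both sides (the tailscale.com/hostname annotation and the exact ts.net name depend on your operator setup and tailnet):

# Cluster A: expose the rw service on the tailnet.
kubectl annotate svc app-postgres-rw -n app-db \
  tailscale.com/expose=true tailscale.com/hostname=test-db

# Cluster B: consume it through an ExternalName service.
# Replace <tailnet> with your actual tailnet name first.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: app-postgres
  namespace: app-db
spec:
  type: ExternalName
  externalName: test-db.<tailnet>.ts.net
EOF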
Cheers, Tim