Opencode: Usability with Local LLMs on an iGPU with 128 GB VRAM: My Tests
Testing and configuring opencode for use with local LLMs