Opencode: Usability with Local LLMs on an iGPU with 128GB VRAM — My Tests
Testing and configuring opencode for use with local LLMs