Tools & Frameworks
Ollama, ComfyUI, LM Studio, llama.cpp, Conda, Docker
8 episodes
#2099: One Pi, Two Screens: The Isolation Playbook
Stop your dashboard and Kodi from fighting over the same screen. Here’s how to split one Pi into two reliable workspaces.
#2040: The AI Inference Engine Rebellion
Why run LLMs locally? We break down Ollama, llama.cpp, vLLM, and llamafile—and when to use each.
#2038: The Self-Hosted AI Agent Buyer’s Guide
LobeHub vs. Dify vs. n8n: We break down the chaotic landscape of local AI agents to find the right "brain" for your workflow.
#2019: Local AI vs Cloud AI: The Agent Identity Crisis
Your desktop is becoming a life support system for AI agents. We explore the sharp trade-offs between local-first and cloud-native architectures.
#1870: Building a Sandbox for Agentic AI
Learn how to safely build and test autonomous AI agents using a disposable VPS, Docker containers, and secure networking.
#1847: The Home Lab Blackout: Fixing Servers From a Beach
Your server is down and you're miles away. Learn the three simple checks that keep your home lab alive, and how to get back in when the front door is locked.
#1807: Why GPU Containers Force You to Build
Docker promised "run anywhere," but GPU images make you compile for hours. Here’s why the abstraction breaks down.
#1754: From Ollama to Agentic CLIs: The Rise of the AI Harness
Explore the evolution from local LLMs to modern agentic CLIs, focusing on the "harness" that gives models context, tools, and autonomy.