Local AI

Tools & Frameworks

Ollama, ComfyUI, LM Studio, llama.cpp, Conda, Docker

8 episodes

#2099: One Pi, Two Screens: The Isolation Playbook

Stop your dashboard and Kodi from fighting over the same screen. Here’s how to split one Pi into two reliable workspaces.

diy, home-lab, operating-systems

#2040: The AI Inference Engine Rebellion

Why run LLMs locally? We break down Ollama, llama.cpp, vLLM, and llamafile—and when to use each.

local-ai, open-source, ai-inference

#2038: The Self-Hosted AI Agent Buyer’s Guide

LobeHub vs. Dify vs. n8n: We break down the chaotic landscape of local AI agents to find the right "brain" for your workflow.

local-ai, ai-agents, smart-home

#2019: Local AI vs Cloud AI: The Agent Identity Crisis

Your desktop is becoming a life support system for AI agents. We explore the sharp trade-offs between local-first and cloud-native architectures.

local-ai, ai-agents, edge-computing

#1870: Building a Sandbox for Agentic AI

Learn how to safely build and test autonomous AI agents using a disposable VPS, Docker containers, and secure networking.

ai-agents, local-ai, edge-computing

#1847: The Home Lab Blackout: Fixing Servers From a Beach

Your server is down and you're miles away. Learn the three simple checks that keep your home lab alive and how to get back in when the front door is locked.

home-lab, hardware-engineering, network-security

#1807: Why GPU Containers Force You to Build

Docker promised "run anywhere," but GPU images make you compile for hours. Here’s why the abstraction breaks down.

gpu-acceleration, docker, dependency-management

#1754: From Ollama to Agentic CLIs: The Rise of the AI Harness

Explore the evolution from local LLMs to modern agentic CLIs, focusing on the "harness" that gives models context, tools, and autonomy.

local-ai, ai-agents, rag