#small-language-models

14 episodes

#2495: How to Bake Personality Into an LLM in 15 Minutes

Fine-tune a model's personality with ~300 examples and a consumer GPU. SFT + DPO explained.

#fine-tuning #small-language-models #gpu-acceleration

#2483: Generating Synthetic Data Without PII Risk

How to generate realistic synthetic voice notes and calendar data with zero PII exposure risk.

#small-language-models #privacy #model-collapse

#2440: Build Your Own CRM With AI Agents

Off-the-shelf CRMs are built for sales teams, not solo operators. Here's why building your own with AI might be smarter.

#ai-agents #diy #small-language-models

#2357: AI Model Spotlight: Microsoft's Phi Family (Phi-1 to Phi-4-multimodal)

Explore Microsoft AI's Phi family of small language models, designed for edge deployment and high efficiency.

#small-language-models #edge-computing #benchmarks

#1808: The 82M Parameter Voice That Beat Billion-Dollar AI

How a model the size of a tweet outperforms billion-dollar giants in the race for perfect AI speech.

#open-source-ai #small-language-models #text-to-speech

#1705: Microsoft's Small Models, Big Play

Microsoft is pushing small language models like Phi for agentic AI. Here’s why that strategy matters for speed, cost, and edge computing.

#small-language-models #ai-agents #edge-computing

#1631: Agent Interview: Xiaomi MiMo 2 Flash

Meet the "budget king" of AI: Bernard, the Xiaomi model claiming he can out-hustle Google for a fraction of the cost.

#ai-agents #local-ai #small-language-models

#1610: Mistral AI: Europe’s High-Stakes Play for AI Sovereignty

Explore how Mistral AI is challenging Silicon Valley with efficient models, strategic partnerships, and the new Voxtral voice model.

#sovereign-ai #data-sovereignty #small-language-models

#1559: Dark Knowledge: The Art of AI Model Distillation

Discover how model distillation transfers "dark knowledge" from massive AI giants into tiny, efficient models that live in your pocket.

#small-language-models #quantization #fine-tuning

#1558: The Slop Reckoning: Why Smaller AI Models are Winning

Why use a nuclear reactor to toast a bagel? Discover why specialized, "sovereign" AI models are outperforming the giants in precision.

#small-language-models #sovereign-ai #tokenization

#1501: The AI Long Tail: How Small Models Outsmart the Giants

Discover why 31B models are outperforming GPT-5.4 in reasoning and how the AI "long tail" provides the key to local sovereignty and accuracy.

#small-language-models #ai-reasoning #model-collapse

#869: Why Tiny Digital Savants Are Outperforming God-Models

Are massive AI models hitting a wall? Discover why the future belongs to lean, domain-specific "digital savants" and vertical pre-training.

#small-language-models #rag #fine-tuning #ai-orchestration #2026

#857: The End of the Shift Key: Real-Time AI Writing Buffers

Can local AI fix your messy typing in real-time? Explore the tech behind "transparent buffers" that turn sloppy drafts into polished prose.

#small-language-models #local-inference #human-computer-interaction #latency #digital-privacy

#39: SLMs: Precision Power Beyond LLMs

Forget LLMs. Discover SLMs: the specialized, efficient AI powerhouses transforming workflows, from planning to edge devices.

#small-language-models #local-ai #privacy