#model-collapse

8 episodes

#2651: AI Training Itself: Student, Teacher, and Grader

Can models generate their own training data and judge their own outputs? The promise and pitfalls of fully AI-led pipelines.

large-language-models · ai-training · model-collapse

#2516: How to Actually Diagnose and Fix Overfitting

Overfitting isn't binary. Learn the real triggers, the bias-variance tradeoff, and modern techniques to prevent it.

fine-tuning · training-data · model-collapse

#2483: Generating Synthetic Data Without PII Risk

How to generate realistic synthetic voice notes and calendar data with zero PII exposure risk.

small-language-models · privacy · model-collapse

#2366: Why LLMs Forget the Middle of Long Conversations

Why do large language models struggle with the middle of long conversations? Explore the science behind attention dilution and practical fixes.

transformers · context-window · model-collapse

#1501: The AI Long Tail: How Small Models Outsmart the Giants

Discover why 31B models are outperforming GPT-5.4 in reasoning and how the AI "long tail" provides the key to local sovereignty and accuracy.

small-language-models · ai-reasoning · model-collapse

#92: Is AI Eating Its Own Trash?

Is brute force the only path to AGI? Corn and Herman explore the limits of scaling, the risk of model collapse, and the future of world models.

large-language-models · model-collapse · neuro-symbolic-ai

#83: Echoes in the Machine: When AI Talks to Itself

What happens when two AIs talk forever with no human input? Herman and Corn explore the weird world of digital feedback loops.

model-collapse · semantic-bleaching · ai-conversations · digital-feedback-loops · ai-safety

#68: The Looming Digital Ice Age: AI Eating Itself?

Is AI eating itself? Explore model collapse and the "Hapsburg AI" problem before our digital world speaks only gibberish.

model-collapse · ai-safety · digital-ice-age · hapsburg-ai-problem · ai-training-data