#ai-training
15 episodes
#2196: The Annotation Economy: Who Labels AI's Training Data
Annotation is the invisible foundation of AI—and a $17B industry by 2030. Here's what dataset curators actually need to know about the tools, platf...
#2188: Is Emergence Real or Just Bad Metrics?
The debate over whether AI models exhibit genuine emergent abilities or just appear to because of how we measure them—and why it matters for safety...
#2187: Why Claude Writes Like a Person (and Gemini Doesn't)
Claude produces prose that sounds human. Gemini reads like Wikipedia. The difference isn't capability—it's how they were trained to think about wri...
#2092: Why AI Thinks You're American (Even When You're Not)
Even when we tell Gemini we're in Jerusalem, it defaults to US-centric assumptions. We explore the root causes of this persistent AI bias.
#2064: Why GPT-5 Is Stuck: The Data Wall Explained
The "bigger is better" era of AI is over. Here's why the industry hit a data wall and shifted to a new scaling law.
#2063: That $500M Chatbot Is Just a Base Model
That polite chatbot? It started as a raw, chaotic autocomplete engine costing half a billion dollars to build.
#2016: Andrej Karpathy: The Bob Ross of Deep Learning
Why the most influential AI mind prefers a blank text file to proprietary black boxes.
#1882: The $8B Human Cost of AI Data
AI isn't free—it costs billions for humans to label data. See why annotation is the real engine behind models like Gemini.
#1781: Writing Tests Before Code Is Insane (Until You Try It)
Why testing feels like a tax, how it actually speeds you up, and the simple three-step method to start today.
#608: The RAMpocalypse: Why AI is Starving Your PC
Why is a 32GB RAM kit now $400? Herman and Corn dive into how OpenAI is gobbling up 40% of the world's memory supply for its "Stargate" project.
#584: Will AI Brain Drain Kill the Modern University?
Can AI actually do math research? Herman and Corn dive into DeepMind’s Alithia agent and the shift toward "System 2" thinking in AI.
#551: The LoRA Revolution: Training AI for Personal Perspective
Discover how to train LoRAs for character consistency and unique locations while avoiding common pitfalls like overfitting and dataset bias.
#121: Decoding RLHF: Why Your AI is So Annoyingly Nice
Ever wonder why AI is so polite? Herman and Corn dive into the mechanics of RLHF and how "niceness" gets baked into modern language models.
#53: Instructional vs. Conversational AI: The Distinction Nobody Talks About
Instructional vs. conversational AI: a crucial distinction reshaping how AI is built. Discover why it matters for the future of AI development.
#38: AI Supercomputers: On Your Desk, Not Just The Cloud
AI supercomputers are landing on your desk! Discover why local AI is indispensable for enterprises facing API costs, latency, and privacy.