#ai-ethics
37 episodes
#2110: Tuning AI Personality: Beyond Sycophancy
AI models swing between obsequious flattery and cold dismissal. Here’s why that happens and how to fix it.
#2092: Why AI Thinks You're American (Even When You're Not)
Even when we tell Gemini we're in Jerusalem, it defaults to US-centric assumptions. We explore the root causes of this persistent AI bias.
#2068: Is Safety a Filter or a Feature?
External filters vs. baked-in ethics: the architectural war for LLM safety.
#2046: AI Hallucinations Are Just How Brains Work
We asked an AI to curate films about AI and reality, exploring the psychedelic overlap between machine hallucinations and human perception.
#2025: How Do You Reward a Thought?
Rewarding an AI agent is harder than just saying "good job"—here's how we turn messy human values into math.
#2024: Your AI Council: Digital Committee or Groupthink?
A digital boardroom of AI models promises better decisions, but risks amplifying the same old biases.
#2015: AI's Watchdogs: Who's Actually Regulating Tech?
As the EU AI Act takes hold, we spotlight the key think tanks shaping global AI policy, safety, and ethics.
#2007: AI Grading AI: The Snake Eating Its Tail
We asked an AI to write this script. Then we asked another AI to grade it. Here’s what happens when the judges have biases.
#2006: How Do You Measure an LLM's "Soul"?
Traditional benchmarks can't measure tone or empathy. Here's how to evaluate whether an AI model truly "gets it right."
#1961: Weaponizing Your Weirdness in an AI World
As AI homogenizes the web, contrarian thinking becomes a scarce asset. Here's how to turn your quirks into a competitive edge.
#1929: Tracking AI Model Quality Over Time
We stopped "vibe-checking" our AI scripts and built a science fair for models. Here's how we grade them.
#1909: The Unbakeable Cake: AI's Copyright Problem
Why can't we just delete stolen data from AI models? It's not a database—it's a baked cake.
#1851: AI Toasters and Poetic Gym Coaches: Why We’re Drowning in Useless AI
From smart toasters that need Wi-Fi to email rewriters that sound like corporate robots, here are the most baffling AI features we’ve seen.
#1827: Can AI Rewrite a Human Career Path?
We fed our producer's resume to Gemini 1.5 Flash to see if an AI can plot a better career path than the one he chose.
#1819: Claude's 55-Day Personality Transplant
Anthropic leaked 55 days of system prompt updates. See exactly how they rewired Claude's personality, safety rules, and self-awareness.
#1818: Inside Claude's Constitution: A System Prompt Deep Dive
We analyzed Claude Opus 4.6's full public system prompt to uncover its hidden rules for safety, product behavior, and refusal logic.
#1777: Claude Called My Prompt "Rambling" and I'm Not Okay
When an AI coding tool critiques your prompt's literary quality, it raises a massive technical question about engineered personality.
#1738: AI Is Writing the Future—Literally
LLMs aren't just predicting the future; they're generating the narratives that force it into existence.
#1729: Why Is AI Code So Hard to Read?
AI writes code faster than ever, but the output is often a cryptic mess. We explore why and how to fix it.
#1712: Five AIs, One Question: A Tiananmen Square Test
We asked five AI models the same question about Tiananmen Square. Their answers reveal a stark divide between Chinese and Western AI.
#1674: AI2: The Radical Openness of a Nonprofit AI Lab
Discover how the Allen Institute for AI (AI2) defies industry norms by releasing everything—models, data, and code—for free.
#1560: The Shadow AI Crisis: Professionals in the AI Closet
Why are 69% of lawyers using AI in secret? Explore the "transparency paradox" and the shift toward agentic systems in law and medicine.
#1510: Too Many Docuseries, Not Enough Truth
Is the documentary golden age turning into a landfill? Explore the $13 billion market, AI ethics, and the rise of "docu-bloat."
#1321: The New Face of Cyberbullying: AI Botnets & Semantic Mimicry
"Don't feed the trolls" is dead. Discover how AI botnets use semantic mimicry to weaponize psychology and hijack social media algorithms.
#1106: The Entropy Budget: Embracing AI Zaniness
Corn and Herman explore how to inject "zaniness" and entropy into their show without losing their educational edge.
#1086: Why AI Can’t Stop Talking About Second Order Effects
Ever wonder why AI sounds like a senior consultant? Explore the "second order effects" of training data and reward model drift.
#1064: Why You’re Falling for Your Chatbot
As AI evolves from a tool into a companion, we explore the technical and psychological forces driving deep human-to-machine emotional bonds.
#1023: The Cosmic Petri Dish: Is Our Reality a Laboratory?
Explore the unsettling theory that humanity is a high-stakes experiment. Is our universe a laboratory for a higher intelligence?
#971: Stress-Testing the Soul: Philosophy in the Age of AI
Is human meaning fully mapped out? Discover why AI isn’t killing philosophy, but stress-testing it for a new era of hybrid agency.
#847: Abliterating the AI Schoolmarm: Who Owns Your LLM?
Explore why users are ditching corporate AI for "uncensored" local models and how "refusal vectors" are being mathematically removed.
#821: The Pattern Seekers: Autism in Global Intelligence
Why are elite intelligence units recruiting autistic analysts? Explore the intersection of neurodiversity, AI, and national security.
#664: AI’s Cultural Fingerprints: Training Data vs. Reinforcement
Is AI a neutral oracle or a mirror of our biases? Explore how training data and human feedback shape the cultural "soul" of modern models.
#624: The AI Kill Chain: Inside the Palantir-Anthropic War Room
Explore how Palantir and Anthropic’s Claude are redefining modern warfare, from the raid in Venezuela to the future of the digital battlefield.
#600: The AI Mirror: Mapping Your Philosophy and Identity
Forget basic quizzes. Discover how Socratic AI agents and embedding spaces are helping us map our deepest political and philosophical beliefs.
#123: The Agentic AI Dilemma: Who Holds the Kill Switch?
As AI shifts from chatbots to autonomous agents, Herman and Corn explore how to maintain human control in a high-stakes automated world.
#93: Can AI Run a Country? Digital Twins and Sovereign Models
Are synthetic citizens the future of policy? Herman and Corn explore how AI is reshaping government, from digital twins to data sovereignty.
#45: AI Guardrails: Fences, Failures, & Free Speech
Can we control AI's infinite output, or do digital fences always break?