Alright, we are diving into a heavy one today. Daniel sent over a prompt asking about the shift from organized sleeper cells to the unpredictable world of lone wolf actors. It is a topic that has been sitting in the back of my mind ever since that 2025 Las Vegas incident where that individual acted completely solo, no handlers, no foreign phone calls, just a sudden, violent outburst that caught everyone off guard. It was such a departure from the high-level coordination we saw in the early 2000s.
Herman Poppleberry here, and Corn, you are hitting on the exact nerve that is twitching inside every intelligence agency right now. The old model of counter-terrorism was built on interception. You find the communication link between a handler and a cell, you break the chain, and you stop the plot. Think of it like an old wired telephone line: cut the wire, and the message never gets through. But with lone wolves, there is no wire. By the way, listeners, just a quick note that today’s episode is actually being powered by Google Gemini 3 Flash. It is helping us synthesize some of this dense research Daniel flagged for us.
It is wild to think about a script being written by the very tech we are discussing as a double-edged sword. But let’s get into the guts of this. When we talk about a lone wolf, are we talking about someone who is truly alone, or is that a bit of a misnomer? Because I have seen some 2025 research from the Center for Naval Analyses suggesting we should actually ditch the term "lone wolf" entirely.
They are absolutely onto something there. The term "lone wolf" implies a predator wandering the wilderness in total isolation, but the reality in 2026 is that these individuals are usually swimming in a very specific kind of digital ocean. They might not have a commanding officer or a local cell leader giving them orders, but they have what researchers call "moral oxygen." They are part of online communities on Discord, Telegram, or even gaming platforms where their grievances are validated every single hour of the day. They are lone actors, but they are socially saturated.
So it is less about a secret meeting in a basement and more about a kid in his bedroom with a headset on, getting cheered on by a thousand strangers across the globe. That feels much harder to track. If there is no "mission briefing," how does the radicalization actually start? Daniel mentioned this "Staircase to Terrorism" model by Fathali Moghaddam. Is that still the gold standard for understanding how someone goes from a normal citizen to a domestic threat?
It is the foundational model, yeah. Think of it as a vertical progression. The ground floor is a perceived grievance. It could be anything—economic displacement, a feeling of cultural erasure, or even personal trauma. The individual feels the world is unfair. On the first floor, they look for ways to improve their condition but feel blocked by the system. By the second floor, they start displacing their aggression onto a "target." They find a scapegoat. The third floor is where the "moral engagement" happens—they join a digital community that tells them violence is not just acceptable, but necessary.
I’m curious about that jump from the second to the third floor. Is that where the "echo chamber" effect really takes hold? Like, if I’m already looking for someone to blame, and I find a group that says, "Yes, it’s exactly who you think it is," does that speed up the climb?
It’s the validation of the bias. On the third floor, you aren't just angry anymore; you’re part of a "cause." You start to adopt the vocabulary of the group. You stop seeing the "target" as a person and start seeing them as an obstacle or an enemy. By the time you reach the fourth floor, you’ve entered a state of "categorical thinking." It’s us versus them. There is no middle ground, no nuance.
And that is where the algorithmic amplification kicks in, right? Because if I am on the first floor looking for answers, the YouTube or TikTok algorithm isn't going to show me a balanced documentary on socio-economics. It is going to show me the most high-energy, high-conflict content because that keeps me on the platform.
You have hit the nail on the head. The algorithms are essentially building the staircase for you. In 2024 and 2025, we saw a massive surge in what we call "MUU" ideologies—Mixed, Unstable, and Unclear. This is a nightmare for profilers. It is no longer just "this person is a radical Islamist" or "this person is a neo-Nazi." Now, they are cherry-picking. You might have someone who is an eco-terrorist but also deeply embedded in incel culture, while also holding some weird accelerationist views about bringing down the power grid. They are building a custom-tailored manifesto that fits their specific personal anger.
It is like a salad bar of extremism. You just take a scoop of whatever makes you feel the most justified in your rage. But how do you even begin to profile that? If the "flavor" of the extremism is constantly shifting, how do you know what to look for?
That’s the trillion-dollar question. Traditional profiling looked for specific symbols or literature. Today, investigators have to look for the process of radicalization rather than the content. It doesn't matter if they are reading Marx or Manson; what matters is the isolation, the fixation, and the dehumanization of others. We saw this in the 2025 Las Vegas case you mentioned earlier. That was a watershed moment because the guy had zero criminal record and no known ties to any group. How does someone like that go from zero to sixty without triggering a single red flag?
Well, the truth is, the flags were there, but they were "analog" flags. An FBI study of fifty-two lone-actor attacks found that in nearly every single case, bystanders—family members, coworkers, neighbors—saw something concerning. They noticed the "leakage." That is a technical term for when a lone actor starts dropping hints. They might post a cryptic countdown on social media, or they start buying tactical gear and talking about "the day of reckoning."
Leakage is a fascinating psychological phenomenon. It’s almost as if the individual wants to be noticed on some level. They want the world to know they are becoming powerful. In a 2024 case in Seattle, a young man told his gaming group that "the server was going to go quiet soon" and that they should "watch the news." Nobody reported it because they thought he was just being dramatic or "edgy."
But people don't want to be the person who calls the feds on their brother or their best friend. There is a huge social barrier there. And honestly, if I see my neighbor buying a bunch of camouflage gear, I am probably just going to think he is going through a mid-life crisis and joining a paintball league. I am not thinking he is a sleeper threat. How do we distinguish between "weird hobby" and "imminent threat"?
And that is the gap law enforcement is trying to close. In 2025, the FBI’s Guardian system—which is their main tool for tracking domestic threats—saw a forty percent increase in flagged cases. A lot of that wasn't from high-tech surveillance; it was from community reporting. But here is the problem: the "see something, say something" paradigm is incredibly leaky. For every one person who actually intends to do harm, there are ten thousand people who are just angry, loud-mouthed, and completely harmless.
That brings up a massive concern regarding the "stochastic terrorism" model. If a public figure or a popular streamer uses inflammatory rhetoric, they aren't technically telling anyone to commit a crime. But statistically, if you scream at a million people that "Group X is destroying your life," one of those million people—who might already be on the fourth floor of that staircase—is going to snap. It’s like throwing a match into a dry forest; you don’t know which tree will catch fire, but you know a fire is likely. How do you police that without turning into a total surveillance state?
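A quick aside for the show notes: the "statistically, one of a million listeners" intuition behind the stochastic terrorism model can be sketched in a few lines of Python. Every number here is invented purely for illustration; these are not real estimates of anything.

```python
# Illustrative math behind the "stochastic" framing: a tiny per-listener
# probability of acting adds up across a large audience.
# The probability and audience size below are made up for illustration.

def prob_at_least_one(p_individual: float, audience: int) -> float:
    """Probability that at least one of `audience` independent listeners
    acts, given each acts with probability p_individual."""
    return 1 - (1 - p_individual) ** audience

# A one-in-a-million chance per listener, broadcast to a million listeners:
p = prob_at_least_one(1e-6, 1_000_000)
print(f"{p:.3f}")  # approaches 1 - 1/e, about 0.632
```

The point of the sketch is the asymmetry: the speaker never targets anyone, yet the aggregate probability of an incident is substantial.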
You are touching on the legal lightning rod of the decade. There was that 2024 Supreme Court ruling, Doe versus FBI, which basically said the government cannot monitor social media accounts without a specific warrant unless the posts are entirely public. It put a real damper on the "pre-crime" style of surveillance. Intelligence agencies can't just hover over every Discord server waiting for someone to say something spicy.
Which I think most people would say is a good thing for civil liberties, but it creates this massive blind spot. I was reading about a case in Ohio from late 2024. A nineteen-year-old was radicalized through a server that started as a Minecraft community. It was just kids building blocks, but then a few older guys came in, started sharing memes that were "ironically" extremist, and then slowly moved the conversation to a private, encrypted channel. By the time he was arrested, he had a basement full of chemicals and a detailed plan to hit a local substation.
That is the gamification of extremism. ISIS pioneered this back in the day, but by 2025, it has become incredibly sophisticated. They use actual video game mechanics—leaderboards, "achievements" for completing certain tasks, and even high-production-value recruitment videos that look like Call of Duty trailers. For a young person who feels socially isolated, this gives them a sense of belonging and a "quest." It turns a horrific act into a level they need to beat.
Wait, you mentioned "achievements." Are people actually getting digital badges for radical behavior? How does that even work in practice?
It’s more subtle than a literal Xbox achievement popping up, but it’s the same psychological loop. In some of these private Telegram channels, members are given "ranks" based on the quality of the propaganda they create or the "ops" they perform—which could be as simple as flyering a neighborhood or doxxing an opponent. Each step up the ladder provides a hit of dopamine and a deeper sense of community. By the time they are asked to do something violent, they are so desperate to maintain their status in that digital hierarchy that they don't even question the morality of the act.
It is terrifying because it targets the most vulnerable demographic. These kids aren't looking for a political revolution; they are looking for a reason to get out of bed in the morning. If the world feels like it has no place for you, and someone online tells you that you can be a "hero" in their story, that is a powerful drug.
It really is. And the "mosaic theory" of intelligence is how agencies try to fight back. Instead of looking for one big smoking gun, they look for a hundred tiny pebbles. A purchase of a certain type of fertilizer here, a sudden surge in interest in local infrastructure maps there, a post on a forum about "soft targets." If you put enough of those pebbles together, you get a picture. But as we saw in a 2024 case in Germany, the legal framework often prevents intervention until it is almost too late. They saw a guy buying chemicals, they knew he was visiting extremist sites, but they couldn't move in because he hadn't technically broken a law yet.
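For the show notes, here is a toy sketch of the mosaic-theory scoring Herman describes: no single pebble clears a review threshold on its own, but several together do. The indicator names, weights, and threshold are all invented for illustration; real systems are far more complex.

```python
# Toy "mosaic theory" scoring: weak signals that are individually
# harmless combine into a picture worth reviewing.
# All indicator names and weights below are invented for illustration.

INDICATOR_WEIGHTS = {
    "bulk_fertilizer_purchase": 0.3,
    "infrastructure_map_queries": 0.25,
    "soft_target_forum_posts": 0.35,
    "tactical_gear_purchases": 0.2,
}
REVIEW_THRESHOLD = 0.5  # assumed cutoff, purely illustrative

def mosaic_score(observed: set[str]) -> float:
    """Sum the weights of the indicators actually observed."""
    return sum(w for name, w in INDICATOR_WEIGHTS.items() if name in observed)

# Each pebble alone stays below the threshold...
assert mosaic_score({"bulk_fertilizer_purchase"}) < REVIEW_THRESHOLD
# ...but three together do not.
combined = mosaic_score({
    "bulk_fertilizer_purchase",
    "infrastructure_map_queries",
    "soft_target_forum_posts",
})
print(round(combined, 2))  # 0.9
```

Which also illustrates the legal bind from the German case: each input is lawful behavior, so the combined score alone rarely justifies intervention.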
So we are stuck in this "wait and see" mode, which is a gamble with human lives. What about AI? We are sitting here using Gemini to write this, but are the "good guys" using AI to proactively spot these patterns?
They are, and it is controversial as hell. By early 2026, several agencies started using Large Language Models to scan public forums for "linguistic markers." They aren't looking for keywords like "bomb"—those are too easy to filter. They are looking for a transition in tone. They look for when a person moves from "the world is bad" to "I must act against the world." There is a specific tipping point in the way people use pronouns and verbs when they move from ideological anger into tactical readiness.
"Linguistic markers of readiness." That sounds like something straight out of a Philip K. Dick novel. Can you give me an example? Like, what does that transition actually sound like in a text post?
It’s often a shift from "we" to "I," or from passive to active voice. Instead of saying "Someone should do something about this," the actor starts saying "I have decided what needs to happen." There’s also a decrease in cognitive complexity—their sentences get shorter, more direct, and less open to debate. They stop using words like "maybe" or "perhaps." Their world becomes binary. AI is much better at catching those subtle shifts in syntax than a human moderator who is just skimming for slurs.
But does it actually work? Or are we just creating a system that flags every depressed teenager who writes a dark poem in his journal?
That is the "false positive" problem. A 2024 study found that seventy-three percent of lone wolf attackers showed observable behavioral changes in the three months prior to their attack. The data is there. The problem is that millions of non-violent people also show those same behavioral changes. People go through breakups, they lose jobs, they get into weird hobbies. If you flag everyone who is acting "odd," the system collapses under its own weight. It’s the "needle in a haystack" problem, except the haystack is made of needles that look slightly different.
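The base-rate problem Herman just described is worth making concrete for the show notes. A back-of-the-envelope sketch in Python: the seventy-three percent sensitivity figure is the one quoted above, but the population size, number of true actors, and false-positive rate are invented for illustration.

```python
# Back-of-the-envelope base-rate arithmetic for the false-positive problem.
# Sensitivity (0.73) is the figure quoted in the episode; everything else
# is an invented assumption for illustration.

population = 10_000_000       # people being screened (assumed)
true_actors = 100             # actual would-be attackers (assumed)
sensitivity = 0.73            # attackers showing observable changes (quoted)
false_positive_rate = 0.02    # harmless people flagged anyway (assumed)

flagged_actors = true_actors * sensitivity
flagged_innocent = (population - true_actors) * false_positive_rate

# Precision: of everyone flagged, what fraction is an actual threat?
precision = flagged_actors / (flagged_actors + flagged_innocent)
print(f"flagged innocents: {flagged_innocent:,.0f}")
print(f"precision: {precision:.5f}")  # well under 0.1 percent
```

Under these assumptions the system flags roughly two hundred thousand harmless people to catch seventy-three real threats, which is the "haystack made of needles" in numbers.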
It seems like the real failure point isn't a lack of data, but a lack of connection. Think back to the Boston Marathon bombing in 2013—there were warnings from foreign governments, but they didn't get connected to the local police in time. Fast forward to 2025, and we have the Richmond plot that was actually prevented because digital surveillance caught a guy trying to slip past platform security controls to talk to a known extremist "mentor" overseas.
The Richmond case is a great counter-example. That was a success of the "National Counterintelligence Task Forces" or NCITF. They were able to coordinate between federal intelligence and local Richmond police in real-time. It turns out the guy wasn't as "lone" as he thought. He was being "groomed" by a handler in a different country who was providing him with technical manuals on how to bypass security. This is the "transnational lone actor" model. You are physically alone, but digitally tethered to a global movement.
So the "lone wolf" is often just a remote-controlled asset who doesn't realize he is an asset. He thinks he is the protagonist of his own movie, but he is just a pawn in someone else’s game. That is actually more depressing than the "crazy loner" narrative. It suggests a level of exploitation that is hard to stomach.
It is, because it is scalable. You can't easily train and insert a hundred sleeper cells into a country without getting caught. But you can put a hundred radicalizing videos on the internet and wait for ten "lone wolves" to sprout up on their own. It is an asymmetrical war. The cost of defense is massive—billions in surveillance, police training, and community outreach—while the cost of the "attack" is just the price of a laptop and an internet connection.
And let’s be honest, the "lone wolf" label also helps the groups behind the radicalization. If an attack happens and there is no direct link, the groups can stay "clean" legally. They can say, "Oh, we didn't tell him to do that, he was just a fan of our content." It is plausible deniability on a global scale.
That is mostly right, and it goes even further. It allows these groups to maintain a presence on mainstream platforms far longer than they otherwise would. If they were caught issuing direct orders, they would be banned in a heartbeat. But if they just host "discussions" about grievances, they can hide behind free speech protections. It’s the "Will no one rid me of this turbulent priest?" defense. They set the stage, and then act surprised when someone walks onto it with a weapon.
We have talked a lot about the "how" and the "why," but what is the practical takeaway for the average person listening to this? I mean, we aren't all intelligence officers. But we are the "bystanders" Daniel mentioned in those FBI stats. If I am sitting at Thanksgiving and my cousin starts talking about some weird, dark stuff he found on a Discord server, what is the line between "he is just being edgy" and "I need to do something"?
That is the hardest question in modern society. The experts say to look for "fixation." Everyone gets mad about politics or the state of the world. But when that anger becomes an obsession—when it is all they talk about, when they start cutting off friends who disagree, and especially if they start talking about the "necessity" of a specific event—that is a red flag. It isn't about the ideology; it is about the behavior. Sudden withdrawal from their normal life coupled with an intense focus on a "target" or a "day" is the most common pre-attack indicator.
What about the physical side of it? We focus so much on the digital, but these things eventually have to manifest in the real world. Is there a specific "preparation phase" that neighbors should be aware of?
There’s a phase called "tactical reconnaissance." Even a lone wolf usually visits their target beforehand. They might take photos of security cameras or time the arrival of police patrols. In the 2024 attempted attack on a mall in Florida, the perpetrator visited the food court six times in one week just to watch the security guards' shift changes. If you see someone acting out of place—not just "weird," but systematically observing security—that’s a pebble you should add to the pile.
And we have to mention mental health here, too. There is this common misconception that lone wolves are all "crazy" or clinically insane. But the data shows most of them aren't. They might be depressed or socially isolated, but they are often making very rational choices within a very irrational ideological framework. If you treat it purely as a mental health issue, you miss the political and social drivers that are actually pushing them up that staircase.
Right, and if you only treat it as a police issue, you miss the chance for early intervention. This is why some cities are experimenting with "Threat Assessment Teams" that aren't just cops. They include teachers, mental health professionals, and community leaders. The goal is to "off-ramp" someone before they get to the top floor of the staircase. If you can address the grievance or the isolation on the first floor, you don't have to deal with a bomb on the fifth floor. It’s about interrupting the climb.
It feels like a race against the algorithm. Can a community leader or a family member reach someone faster than a TikTok feed that is designed to keep them angry? In 2026, the algorithm has a huge head start. It is with them in their pocket twenty-four hours a day. It’s like trying to talk someone down from a ledge while they have a megaphone in their ear screaming "Jump!"
It is a profound challenge. And as AI gets better at generating personalized content, we might see "personalized radicalization." Imagine an AI that knows exactly which buttons to push to make you specifically feel like the world is ending. It could generate a custom manifesto, custom videos, and custom "evidence" to fit your existing biases. We’ve already seen deepfakes used to convince people that their local government is doing something it isn’t. When that becomes automated at scale, the "staircase" becomes an escalator.
That is a terrifying thought to end on, Herman. A world where everyone has their own private, AI-generated radicalization chamber. It really changes the stakes of "digital literacy." It’s no longer just about spotting a fake news article; it’s about recognizing when your entire reality is being curated to push you toward a violent conclusion.
It really highlights the importance of staying connected to the real world—to people, to family, to things that aren't behind a screen. The best defense against a "lone wolf" scenario might just be making sure nobody feels truly alone in the first place. Radicalization thrives in the dark corners of isolation. When you bring someone back into a physical community, the "moral oxygen" of the online extremist group starts to thin out.
That is the most human solution to a very high-tech problem. The more we retreat into these digital silos, the more vulnerable we become to the kind of manipulation that leads to these tragedies. It’s about rebuilding the social fabric, one neighbor at a time.
Precisely. It sounds simple, but in an age of hyper-polarization, it’s the hardest work there is.
Well, this has been a deep dive. Daniel, thanks for the prompt—it definitely gave us a lot to chew on. I am going to be looking at my Discord notifications a little differently tonight. I might even go outside and actually talk to the guy across the street.
And I am going to be keeping an eye on the latest NCITF reports. This field is moving so fast, by the time this episode drops, there will probably be a new study out on the 2026 trends. Thanks as always to our producer, Hilbert Flumingtop, for keeping us on track and making sure we don't wander too far into the weeds.
And a big thanks to Modal for providing the GPU credits that power this show. We couldn't do these deep dives into the data—or use Gemini to help us organize these thoughts—without them. This has been My Weird Prompts. If you are enjoying the show, a quick review on your podcast app really helps us reach new listeners who might be looking for a deep dive into the weird world of AI and security.
You can also find us at myweirdprompts dot com for our full archive and the RSS feed. We love hearing from you, so keep those prompts coming. We will be back next time with another deep dive from Daniel’s list.
Stay curious, stay skeptical, and maybe go talk to your neighbor today. You never know who might need that connection.
See ya.
Bye.