Daniel sent us this one, and I want to read the core of it because it captures something I think a lot of people living here right now are feeling viscerally. He's asking: could this conflict end up being remembered as a turning point in our relationship with information itself? Not just the war, but the experience of trying to understand it. Because from where he's sitting, everyone in Israel is essentially living in the dark. You've got a hard state-manufactured blackout in Iran, but then you've also got this softer version here, where Trump says the war is almost won, then an hour later says something different, then contradictory headlines, then silence from official channels that's somehow louder than anything they're actually saying. And into that vacuum rushes OSINT, open-source intelligence, all this crowdsourced signal-hunting. But is it filling the gap, or just generating a new layer of noise? He's asking us to compare this to prior moments: the fog of war, World War propaganda, the Gulf War's CNN effect. Is this genuinely new? Or is it just the same old confusion with better cameras?
That framing hits hard, because I've been living inside this same fog. And before we dive in, quick note: today's script is courtesy of Claude Sonnet four point six, our friendly AI down the road, doing its thing.
Doing its thing. So let's name the actual texture of what's been happening, because I think the confusion is more structurally interesting than it looks on the surface. It's not just that we don't know facts. It's that the authoritative channels that are supposed to resolve factual disputes are themselves the source of the contradictions.
That's exactly what makes this feel different from ordinary wartime uncertainty. The Trump administration has, according to a Middle East Monitor piece from April fourteenth, imposed what they're describing as a complete information blackout over the conflict theater. That phrase, complete information blackout, is being used by analysts to describe something that has no real peacetime equivalent. It's a fog of war that's been deliberately architected at the level of the most powerful communicator on the planet: the American presidency.
The thing about Trump specifically is that he doesn't stay silent. He just says contradictory things in rapid succession. Which is almost more disorienting than silence. Silence you can sit with. You know there's a gap, you can work around it. But when the signal is loud and confident and wrong in three different directions inside the same news cycle, your brain has nowhere to put it.
The cognitive load of that is unusual. I've been following conflict communication for a long time, and I'm struggling to think of a precedent where the primary source of confusion wasn't absence of information, but rather the velocity of contradictory information from credentialed sources. That's a distinct phenomenon.
The Israel Democracy Institute ran a poll in March that showed seventy-two percent of Israelis say they distrust official war updates. That number was forty-five percent in February. That's a twenty-seven point swing in about six weeks. That's not gradual erosion of trust. That's a cliff.
Which tracks with what Daniel is describing from lived experience. When you lose trust in official channels that fast, you don't just become skeptical. You become hungry. You go looking for something to fill the space. And what's available to fill the space right now is OSINT, open-source intelligence, which is simultaneously the most exciting development in wartime epistemology in a generation and, I think, one of the most dangerous.
We're going to get into the mechanics of that. But I want to sit with Daniel's question for one more moment, because I think it's the sharpest version of what he's asking: is this moment new? Is the relationship between conflict and information actually being broken in a new way? Or are we just more aware of the breaking because we're inside it?
My honest answer is: both, and the both matters. The fog of war is ancient. Thucydides was writing about information failures in the Peloponnesian War. But the specific conditions right now, the speed, the decentralization, the credentialed contradictions, the OSINT ecosystem, the algorithmic amplification, those are new variables. And new variables in an old system can produce new failure modes.
Or as I prefer to say, new ways for everything to fall apart.
Let's start by dissecting the two extremes, because I think the contrast is instructive. You've got Iran on one end, which is a hard blackout, state-enforced, technically implemented. And then you've got Israel on the other end, which is not a blackout at all. It's the opposite. It's a flood. But both populations are, in some meaningful sense, in the dark.
The Iran side of this is worth being precise about. NetBlocks data, which tracks internet connectivity in real time, showed connectivity in Iran dropping to roughly five percent of normal levels starting April twelfth. That's not throttling. That's a near-total severing. The last time we saw anything close to that was Iran's own November 2019 shutdown during the fuel protests, when they cut connectivity for about a week to suppress organizing. But that was domestic. This has a different character because it's happening in the context of active military operations and it's being maintained across a longer window.
The Iranian population is essentially unable to verify anything happening in their own country. They can't confirm casualties, they can't confirm the status of infrastructure, they can't communicate across regions. The state controls the entire epistemic environment.
Historically that model has been used with varying effectiveness. Egypt in 2011 tried it. They cut internet access on January 27th and restored it five days later. It didn't stop the Tahrir Square protests. Kashmir in 2019, India imposed a communications blackout that lasted, depending on how you measure it, somewhere between several months and over a year. Those are the precedents. The effectiveness varies enormously based on how organized the population already is and whether there are physical gathering points that don't depend on digital infrastructure.
Here's what I find interesting about the Iran case specifically. The blackout doesn't just suppress organizing within Iran. It also suppresses the outflow of information. Which means the OSINT community outside Iran is working with almost nothing from ground level. No citizen journalism, no leaked footage, no local reporting. The only inputs are satellite imagery, signals intelligence from state actors who are sharing selectively, and whatever manages to leak through the five percent of connectivity that remains.
Which is a thin data environment for the kind of real-time analysis people are trying to do. There are dashboards, SOCRadar runs a cyber conflict tracker, the Critical Threats project at AEI has been publishing evening special reports, Iran Monitor has a real-time feed. These are serious tools built by serious analysts. But they're all working with the same degraded input set. The signal that OSINT is claiming to find is being extracted from a very noisy and very sparse data stream.
Then you flip to the Israeli side and it's the mirror image. Not too little information but too much, without the institutional scaffolding to evaluate it.
The Israeli government press conference on April eighteenth is a good case study here. Officials contradicted each other in real time. Not on peripheral details. On substantive questions about the status of operations and the trajectory of the ceasefire. That kind of visible incoherence from official sources does something specific to public trust. It's not just that people don't believe what they're hearing. It's that they lose the category of "official statement" as a meaningful input. When the category itself becomes unreliable, you start treating government communications the same way you treat a random Telegram post.
Which is an alarming epistemological collapse. Because official statements, even when you're skeptical of them, serve an anchoring function. They give you something to push against. When they become noise, you've lost your reference point.
Into that vacuum, OSINT rushes. And I want to be careful here, because I think there's a misconception that OSINT is inherently more reliable than traditional journalism, and that misconception is doing real damage right now. OSINT, at its best, is disciplined, methodical, source-critical work done by people who understand the limits of what they're seeing. At its worst, it's a viral Telegram post with a confident caption and no chain of custody on the underlying image.
There was a specific case in mid-April, around the fifteenth, where a claim about Iranian troop movements went viral across OSINT-focused channels. Multiple accounts amplified it with high confidence. It was later debunked. But by the time the debunking circulated, the original claim had already shaped a news cycle and, I'd argue, shaped public perception of what was happening operationally.
That's the specific failure mode of decentralized intelligence in a fast-moving conflict. The speed asymmetry. False information travels at the speed of a share. Correction travels at the speed of verification. Those are not the same speed.
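To make that speed asymmetry concrete, here's a toy back-of-envelope simulation. Every number in it is an invented illustration, not measured data: a false claim that starts spreading immediately at share speed, versus a correction that launches only after a hypothetical six-hour verification delay and spreads more slowly.

```python
# Toy model of the share-speed vs. verification-speed asymmetry.
# All rates, delays, and audience sizes are illustrative assumptions.

def reach_after(rate_per_hour: float, start_hour: int, hours: int,
                cap: int = 1_000_000) -> float:
    """People reached by a claim that grows geometrically each hour
    after start_hour, clipped at a saturation cap."""
    seed = 100  # initial audience of the first post (assumed)
    reach = 0.0
    for h in range(hours):
        if h < start_hour:
            continue  # claim hasn't launched yet
        reach = min(cap, seed * (rate_per_hour ** (h - start_hour + 1)))
    return reach

# False claim: launches at hour 0, doubling its reach every hour.
false_reach = reach_after(rate_per_hour=2.0, start_hour=0, hours=12)

# Correction: verification takes ~6 hours, then spreads at 1.5x per hour.
correction_reach = reach_after(rate_per_hour=1.5, start_hour=6, hours=12)

print(f"False claim reach after 12h: {false_reach:,.0f}")
print(f"Correction reach after 12h:  {correction_reach:,.0f}")
```

Even with these mild assumptions, the false claim ends the half-day window hundreds of times further along than the correction, which is the structural point: a head start compounds geometrically, so a late correction never catches up inside a single news cycle.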
This isn't new, exactly. The Gulf War gave us the CNN effect, which was this realization that real-time broadcast journalism was collapsing the traditional delay between events and public awareness. Peter Arnett reporting from Baghdad in 1991 was a new kind of thing. Military planners had to adapt to the fact that their operations were being watched live. But the CNN effect was still a relatively narrow channel. There was a small number of credentialed journalists, a small number of broadcast networks, and a production process that created at least some lag between raw footage and broadcast.
What we have now is structurally different. The number of channels is effectively unlimited. There's no production process. And the incentive structure on the major platforms rewards speed and emotional salience, not accuracy. Twitter, or X, whatever we're calling it, Telegram, they're not designed to surface the most accurate account of events. They're designed to surface the most engaging one. And in a conflict, the most engaging account is almost never the most accurate one.
Though I'd push back slightly on the idea that the pre-digital fog of war was somehow cleaner. World War Two propaganda on both sides was extraordinarily sophisticated and extraordinarily dishonest. The difference is that the propaganda was produced by a small number of centralized actors with clear institutional interests, so you could at least model the distortion. You knew the British Ministry of Information was going to frame things a certain way. You could adjust. The distortion today is coming from thousands of actors with different interests and different levels of competence and different levels of good faith. That's much harder to model.
That's a really important distinction. Centralized propaganda is, paradoxically, easier to navigate than decentralized noise, because the centralization gives you something to critique. You can identify the source, identify the interest, identify the pattern. With decentralized noise, the epistemology becomes almost impossible. You're not fighting one distortion. You're fighting a field of overlapping distortions with no common structure.
Which is what makes Daniel's question so sharp. He's asking whether this is a turning point in our relationship with information. And I think the answer might be that it's a stress test. The conditions that produce this kind of chaos have been building for fifteen years. The smartphone, social media, the collapse of traditional media business models, the rise of OSINT culture. This conflict is the first time all of those variables have been active simultaneously in a high-stakes, fast-moving, geopolitically central conflict. And it's revealing exactly how unprepared we are.
The Israel Democracy Institute numbers make that visceral. Forty-five percent distrust in February to seventy-two percent distrust in March. That's not a population that's gradually losing faith. That's a population that's been pushed past a threshold. And the question of what happens after you cross that threshold, where do people go for their epistemic anchor, that's the question that I think is new and frightening.
Because the alternatives to official channels are not obviously better. They're faster, they're more numerous, they're more responsive to what people want to hear. But they're not more accurate. And in some cases they're dramatically less accurate.
Speed over accuracy. Confidence over calibration. Virality over verification.
We've got a population in Iran that can't access any information. We've got a population in Israel that's drowning in information and can't evaluate any of it. And both populations are, in some functional sense, equally in the dark about what's actually happening. That's a strange and uncomfortable symmetry.
And I think it's the thing that makes this conflict potentially historically distinctive. Not the military operations, not the geopolitics, but the epistemic situation. The question of whether this is a prototype for what all future conflicts look like, that's what I want to dig into.
Let's do that.
The thing I keep coming back to is the word "simultaneously." Because historically, you get one or the other. You get a society that's information-starved, or you get one that's information-saturated. What we're looking at right now is both, running in parallel, in the same conflict, in adjacent countries. That's unusual.
The effects aren't symmetrical even if the outcome is. Iran's blackout is imposed from above. It's a deliberate act of state violence against the epistemic environment. Israel's chaos is self-generated. Nobody decided to flood the zone with contradictory Telegram posts and shifting presidential statements. It emerged from the structure of the information ecosystem itself.
Which raises a question that I don't think gets asked enough. Which is harder to recover from? A society that was deliberately kept in the dark, or a society that drowned itself in noise and lost the ability to distinguish signal? Because the Iranian government, at some point, turns the internet back on. The Iranian people then have to reconstruct their understanding of what happened. That's painful, but it's a defined problem. Israel's situation is different. There's no switch to flip. The noise doesn't stop when the ceasefire holds or breaks.
The trust deficit compounds. That's the thing. Once seventy-two percent of a population has decided official war updates aren't worth taking seriously, you don't just restore that trust by being more accurate next week. The category is damaged.
That's what makes this a story about information as infrastructure. We talk about physical infrastructure, power grids, communications cables, as things that can be attacked and degraded. What this conflict is demonstrating is that epistemic infrastructure, the shared frameworks that allow a society to form collective judgments about reality, that's also something that can be degraded. And once degraded, it's much harder to rebuild than a power grid.
The stakes here aren't just "who wins the news cycle." It's whether the societies involved emerge from this conflict with any functional shared sense of what happened. And Iran's blackout is a case in point: it shows how regimes try to control that narrative.
The mechanics of how Iran's blackout is enforced tell you something about what the regime is afraid of. This isn't a crude kill switch. NetBlocks has been tracking it since April twelfth, and connectivity dropped to roughly five percent of normal levels within about forty-eight hours. That's not a technical failure. That's a precision instrument. Border gateway protocol routes get withdrawn, specific autonomous system numbers get null-routed, international exchange points get throttled. It requires active, sustained technical intervention at the infrastructure level.
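For a sense of how a figure like "five percent of normal levels" gets produced on the measurement side, here's a minimal sketch of a connectivity index: compare how many vantage points respond now against a pre-blackout baseline. The probe counts below are invented, and real trackers like NetBlocks use far richer methodology than this.

```python
# Minimal sketch of a connectivity index. Probe counts are hypothetical
# illustrations; actual measurement uses many vantage points, routing
# data, and per-network weighting.

def connectivity_index(reachable_now: int, baseline_reachable: int) -> float:
    """Connectivity expressed as a percentage of normal levels."""
    if baseline_reachable <= 0:
        raise ValueError("baseline must be a positive probe count")
    return 100.0 * reachable_now / baseline_reachable

# Hypothetical: 10,000 endpoints normally reachable, 500 still responding.
index = connectivity_index(reachable_now=500, baseline_reachable=10_000)
print(f"connectivity: {index:.1f}% of normal")
```

The point of the sketch is what the number hides: a single percentage collapses which networks are down, which routes were withdrawn, and who exactly is in the surviving fraction, which is why the demographics of that remaining five percent matter so much downstream.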
Which means it's expensive to maintain. Not just technically but politically. Every day the blackout holds is a day the regime is visibly demonstrating that it doesn't trust its own population with information about what's happening to them.
There are historical precedents that are instructive here. Egypt in 2011 during the Tahrir Square period. India imposed something similar in Kashmir in 2019, which ran for months. Iran itself did a version of this during the fuel protests in 2019. But what's notable is that each successive instance has been more technically sophisticated and more prolonged. The regimes have been learning. The 2019 Iran shutdown lasted about a week. This one has been running longer and has been tighter.
The lesson they took from 2019 was apparently "we didn't go far enough."
Which is grim but probably accurate. And the effectiveness is real, even if it's not total. The five percent that remains is mostly VPN traffic and satellite connections, which are slow, expensive, and accessible primarily to people who already had the technical sophistication to set them up before the blackout. So the information that does escape is filtered through a particular demographic. It's not representative. It's the most technically capable and probably the most politically motivated segment of the population.
Which means the OSINT picture of what's happening inside Iran is being painted by a self-selected and skewed sample. The people who can still transmit are not the median Iranian.
And that has downstream effects on how Western analysts are reading the situation. The signal that's coming out is, by construction, not the whole picture. You're getting the loudest, most connected voices, not the broadest cross-section.
Now flip to the Israeli side. The problem is almost the inverse. Trump makes a statement on a Tuesday morning, the war is almost won. By the afternoon he's walked it back or contradicted it or someone in his administration has said something that doesn't square with what he said. The Middle East Monitor piece from April fourteenth described the overall information environment around the conflict as a digital-age fog of war, and I thought that framing was apt, because fog of war traditionally implies that the confusion is incidental, a byproduct of the chaos of events. What's unusual here is that some of the confusion appears to be deliberate policy.
The April fifteenth troop movement claim is the clearest example of where this gets dangerous in practice. A claim circulated across multiple OSINT-focused Telegram channels and X accounts with high confidence. Specific coordinates, specific unit designations, the kind of granular detail that signals credibility. It was later debunked. But the debunking came hours after the original claim had already been cited in news coverage, already shaped how people were modeling the operational situation. The correction existed, but it existed downstream of a news cycle that had already closed.
This is where the Gulf War comparison becomes interesting rather than just nostalgic. The CNN effect in 1991 was new. Peter Arnett in Baghdad, live coverage of Scud intercepts, the sense that the public was watching the war in something close to real time. That was a real rupture with how wars had been covered. But the CNN effect had a production bottleneck. There were maybe a few dozen journalists with access and a handful of networks with broadcast infrastructure. The lag between raw event and public broadcast was measured in minutes to hours. The editorial layer was thin but it existed.
What we have now has eliminated that bottleneck entirely. The number of people who can publish to a global audience from a conflict zone, or from their apartment while watching a conflict zone, is effectively unlimited. And the incentive structure of the platforms they're publishing on does not reward accuracy. It rewards speed and emotional salience. A post that confidently claims to know what's happening outperforms a post that accurately conveys uncertainty.
The rational strategy for someone who wants engagement is to be confident and fast rather than calibrated and slow. Which means the information ecosystem is actively selecting against the epistemic virtues you actually want in a war correspondent.
OSINT sits right in the middle of this tension. At its disciplined best, it's valuable. The SOCRadar cyber conflict tracker, the Critical Threats evening reports, Iran Monitor, these are serious analytical tools. People with real expertise doing real source-critical work. But that work gets aggregated into the same channels as the confident and wrong viral post, and there's no visible quality gradient for the average consumer. They look the same. A thread from a rigorous analyst and a thread from someone who's just pattern-matching on vibes have essentially the same presentation layer.
Which is the liability. The lifeline and the liability aren't two different things. They're the same thing. The openness that makes OSINT valuable is exactly what makes it dangerous. And that danger creates ripple effects far beyond the immediate misinformation.
Right, and that dynamic is what produces the knock-on effect that doesn't get talked about enough. People don't just lose trust in bad sources. They lose trust in the category of sourcing itself. Once you've been burned by a confident OSINT thread that turned out to be fabricated, the next rigorous analyst with a genuine scoop gets tarred with the same brush. The epistemically careful get punished for the sins of the epistemically reckless.
You end up with a population that's not just skeptical of governments. They're skeptical of everyone. Which sounds like healthy critical thinking until you realize that total skepticism is functionally the same as total credulity. If nothing is reliable, you're free to believe whatever you already wanted to believe.
The April eighteenth press conference was a vivid demonstration of this. Israeli officials contradicting each other in real time, on camera. Not in the usual way where you get different framings from different ministries over the course of a day. In the room, during the same event. And the effect wasn't that people got better information by triangulating the contradictions. The effect was that the press conference itself became the story, and the underlying operational reality it was supposed to communicate got completely buried.
Which is almost a perfect inversion of how official communication is supposed to function. The point of a press conference is to reduce uncertainty. This one manufactured it.
I don't think that was entirely accidental. There's a strategic logic to controlled ambiguity in wartime. You don't want your adversary to know what you know or what you're planning. But the collateral damage is that your own population stops treating official channels as worth engaging with. You can't surgically target your adversary's epistemic environment without also degrading your own.
World War Two is the obvious counterpoint people reach for here, and it's worth actually examining rather than just invoking. The propaganda apparatus in both Britain and the United States was centralized, deliberate, and broadly effective. People knew they were getting a managed version of events. There was a kind of acknowledged social contract around it. The government is going to tell you what it wants you to know, and you're going to support the war effort, and the full picture will emerge later. That contract held, more or less.
What's different now isn't that governments lie or manage information. They've always done that. What's different is that the alternative is no longer silence or samizdat. The alternative is an infinite, undifferentiated torrent of competing claims with no institutional filter and no shared framework for adjudication. In 1944 you couldn't fact-check a Movietone newsreel in real time. Now you can, except the fact-check is itself of uncertain provenance, and so is the fact-check of the fact-check.
Infinite regress as a feature of the modern information environment. That's not a comforting observation.
It isn't. And the DIY intelligence communities that have sprung up around this conflict, the Telegram channels, the Discord servers where people are pooling satellite imagery and flight radar data, they're a rational response to that regress. If no single source is trustworthy, you try to aggregate across many sources and look for convergence. The methodology is actually sound in principle.
The execution is where it gets complicated.
The execution is where it falls apart, yes. Because those communities have their own dynamics. They have priors. They have political investments in particular outcomes. The aggregation process isn't neutral. And the social reward inside those communities goes to the person who finds the confirming data point, not the person who introduces the disconfirming one.
You've rebuilt, in miniature, exactly the incentive structure you were trying to escape.
Which is why I think this conflict is a prototype rather than just an extreme version of something familiar. Every future conflict with this combination of factors, one party with a hard blackout, one party with a fragmented media ecosystem, a globally connected observer class with OSINT tools and no editorial accountability, is going to look like this. The question isn't whether information chaos becomes the norm. It's whether societies develop the institutional immune responses to function inside it.
Right now, those immune responses don't exist at any meaningful scale. We have individual analysts doing rigorous work. We have a handful of organizations with serious verification standards. But the ratio of signal to noise in the broader ecosystem isn't moving in the right direction.
What do we actually do with that? Because I don't want to leave people sitting with "the ratio is getting worse and nothing exists to fix it" as the takeaway. That's accurate but it's not useful.
And there are concrete things that help, even if they don't solve the structural problem. The first is being discriminating about OSINT rather than just consuming it enthusiastically. The analysts worth following have a track record you can check. They correct themselves publicly when they're wrong. They hedge when they're uncertain. They cite primary sources rather than citing other aggregators. If someone's thread never contains the word "unconfirmed," that's a signal, not a virtue.
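The vetting criteria just listed can be made concrete as a toy checklist. The signals come straight from the discussion above (public corrections, hedged uncertainty, primary sourcing, a checkable track record); the scoring itself is an invented illustration, not any established standard.

```python
# Toy checklist for weighing an OSINT source, using the criteria discussed
# above. The equal weighting is an illustrative assumption, not a standard.

from dataclasses import dataclass

@dataclass
class SourceSignals:
    public_corrections: bool      # corrects itself publicly when wrong
    hedges_uncertainty: bool      # uses "unconfirmed" and confidence language
    cites_primary: bool           # cites primary sources, not aggregators
    checkable_track_record: bool  # past claims can be audited

def credibility_score(s: SourceSignals) -> int:
    """Count of epistemic-hygiene signals present (0 to 4)."""
    return sum([s.public_corrections, s.hedges_uncertainty,
                s.cites_primary, s.checkable_track_record])

careful_analyst = SourceSignals(True, True, True, True)
viral_account = SourceSignals(False, False, False, False)
print(credibility_score(careful_analyst), credibility_score(viral_account))
```

The checklist is crude on purpose: the argument in this episode is that the presentation layer of a rigorous thread and a reckless one looks identical, so any quality gradient has to come from signals like these rather than from how a post looks.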
The Iran Monitor dashboard and the Critical Threats evening reports are examples of that discipline done well. They're not infallible, but they show their work. And showing your work is actually the minimum bar. If you can't see the methodology, you can't assess the claim.
The second thing, and this is less about tools and more about posture, is what I'd call information humility. Which sounds obvious but is hard to practice under the conditions we're describing. When you're living inside an information vacuum and something arrives that looks like signal, the temptation is to grab it. The psychological relief of having an explanation is real. Resisting that, sitting with the uncertainty rather than resolving it prematurely with a confident but unverified claim, that's a discipline.
The amplification piece matters more than people think. Sharing an unverified claim isn't neutral. It adds velocity. The debunking of the April fifteenth troop movement story came hours after the original had already shaped coverage. Every retweet in that window made the correction harder to land.
The practical version is: before you share something, ask whether it's confirmed rather than just whether it's interesting. And support the organizations that are doing verification under resource pressure. Independent journalism is not having a great moment financially, and the verification infrastructure is part of what's degrading.
Pressure platforms for transparency too, even if it feels futile. The algorithmic incentives are not a law of nature. They're design choices, and design choices can be changed if enough people make it uncomfortable not to change them.
None of this fixes the ratio. But it changes your individual contribution to it, which is the only lever most people actually have. Even if it's smaller than we'd like.
Smaller, and maybe shrinking. Which makes me wonder: future historians looking at this period, not just this conflict but this whole stretch of years, are they going to see this as the moment when the ability to agree on basic facts effectively ended?
I think about this a lot. And the honest answer is I don't know, and I'm not sure anyone does. What I can say is that the conditions for a genuine epistemic rupture are present in a way they haven't been before. The combination of AI-generated content at scale, fragmented distribution, collapsed institutional authority, and a conflict that's actively stressing all three simultaneously... that's not a drill. That's the real thing.
The AI piece is the part that keeps me up at night. Or it would, if I weren't asleep most of the time. We've been talking about human-generated misinformation running faster than human-generated correction. Now imagine that asymmetry with the generation side running at machine speed.
The fabricated troop movement claim on April fifteenth traveled for hours before the correction landed. That was a human making a bad call and other humans amplifying it. A synthetic video of an official making a statement that never happened, generated in minutes, distributed before anyone has confirmed it's fake... the correction window collapses to near zero. And the psychological impact of a synthetic video is not the same as a misread satellite image. It's visceral. It's harder to dislodge.
We may be at the last moment where the information chaos is still, at least in principle, navigable. Where the signal exists and the problem is finding it. What comes after might be a period where the question isn't which source to trust but whether any unverified piece of content can be trusted at all.
Which is a different kind of problem. And not one I have a tidy answer for.
That might be the most honest thing we've said all episode.
Thanks to Hilbert Flumingtop for producing, and to Modal for keeping the infrastructure running so we can have conversations like this one. This has been My Weird Prompts. If you want to find the back catalog, myweirdprompts.com has all two thousand two hundred and sixty-two episodes. We'll see you next time.