Daniel sent us this one, and it's got a few layers. Iran is claiming through state media that it has reverse-engineered American Jericho ICBMs. Iranian media is also reporting that the IRGC has downed over a hundred and seventy drones during what they're calling the Ramadan War, including multiple Hermes 900s, at least one captured intact. Daniel wants to know what technologies actually exist to prevent adversaries from exploiting downed hardware, how those protections hold up when the device has no network connection, and whether Iran's ICBM reverse-engineering claim is even plausible. Anti-tamper chips, zeroization, thermite, dead-man switches, geofence triggers, the whole stack.
There's a lot to untangle here, because the claim itself has a problem baked into it right from the start.
Right, and that problem is fairly fundamental. Jericho is Israeli. It's not American. The Jericho three is an Israeli-developed ballistic missile, and there's no credible evidence connecting Iran to any reverse-engineering program targeting it. The CSIS Missile Defense Project is pretty clear on the lineage of that system. So when Iranian state media says "American Jericho ICBMs," that's already two errors in four words.
Which tells you something. Either whoever wrote the press release didn't know what they were talking about, or they knew exactly what they were doing and assumed no one would check. Both of those are interesting in different ways.
By the way, today's script is being written by Claude Sonnet four point six, which feels appropriate for an episode about systems that know when to erase themselves.
I appreciate that framing. Okay, so the propaganda question and the technical question are actually separable here, and I think we should treat them that way. Because even if the specific claim about Jericho is nonsense, the underlying question, what happens when sophisticated military hardware ends up in adversary hands, is genuinely serious. Iran captured an RQ-170 Sentinel in 2011. They've been claiming reverse-engineering milestones off that platform for over a decade now. The IRGC's assertion that they brought down a Hermes 900 intact is more recent and, frankly, more credible in at least the narrow sense that drone recovery is more plausible than ICBM recovery.
The gap between "we shot it down" and "we reverse-engineered it" is enormous, though. And that gap is precisely where anti-tamper engineering lives. So let's actually get into the stack, because I think most people's mental model of this comes from action movies, and the reality is both more mundane and more interesting.
Much more interesting. And the first thing to understand is that the goal of anti-tamper engineering is not to make reverse-engineering impossible. That's a misconception worth killing early. Against a patient nation-state adversary with months of time, a full laboratory, and unlimited budget, physical recovery almost always yields something. The goal is to raise the cost and delay the timeline, ideally past the point of operational relevance.
Which is a very different design objective than "this will never be cracked." It's more like, by the time you figure out what this chip does, the system it was defending has been replaced.
And that framing shapes every design decision in the field. So let's start at the cryptographic layer, because that's where the most mature standards exist. FIPS 140-3, the Federal Information Processing Standard for cryptographic modules, mandates specific behaviors for how sensitive hardware handles keys, handles tamper events, handles power loss. A compliant module has to be able to detect physical intrusion and respond to it, typically by zeroing out key material. That process is called zeroization.
Zeroization being the cryptographic equivalent of eating your homework before the teacher can grade it.
The key insight is that the encrypted data on the device is often not the sensitive part. The keys are the sensitive part. If you can destroy the keys fast enough, the rest of the hardware becomes a very expensive paperweight. And FIPS 140-3 Level three and Level four modules are designed to do that zeroization in response to physical penetration attempts, voltage anomalies, temperature excursions, even unusual clock frequencies. The module is essentially watching itself for signs of attack.
What's the actual mechanism? Like, is this a dedicated circuit, a microcontroller, what?
It varies by implementation, but a common approach is a tamper-responsive hardware security module, an HSM, that stores key material in volatile memory, specifically SRAM, backed by a small internal battery. The moment the tamper sensors trip, the battery circuit cuts, the SRAM loses power, and the keys are gone in microseconds. Some implementations also have active tamper meshes, essentially a fine grid of conductive traces embedded in the device packaging. If you drill into it, grind it, or try to delayer it chemically, you break a trace, the circuit detects the change in resistance, and zeroization fires.
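The tamper-mesh-plus-volatile-SRAM arrangement described above can be sketched in a few lines. This is a toy Python model, not firmware: the class name, the nominal resistance, and the 5% tolerance are all invented for illustration, but the control flow mirrors the idea — keys exist only in battery-backed volatile storage, and any out-of-tolerance mesh resistance wipes them before cutting power.

```python
# Toy model of a tamper-responsive key store. Keys live only in
# "volatile" memory; a mesh-resistance anomaly zeroizes them.
# All names and thresholds are illustrative, not from a real HSM.

class TamperResponsiveKeyStore:
    MESH_NOMINAL_OHMS = 100.0  # expected resistance of the intact mesh
    MESH_TOLERANCE = 0.05      # ±5% drift allowed before we call it tamper

    def __init__(self, key_material):
        self._sram = bytearray(key_material)  # battery-backed volatile store
        self._powered = True

    def check_mesh(self, measured_ohms):
        # Drilling or delayering breaks a trace, shifting resistance;
        # any excursion outside tolerance fires zeroization.
        drift = abs(measured_ohms - self.MESH_NOMINAL_OHMS) / self.MESH_NOMINAL_OHMS
        if drift > self.MESH_TOLERANCE:
            self.zeroize()

    def zeroize(self):
        # Overwrite first, then drop power: the keys must stay gone
        # even if the attacker restores the battery a moment later.
        for i in range(len(self._sram)):
            self._sram[i] = 0
        self._powered = False

    def read_key(self):
        return bytes(self._sram) if self._powered else None


store = TamperResponsiveKeyStore(b"\xde\xad\xbe\xef" * 4)
store.check_mesh(100.8)           # normal drift: keys survive
assert store.read_key() is not None
store.check_mesh(137.0)           # a broken trace shifts resistance
assert store.read_key() is None   # keys are gone before anyone reads them
```

The order of operations in `zeroize` is the point: overwrite, then de-power, so there is no window in which restoring the battery recovers anything.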
The attacker has to do all of this without triggering any of those sensors. Which sounds straightforward until you realize the sensors are specifically calibrated for the kinds of attacks you'd use.
The people designing the tamper response know the attack playbook. Focused ion beam, scanning electron microscopy, chemical decapsulation, low-temperature attacks to slow the zeroization response. So the defense has to be layered precisely because each individual countermeasure has a known bypass. The mesh can potentially be mapped if you're patient enough. The temperature sensor has a threshold. But stacking them means an attacker has to simultaneously defeat multiple independent systems without triggering any of them.
This is where the nation-state question gets uncomfortable, because a patient adversary with a full semiconductor lab can map a tamper mesh. It's hard, it's expensive, it takes time, but it's not magic.
It's not magic at all. The Stuxnet saga made this uncomfortably clear, actually, not as a hardware attack but as a demonstration of how deeply protected systems can be understood if you invest enough. The lesson from that era was that even highly protected industrial control systems leaked information through side channels, through timing, through power consumption, through electromagnetic emissions. The same is true for military hardware. The goal is always delay and cost, not absolute secrecy.
Let's talk about the firmware side of this, because the cryptographic module is one layer, but the actual operational software is another problem entirely.
This is where secure boot and signed firmware become critical. The basic idea is that every piece of code the device runs has to be cryptographically signed by a trusted authority, and the hardware verifies that signature before executing anything. If an adversary recovers a drone, pulls the storage, and tries to run modified firmware, the secure boot chain rejects it. You can't just swap in your own code and start the system.
That protects against running the system. It doesn't necessarily protect against reading the firmware.
And this is where obfuscation of custom silicon matters. If your processing is happening on a custom Application Specific Integrated Circuit, an ASIC, that was designed specifically for this platform, the adversary doesn't have a datasheet. They don't have reference designs. They're starting from scratch trying to understand what the chip even does, which requires destructive analysis that itself risks triggering tamper responses.
Field Programmable Gate Arrays, FPGAs, are interesting here too, because the configuration bitstream can be encrypted. So even if you physically extract the FPGA, you get an encrypted blob that's useless without the key that was just zeroed out.
That's a real protection layer, and it's used in exactly this context. Xilinx and Intel both have FPGA families with bitstream encryption and authentication specifically marketed at defense applications. The configuration memory can be made volatile as well, so power loss, which happens when a drone crashes, is itself a partial zeroization event.
Which brings us to the connectivity question, because Daniel specifically asked about what happens when the network is gone at the moment of loss. Cerberus-style remote wipe assumes you can reach the device. A crashed drone in Iranian territory obviously can't receive a wipe command.
The autonomous countermeasures become everything. Dead-man switches are the canonical solution here. The device maintains a heartbeat with some trusted system, and if that heartbeat is interrupted for longer than a defined threshold, the device initiates its own destruction sequence. The challenge is calibrating the threshold. Too short and you get false positives from legitimate signal loss. Too long and an adversary has a window.
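The heartbeat-and-threshold logic is simple enough to show directly. A minimal Python sketch, with a threshold value invented purely for illustration — the calibration tradeoff lives entirely in that one number:

```python
# Minimal dead-man switch: record the last heartbeat, and treat
# silence longer than the threshold as a loss-of-control event.
# The threshold value here is invented; real systems tune it to
# trade false positives (normal dropouts) against adversary time.

class DeadManSwitch:
    def __init__(self, threshold_s):
        self.threshold_s = threshold_s
        self.last_heartbeat_s = 0.0
        self.keys_destroyed = False

    def heartbeat(self, now_s):
        self.last_heartbeat_s = now_s

    def tick(self, now_s):
        if now_s - self.last_heartbeat_s > self.threshold_s:
            self.keys_destroyed = True  # autonomous zeroization fires


dms = DeadManSwitch(threshold_s=30.0)
dms.heartbeat(now_s=100.0)
dms.tick(now_s=120.0)   # 20 s of silence: within threshold
assert not dms.keys_destroyed
dms.tick(now_s=140.0)   # 40 s of silence: wipe fires
assert dms.keys_destroyed
```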
A crashed drone has already lost its heartbeat, presumably, so ideally the self-destruct fires before anyone gets to it.
In practice it's more complicated. The drone's own power systems may be damaged in the crash. The tamper response requires power to execute. This is why some implementations use capacitor banks, essentially a small internal power reserve specifically to ensure the zeroization circuit can fire even if the main power bus is severed. You want the last thing the device does, before it completely loses power, to be destroying its key material.
What about geofencing? Because that's a different trigger mechanism entirely.
Geofencing is elegant in concept. The device knows its operational area, and if GPS puts it outside that boundary, it treats that as a tamper event. NATO-operated drone programs have used geofencing as part of their autonomous response architecture. The problem is GPS spoofing. An adversary who can fake a GPS signal can potentially convince the device it's still inside the geofence, buying time before the self-destruct triggers. So geofencing works well as one layer but not as the only layer.
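A geofence trigger reduces to a containment test on reported position, which also makes the spoofing weakness visible in code. This sketch uses an invented bounding box; the comment marks the exact point where a spoofed GPS fix defeats the check:

```python
# Geofence-as-tamper-trigger sketch. Position outside the authorized
# box is treated as a tamper event. Coordinates are invented, and the
# weakness is right in the function signature: the check trusts
# whatever position the platform *reports*.

from dataclasses import dataclass

@dataclass
class GeoFence:
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat, lon):
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)

def position_event(fence, reported_lat, reported_lon):
    # "reported" is the operative word: if GPS is spoofed, this
    # function believes the lie. That is why geofencing works as
    # one layer, never the only layer.
    if not fence.contains(reported_lat, reported_lon):
        return "tamper"
    return "ok"

box = GeoFence(34.0, 36.0, 45.0, 48.0)
assert position_event(box, 35.1, 46.5) == "ok"
assert position_event(box, 32.0, 53.0) == "tamper"
```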
GPS denial is something Iran has specifically invested in. They've claimed GPS spoofing as part of how they brought down the RQ-170 in 2011, which, whether or not that's the full story, suggests they've thought about this attack vector.
The official U.S. account of the RQ-170 incident is, let's say, carefully vague. Iran's claim was that they spoofed the GPS signal and essentially landed the aircraft themselves. Independent analysts have been skeptical of the full claim but acknowledged that GPS vulnerabilities were likely a factor. What's interesting is what happened afterward, because Iran has spent years claiming progressive reverse-engineering of that platform. By around 2025 they were asserting they'd extracted meaningful technology from the RQ-170, though Western analysts dispute how much of the actual cryptographic and sensing payload survived the landing and any subsequent tamper responses.
Which is the crux of it, really. Even if you recover the airframe intact, the question is whether the sensitive payload is still intact. And a well-designed system should ensure the answer is no.
The uncomfortable reality is that "should" and "does" have a gap, and that gap is where adversary intelligence programs live. Physical destruction is surprisingly hard to guarantee. Thermite charges are the movie answer to this, and they exist in real applications, NSA-approved secure storage devices have used thermite-based destruction mechanisms, but there are significant tradeoffs. Thermite burns at around two thousand five hundred degrees Celsius and is not subtle. In a crashed vehicle, initiating a thermite charge risks secondary fires, risks harming recovery personnel, raises legal and proportionality questions under the laws of armed conflict, and may not reliably destroy specific components if the charge placement is off.
Thermite is real but not the universal solution it's portrayed as.
It's a last resort in specific applications. The more common approach for airborne platforms is to rely on the cryptographic and electronic countermeasures, the zeroization, the key destruction, the secure enclave wipe, and accept that the physical hardware may be recoverable while the sensitive information is not. The structural design of the platform can also contribute. Composite materials used in stealth aircraft don't yield meaningful information about their exact composition from physical examination the way metal alloys might. Custom ASICs without documentation are hard to reverse-engineer even when physically intact.
Let's pull on the Iran thread a bit more specifically, because the Hermes claim is more recent and more interesting to me than the ICBM claim, which is, as we established, confused from the start.
The IRGC's claim about downing over a hundred and seventy drones during the Ramadan War, including Hermes 900s, is notable. Press TV was reporting on this as recently as a few days ago. The Hermes 900 is an Elbit Systems platform, Israeli-made, sophisticated, used by multiple countries including Israel and several NATO members. If Iran has recovered one intact, that's a meaningful intelligence opportunity, not because they can necessarily decrypt the payload, but because the airframe design, the sensor mounting geometry, the RF signature characteristics, the materials, all of that is potentially useful even without cracking the cryptographic layer.
There are two different things an adversary gets from a recovered platform. The classified secrets, which the anti-tamper stack is specifically designed to deny, and the engineering intelligence, which is much harder to protect against because it's inherent to the physical object.
That's exactly the distinction that often gets lost in these discussions. You can zero out every cryptographic key, destroy every piece of firmware, and an adversary can still learn from the radar cross-section of the airframe, the thermal signature of the engine, the materials used in the wing construction. That's why stealth programs go to such lengths with composite layups and low-observable shaping, because even a recovered airframe tells you something about the design philosophy.
The Jericho claim is interesting precisely because it's so implausible. Jericho three is a road-mobile intercontinental ballistic missile. Iran doesn't have one. Iran hasn't shot one down. The claim doesn't even have a coherent mechanism. It reads as pure propaganda, and probably bad propaganda at that, because anyone who checks the sourcing immediately finds the "American" framing is wrong.
The FAS Nuclear Information Project's work on Iran's ballistic missile program is pretty comprehensive on what Iran has actually reverse-engineered. They've had success with Soviet Scud variants. There's evidence of reverse-engineering from the North Korean No-dong lineage. They've worked on the Northrop Grumman Scout rocket. But Jericho, there's nothing there. The claim doesn't map onto any known Iranian recovery event.
Which raises the question of who the audience is. Because this isn't aimed at Western analysts who will immediately check. It's aimed at a domestic audience and a regional audience where the specific technical details are less important than the general message, which is "we are matching the enemy."
That's the propaganda function. And it's worth noting that this is a pattern with Iranian military claims going back decades. The claims tend to be structurally unfalsifiable in ways that serve domestic consumption. "We reverse-engineered it" is hard to disprove because you'd have to show what they actually have, which requires intelligence access. So the claim floats.
Though the RQ-170 case is instructive, because Iran did recover that airframe, did show it on state television, and Western analysts could at least partially assess what they had. The assessment was that the airframe was largely intact but that the sensitive payload systems had either been destroyed or were not recoverable in functional form.
Iran's subsequent "reverse-engineered" versions of the RQ-170 were, to put it charitably, approximate. The publicly displayed copies had geometry that didn't match the original, which suggests they were working from photographs and physical measurements of the exterior rather than from any understanding of the internal systems.
Which is actually a decent illustration of what anti-tamper engineering achieves in practice. You can't stop someone from copying the shape of the airframe. You can stop them from understanding why that shape works, what the internal systems do, and how to replicate the actual capability.
That's a good way to frame the achievable goal. The physical object is always at some risk. The functional knowledge is what you're protecting. And the layered cryptographic and electronic countermeasures are specifically designed to ensure that recovering the object doesn't translate into recovering the knowledge.
Cold War context is relevant here too, because this is not a new problem. The Soviets were extremely systematic about exploiting captured Western hardware during Vietnam. They had dedicated programs for recovering downed American aircraft and extracting whatever they could. And the U.S. was aware of this and designed accordingly.
The Vietnam-era exploitation programs are fascinating from a historical standpoint. The Soviets recovered AIM-9 Sidewinder missiles, recovered F-105 avionics, recovered pieces of early stealth technology. Some of what they learned accelerated their own programs. The lesson absorbed by U.S. defense planners was that hardware would be lost in any sustained conflict and the design philosophy had to account for that. You design for the assumption of loss, not the hope of recovery.
The modern version of that philosophy is the layered anti-tamper stack we've been describing. Assume the airframe gets recovered. Ensure the knowledge doesn't.
With the honest caveat that against a nation-state with enough time and resources, some knowledge will leak. The question is how much, how fast, and whether it's still operationally relevant by the time they've extracted it. A drone's electronic warfare suite that takes three years to partially reverse-engineer is probably running updated software by then anyway.
That's a surprisingly reassuring thought. Right, let's get into the autonomous countermeasures in more depth, because I think the dead-man switch architecture is underappreciated and the geofencing piece has some interesting wrinkles we haven't fully pulled on yet.
Then we should come back to the signed firmware and secure boot chain, because there's a specific mechanism there around how you prevent an adversary from even booting the recovered system that I think is worth spelling out carefully.
Let's do it.
The Iranian ICBM claim is worth sitting with for a moment, because it's not just wrong, it's wrong in a specific way that tells you something about how these claims are constructed.
The Jericho series, Jericho one, two, three, those are Israeli ballistic missiles. Developed in Israel, deployed by Israel, not American systems in any meaningful sense. The CSIS Missile Defense Project's documentation on Jericho three is pretty clear on the lineage. So the framing of "American Jerichos" is confused from the first word. Iran doesn't have one. There's no recovery event that could have initiated a reverse-engineering program.
Yet the claim circulates. Which tells you the intended audience isn't people who will check. The domestic messaging is "we are matching and surpassing the enemy," and the specific technical accuracy is incidental to that goal.
The FAS Nuclear Information Project tracks what Iran has actually reverse-engineered. Scud variants, North Korean No-dong derivatives, some work on the Northrop Grumman Scout rocket. That's a real and non-trivial program. But Jericho, there's nothing. No recovery event, no plausible acquisition pathway.
The propaganda claim is almost entirely decoupled from the actual capability. Which raises the harder question: when Iran makes claims that do have a physical basis, like the Hermes drone recoveries, how much do we credit those?
That's where it gets complicated. The IRGC claim of downing over a hundred and seventy drones during the Ramadan War, including Hermes 900s, that's a different category of claim. Press TV was running detailed reporting on this just days ago. There's at least a physical event underlying it. The question is what they can actually extract from what they've recovered, and that's exactly where the anti-tamper engineering question becomes real rather than hypothetical.
Which is where we're headed.
Let's actually open the hood on the anti-tamper stack, because I think people have a vague sense that "there are protections" without understanding what the layers actually are and what each one is doing.
Which ones are real versus which ones are from the movies.
Right, thermite being the obvious example of something that exists but is much more constrained in practice than the fiction suggests. Start from the inside out. The most fundamental layer is the cryptographic key material. Any sensitive system, communications encryption, navigation data, mission payload processing, the keys that protect all of that are stored in hardware security modules, HSMs, purpose-built chips designed specifically so that the key material never exists in a readable form outside the secure enclave. FIPS 140, the standard the National Institute of Standards and Technology maintains for cryptographic modules, mandates specific tamper-response behaviors. The current version, FIPS 140-3, requires that a module detect physical intrusion and zeroize, meaning overwrite with zeros, all sensitive data before an attacker can read it.
Zeroization is fast in a way that physical destruction isn't.
The detection can be triggered by voltage changes, temperature excursions, light exposure if the enclosure is breached, pressure sensors if the casing is opened. The key material is gone before a human hand can reach it. That's the design goal. What you're left with is a chip that contains the processing logic but none of the secrets, which is substantially less useful to an adversary.
Though the processing logic itself isn't nothing.
No, and this is where obfuscated custom silicon matters. Military systems increasingly use application-specific integrated circuits, ASICs, that are designed in-house or under classified programs. There's no publicly available datasheet. There's no reference design an adversary can compare against. Reverse-engineering a custom ASIC, even a physically recovered one, requires electron microscopy, layer-by-layer delayering, months of work by a team of specialists, and you still end up with a netlist that tells you what the logic does without necessarily telling you why, what the protocol it implements is, or what the broader system architecture looks like.
The ASIC obfuscation is raising the cost and the timeline, not creating an absolute barrier.
That's the honest framing for every layer in this stack. You're buying time and raising cost. You're not creating an impenetrable wall. The goal is that by the time an adversary has extracted something meaningful, the operational relevance of that information has degraded. Software has been updated, keys have been rotated, the vulnerability they've found has been patched.
What about the firmware layer? Because this is where the Hermes drone question gets interesting to me. The airframe might be recoverable but can you actually boot the thing?
Signed firmware and secure boot chains are specifically designed to prevent that. The way it works is that the bootloader, the first code that runs when the system powers on, will only load a subsequent stage if it carries a valid cryptographic signature from the manufacturer. The signing key is held by Elbit, or Lockheed, or whoever built the platform. An adversary who recovers the hardware can't generate a valid signature because they don't have the private key. So even if they replace the storage media, even if they try to load their own operating system, the secure boot chain rejects it and the system won't operate.
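The verify-before-execute flow can be sketched compactly. One important caveat: real secure boot chains use asymmetric signatures, where the private signing key never leaves the manufacturer; this toy uses HMAC with a baked-in secret purely to keep the example self-contained while showing the control flow — check the next stage's signature, halt on any mismatch.

```python
# Secure-boot sketch. Real chains use asymmetric signatures (RSA or
# ECDSA, with the private key held by the manufacturer); HMAC with a
# fixed secret stands in here only to illustrate the chain-of-trust
# flow. Stage names and the key are invented.

import hashlib
import hmac

ROOT_KEY = b"burned-into-fuses-at-factory"  # illustrative stand-in

def sign(stage_image):
    return hmac.new(ROOT_KEY, stage_image, hashlib.sha256).digest()

def boot(stages):
    # Each stage is (image_bytes, signature). Verify before running;
    # one bad signature halts the whole chain.
    executed = []
    for image, signature in stages:
        if not hmac.compare_digest(sign(image), signature):
            executed.append("HALT: unsigned stage rejected")
            break  # refuse to run anything further
        executed.append("ran " + image.decode())
    return executed

good = [(b"bootloader2", sign(b"bootloader2")), (b"os", sign(b"os"))]
assert boot(good) == ["ran bootloader2", "ran os"]

# An adversary swaps in their own OS image without the signing key:
tampered = [(b"bootloader2", sign(b"bootloader2")),
            (b"adversary-os", b"\x00" * 32)]
assert boot(tampered) == ["ran bootloader2", "HALT: unsigned stage rejected"]
```

The adversary's problem in the second run is exactly the one described above: without the private key, no valid signature can be generated, so the replacement image never executes.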
Which means the recovered drone is more useful as a physical artifact than as a functional system.
You can measure it, photograph it, study the materials, analyze the RF characteristics of the antennas. You cannot fly it. You cannot access the mission data in any meaningful way because the decryption keys are gone via zeroization and you can't run your own code because secure boot rejects it.
The RQ-4 Global Hawk case from twenty nineteen is worth pulling in here. Iran shot one down over the Strait of Hormuz, claimed it was in Iranian airspace, the U.S. said it was in international airspace. Regardless of the territorial dispute, the hardware went down and Iran had access to it.
The RQ-4 is a different class of platform from a drone like the Hermes. It's a high-altitude long-endurance reconnaissance aircraft, enormous, sophisticated, expensive, one of the most capable ISR platforms the U.S. operates. What made that incident interesting from an anti-tamper standpoint is that it was shot down, not crashed due to malfunction. A missile intercept means the destruction is external and unpredictable. You can't guarantee which components survived intact.
The self-protection mechanisms have to assume they might be triggered in a damaged, partially functional state.
This is the thermite tradeoff you mentioned earlier. Thermite burns at around two thousand five hundred degrees Celsius. NSA-approved secure storage devices, certain classified key management hardware, have used thermite-based destruction mechanisms. But thermite in a platform that's already been hit by a missile, in an airframe that may be tumbling, over water or populated territory, the initiation reliability drops and the collateral risk goes up. The more operationally realistic solution is that the cryptographic protections are doing the heavy lifting and the physical destruction, if it happens at all, is targeted at specific high-value components rather than the whole platform.
The movie version where the whole thing explodes dramatically is less accurate than a version where a small charge quietly destroys a chip the size of your thumbnail.
Much less accurate. And in many cases, the "destruction" is purely electronic. No charge, no explosion. Just a capacitor that discharges through the key storage medium the moment tamper conditions are detected. The physical hardware survives. The secrets don't.
Which is actually more elegant and harder to defeat, because there's nothing dramatic to observe or interrupt.
The challenge with any autonomous destruction mechanism is the false positive problem. You need the system to be sensitive enough to trigger under genuine tamper conditions but not so sensitive that it wipes itself when a maintenance technician opens the housing for a legitimate inspection. FIPS 140-3 has specific requirements around this, authentication requirements before certain enclosure accesses, audit logging, tamper-evident seals that record intrusion without necessarily triggering full zeroization for lower-sensitivity access events.
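That graduated policy — log some events, zeroize on others, gated by authentication — amounts to a small decision table. A sketch under invented event names and an invented policy, just to make the false-positive tradeoff concrete:

```python
# Graduated tamper response sketch: not every enclosure event should
# zeroize. An authenticated maintenance open gets logged; an
# unauthenticated open fires the wipe. Event names and the policy
# table are invented for illustration.

def respond(event, authenticated):
    if event == "enclosure_open":
        # Legitimate maintenance happens; authentication is what
        # separates it from an attack.
        return "log_only" if authenticated else "zeroize"
    if event in ("mesh_break", "voltage_glitch", "temp_excursion"):
        # No legitimate reason for these, regardless of credentials.
        return "zeroize"
    return "log_only"

assert respond("enclosure_open", authenticated=True) == "log_only"
assert respond("enclosure_open", authenticated=False) == "zeroize"
assert respond("mesh_break", authenticated=True) == "zeroize"
```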
Tamper-evident seals are interesting because they're essentially forensic. They don't prevent anything, they just tell you it happened.
That's a real and underappreciated layer. TamperTech and similar companies produce seals with forensic taggants, microprinting, void patterns that can't be replicated with available materials. The value isn't physical security, it's chain of custody. If a recovered platform shows evidence of tampering before your intelligence team gets to it, that tells you something about what the adversary may have accessed. It's information about information access.
The Hermes drone firmware self-destruct question is one I want to push on, because Elbit is an Israeli company and the Israeli military's approach to this is not identical to the U.S. approach. There are some differences in how aggressively autonomous countermeasures are implemented.
Israeli doctrine on this has historically been somewhat more willing to accept aggressive autonomous responses, partly because the threat environment is more immediate and the legal framework around proportionality is interpreted differently in some operational contexts. Whether the specific Hermes 900 units operating in the Ramadan War theater had active self-destruct mechanisms beyond the standard cryptographic layer, I don't know. That's not public information. What I'd expect is the secure boot chain, the zeroization on tamper detection, and probably some degree of mission data encryption with keys that are either destroyed on loss of authenticated connection or were never stored on the platform at all.
That last point is interesting. Keys that were never on the platform.
Ephemeral key architectures. The platform receives session keys for a specific mission, those keys expire or are invalidated when the mission ends or when authenticated communication with the ground station is lost. There's nothing to zeroize because there's nothing persistent to begin with. The recovered hardware has the processing capability but no access to the cryptographic material that would make the data meaningful. It's a fundamentally different threat model from one where you're trying to destroy stored secrets before an adversary reaches them.
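A minimal sketch of that ephemeral-key idea, with invented derivation inputs: the session key is derived fresh per mission, held only in memory, and dropped on link loss — there is never a persistent copy to zeroize.

```python
# Ephemeral-key sketch: the mission key is derived per session, held
# only in memory, and dropped on loss of the authenticated link.
# Derivation inputs and names are invented for illustration; real
# systems would use a proper key-agreement protocol.

import hashlib
import secrets

class MissionCrypto:
    def __init__(self):
        self._session_key = None  # nothing persistent, ever

    def start_mission(self, ground_station_secret, mission_id):
        # Fresh key per mission: never written to storage, never reused.
        nonce = secrets.token_bytes(16)
        self._session_key = hashlib.sha256(
            ground_station_secret + mission_id + nonce).digest()

    def link_lost(self):
        self._session_key = None  # nothing left to extract

    def has_key(self):
        return self._session_key is not None


mc = MissionCrypto()
mc.start_mission(b"gs-secret", b"mission-042")
assert mc.has_key()
mc.link_lost()
assert not mc.has_key()  # recovered hardware holds no usable key
```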
The dead-man switch version of this isn't a charge, it's a timeout.
A timeout tied to authenticated communication. Lose the handshake, lose the keys. Which also happens to be robust against GPS spoofing or jamming, because the authentication is happening over an encrypted datalink, not a position assertion. An adversary who jams GPS doesn't automatically trigger key loss, but an adversary who shoots down the platform and severs the authenticated link does.
That's a more sophisticated architecture than I think most people assume these systems have.
The investment in trusted computing for military platforms has been substantial over the last fifteen years. DARPA's Trusted Integrated Circuits program, the NSA's Commercial Solutions for Classified program, the push toward FIPS 140-3 compliance across all DoD cryptographic modules. This is a field that's been taken seriously at the engineering level even when the public discussion tends toward the dramatic end of the spectrum.
Okay, the autonomous countermeasures question is where I want to go next, because we've established what the hardware does when it's functioning and connected. The real challenge is the disconnected case—that's where things get messy.
And that's the harder engineering problem. When the platform has network access, you have options. You can push a remote wipe, you can invalidate session keys server-side, you can monitor for anomalous behavior. The moment that link is severed, everything has to be autonomous and pre-programmed, which means you're making decisions about adversary behavior before you know what the adversary is actually going to do.
Which is a design problem that looks easy until you enumerate the edge cases.
The geofence trigger is a good example of both the power and the limitation. The concept is straightforward: the platform knows its authorized operational area, and if it finds itself outside that boundary, it initiates some protective response. That could be key zeroization, it could be a return-to-base command, it could be a more aggressive countermeasure. NATO-operated drone platforms have used geofencing as a layer of this kind. The problem is that geofencing relies on the platform knowing where it is, which means it relies on GPS or equivalent positioning.
GPS is spoofable.
Spoofable and jammable. An adversary who can degrade the positioning signal can potentially manipulate what the platform believes about its own location. The sophisticated implementations cross-reference multiple positioning sources, inertial navigation, terrain matching, star trackers on certain high-altitude platforms. But each additional source is additional complexity and additional potential failure mode. The simpler and more robust trigger is the authenticated communication timeout we talked about, because that doesn't depend on the platform's self-reported position.
Dead-man switch as a pure communication check.
The ground station isn't just sending commands, it's continuously asserting its identity to the platform via cryptographic handshake. If that assertion stops, the platform treats it as a loss-of-control event regardless of whether it still has GPS lock, regardless of whether it's still airborne. That's a harder thing for an adversary to fake because they'd need the ground station's private key to generate a valid handshake continuation.
The tamper-evident enclosure side of this is interesting because it operates on a completely different timescale. Zeroization is microseconds. A tamper-evident seal is... It's forensic, not preventive.
The layered approach is the whole point. Tamper-evident seals from companies like TamperTech use forensic taggants, microprinting, void patterns that activate when the seal is disturbed. They're not stopping anyone. What they're doing is giving your intelligence community information about what an adversary accessed and when. If a recovered platform shows seal disturbance consistent with a specific date, that anchors a timeline. It tells you what the adversary might have learned and when they learned it, which informs what you need to rotate or update in response.
Even the forensic layer has operational value, just downstream.
It's damage assessment infrastructure. You're not preventing the breach, you're characterizing it precisely enough to respond correctly. That's underappreciated in public discussions that tend to focus on the dramatic prevention side.
The Cold War comparison is worth making here, because the Soviets were doing this systematically and we have declassified records of how well it worked.
Vietnam is the most documented case. The Soviets had access to substantial quantities of U.S. military hardware through the North Vietnamese. Crashed F-105s, F-4s, AIM-7 Sparrow missiles that were recovered intact or nearly intact. Soviet technical teams were on the ground, and they extracted real information. The AIM-7 recovery is probably the most significant. They got working examples, they reverse-engineered the radar seeker, and that fed directly into improvements in Soviet air-to-air missile design.
Physical recovery in that era translated into functional intelligence.
Because the protections weren't there. Seventies-era avionics weren't designed with adversary recovery as a threat model in the same way. The cryptographic protections were much weaker, the key storage wasn't tamper-responsive, there was no secure boot chain. What you had was hardware that, if you recovered it, you could largely understand by examining it. The delta between then and now is enormous.
Though the Soviet program also showed that even when you extract functional intelligence, operationalizing it takes time. They got the AIM-7 seeker. It still took years to produce a comparable system.
The absorption gap is real. Understanding a technology and being able to manufacture it are different problems. Iran faces this acutely. Even if they extract meaningful information from a recovered Hermes 900, producing a comparable sensor package requires materials, fabrication facilities, supply chains, and engineering talent that their sanctions environment makes very difficult to assemble. The intelligence value of the hardware might be high but the manufacturing pathway is constrained.
Which brings us back to the ICBM claim, because that's where the gap becomes absurd. The Jericho series is Israeli-developed. Jericho three is a three-stage solid-fuel ballistic missile with a range in the thousands of kilometers. Iran has no recovery event. There's no crashed Jericho sitting in a warehouse in Tehran. The claim isn't just exaggerated, it's missing the foundational premise that would make any reverse-engineering possible.
The CSIS Missile Defense Project documentation on Jericho three is pretty clear on the development lineage. This is a system built on Israeli indigenous capability developed over decades, with American assistance in early stages but not a system Iran has ever had physical access to. What Iran has reverse-engineered are things they actually recovered: variants of Soviet Scud designs, the Northrop Grumman Scout rocket in earlier decades. Those are real programs with real physical starting points.
The Jericho claim is propaganda that doesn't even bother to construct a plausible mechanism.
Which is actually informative. When a state actor makes a claim this disconnected from physical reality, it's not aimed at technical audiences. It's aimed at a domestic audience that won't check the premise and an international audience where the headline has more reach than the correction. The goal is the assertion, not the argument.
It muddies the water around claims that do have physical basis. The Hermes drone recoveries are real. The IRGC claiming over a hundred and seventy drones downed during the Ramadan War, including intact Hermes 900s, that's a claim with physical evidence behind it. The question is what they can actually extract, not whether they have the hardware.
Right, and conflating the plausible claim with the implausible one is strategically useful if you want to either inflate your adversary's capabilities or deflate them wholesale. The honest analysis has to separate them. Physical recovery happened. Functional intelligence extraction is limited by the protections we've been describing. Strategic capability replication is limited further by manufacturing constraints.
The nation-state adversary question is where I want to be honest about the limits of these protections, though. Because everything we've described raises cost and extends timelines. Against a patient adversary with unlimited budget and years to work, the calculus changes.
It does, and that's the frank answer. The goal of anti-tamper engineering has never been absolute prevention. The NSA's own framing on this, going back to the trusted computing work from the early two thousands, is cost imposition and time delay. If breaking your system takes three years and fifty million dollars in specialized equipment and expertise, and the operational relevance of the information degrades in eighteen months because you've rotated your keys and updated your software, you've won. Not because they couldn't break it, but because breaking it didn't give them anything actionable.
The Stuxnet era taught everyone that even highly protected systems leak information in unexpected ways. Side channels, timing attacks, physical emanations. The protections address the obvious attack vectors. Nation-states look for the non-obvious ones.
They find them, eventually. The honest position is that anti-tamper engineering is a cost-benefit race, not a solved problem. The engineering community knows this. The public discussion often doesn't reflect it because "we made it harder and more expensive" is a less compelling headline than "our systems are unbreakable," which is never true. So if we're being practical—
—what's the takeaway then? Not for a defense engineer, but for someone just trying to think clearly about this space.
The core insight is that layering is the whole game. No single mechanism is sufficient. Zeroization alone fails if the tamper sensor doesn't fire. Signed firmware alone fails if the signing key is compromised. Geofencing alone fails if GPS is spoofed. The systems that hold up against serious adversaries are the ones where breaking any single layer still leaves you facing two more, and each additional layer multiplies the time and cost required.
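As a toy illustration of that layering point, imagine key release gated on several independent conditions, so defeating any single check yields nothing. The condition names and the all-or-nothing policy below are hypothetical; real systems enforce this kind of gating in tamper-responsive hardware rather than application code.

```python
# Toy illustration of layered gating: key material is released only when
# every independent protection layer reports healthy. Condition names are
# hypothetical, for illustration only.

REQUIRED_LAYERS = (
    "secure_boot_verified",   # firmware signature chain checked out
    "tamper_loop_intact",     # enclosure sensors report no intrusion
    "heartbeat_fresh",        # authenticated ground link still alive
    "position_authorized",    # geofence check passed
)

def release_keys(checks: dict) -> bool:
    """Every layer must independently pass; a missing report counts as a
    failure, which is the conservative default."""
    return all(checks.get(name, False) for name in REQUIRED_LAYERS)

def key_state(checks: dict) -> str:
    # An adversary must defeat all layers simultaneously, not just one,
    # which is what multiplies the time and cost of an attack.
    return "keys_available" if release_keys(checks) else "keys_withheld"
```

The design choice worth noticing is the conservative default: an absent or ambiguous layer report blocks release rather than permitting it.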
Which is a design philosophy, not a product.
It's a systems engineering discipline. And the implication for procurement and oversight is that you can't evaluate anti-tamper protection by checking a single box. FIPS 140-3 compliance matters. Secure boot matters. Tamper-responsive key storage matters. But the question that actually needs answering is whether those layers are coherently integrated, because a system with five excellent individual protections that don't talk to each other correctly can still fail at the seams.
The oversight angle is where I think listeners who aren't defense engineers can actually apply pressure. Because the decisions about what protection standards are required for exported hardware, what gets licensed for sale to which partners, what R and D security protocols cover sensitive manufacturing processes, those are policy decisions with real lobbying pressure on them from the commercial side.
The export control piece is consequential. When a system is approved for export to a partner nation, the anti-tamper requirements in that export version sometimes differ from what the domestic military version carries. That's a real gap. And the intelligence community has flagged it repeatedly. Hardware that ends up in a conflict zone at a lower protection tier than its domestic equivalent is a meaningful risk.
The open-source intelligence dimension of this is also worth naming, because the Jericho claim is a good example of how quickly bad claims can be stress-tested publicly now. The CSIS missile threat database, the FAS nuclear information project documentation, these are publicly accessible and detailed enough that the basic premise of a claim can be evaluated without a security clearance.
OSINT has changed the epistemics around these claims. Satellite imagery, flight tracking data, technical literature that's in the public domain. The Jericho claim doesn't survive five minutes against the CSIS documentation on the system's development lineage. That kind of public accountability didn't exist at the same fidelity twenty years ago.
The practical ask for listeners is: when a state actor makes a hardware exploitation claim, apply the same pressure you'd apply to any technical claim. What's the physical recovery event? What's the manufacturing pathway? What are independent analysts saying? Because the headline is never the analysis—and that's where Herman can pick up.
And that's a discipline that applies well beyond military hardware. The instinct to take the headline at face value, to assume the claim construction is honest, that's what propaganda depends on.
The arms race framing is probably the most accurate way to leave people with this. Every protection that gets engineered, someone is working on a counter. Every counter that gets developed, the protection layer adapts. It's not a problem that gets solved. It's a problem that gets managed, expensively, continuously, by people who have to be right every time while the adversary only has to find one gap.
The honest open question is whether the current generation of protections is keeping pace with the current generation of adversary capability. Nation-states have access to electron microscopes, focused ion beam systems, and the kind of patient institutional effort that individual exploits don't require. The cost-imposition model works when your protections are expensive enough to exceed the intelligence value of what's inside. That calculation shifts as adversary tooling gets cheaper and more capable.
Which is an unsettled question. I don't think anyone has a confident answer to it.
And that uncertainty is probably the most important thing to sit with coming out of this episode. Not the specific mechanisms, not the specific claims from Tehran, but the structural reality that this is an ongoing contest with no finish line.
Good place to end. Big thanks to Hilbert Flumingtop for producing this one, and to Modal for keeping the infrastructure running. This has been My Weird Prompts. You can find all two thousand two hundred and eighty-three episodes at myweirdprompts. We'll see you next time.