Daniel sent us this one, and honestly it's got more layers than I expected. He was reading about internet connectivity being partially restored to Iran after what's being called the longest internet blackout in recorded history. And amid all the commentary about what services were coming back online, someone noted that pornography wasn't among them, which of course it wouldn't be, since it's illegal there. But that sparked a bigger question. We tend to treat access to pornography as almost a bellwether of internet freedom in the West. Yet there's a tension there, because some of what's available depicts violence, racial stereotyping, scenarios that would be creepy or prosecutable in any other context. And Daniel's hearing murmurs that countries around the world are reconsidering the hands-off approach, tightening regulations. So he's asking, what's the actual state of play in 2026? Is it really just the authoritarian states that restrict it, or is something shifting more broadly?
Before we dive into that, quick note. DeepSeek V4 Pro is writing our script today. So if anything sounds unusually coherent, that's why.
I was going to say, the prose feels suspiciously well-structured. I assumed you'd been editing.
I resent that. But let's get into this, because the Iran situation is genuinely historic in scale. We're talking about a blackout that began in late January 2026 and has now stretched past three months. The BBC and Al Monitor have both been tracking this. It's the longest nationwide internet shutdown any country has ever imposed in the internet era.
I mean, think about what that actually means. No email, no messaging apps, no online banking, no access to international news. For a country of nearly ninety million people. That's not just an inconvenience, that's an economic amputation.
And what's fascinating is the partial restoration pattern. Al Jazeera reported on April twentieth that Iran was expanding limited access, but it's heavily tiered. Businesses and government institutions got connectivity back first. Universities got some access. But individual citizens, regular home users, they're still largely cut off, and even those who are reconnected are dealing with a heavily filtered version of the internet. Social media platforms remain blocked. News sites are restricted. And yes, pornography is completely inaccessible, which is consistent with Iranian law but takes on a different character when it's part of a broader information blockade.
Let's establish the baseline. What actually is Iran's legal framework here? Because Daniel's right that it's illegal, but I think a lot of listeners might not know the specifics.
Iran operates under Islamic law, and pornography is criminalized under the country's penal code. Production, distribution, possession, it's all prohibited. The government runs a nationwide filtering system that blocks millions of websites. They've been doing this for decades, well before the current blackout. What's changed is the comprehensiveness. Under normal circumstances, tech-savvy Iranians use VPNs and proxy servers to bypass the filters. During the blackout, even those workarounds were rendered useless because the underlying infrastructure was throttled or shut down entirely.
The blackout didn't just enforce existing law more effectively. It eliminated the entire ecosystem of circumvention that had become normalized.
And that's where Daniel's question gets really interesting. He's pointing to a tension that a lot of people in the West don't like to examine too closely. We've positioned unrestricted access to pornography as a marker of a free and open internet, but we simultaneously acknowledge that a lot of what's out there is disturbing.
Let's be specific about what we mean by disturbing. Daniel mentioned depictions of violence, even mild forms, and scenarios involving racial stereotyping. And I think anyone who's spent any time online knows exactly what he's talking about. The issue isn't just that explicit material exists, it's that the platform model incentivizes escalation. Content that's more extreme, more transgressive, gets more engagement. The algorithms don't distinguish between healthy and unhealthy attention.
There's been a lot of research on this. The normalization of choking during sex, for example, is something that researchers have traced directly to pornography consumption patterns over the past decade. What was once considered a niche practice has become so mainstream that surveys show a significant percentage of young adults now consider it a default part of sexual activity. And that's happening without any real public conversation about consent, safety, or the fact that it can cause brain damage even when done quote unquote correctly.
That's a grim detail.
Even brief pressure on the carotid arteries can cause micro-strokes or cumulative neurological damage. There's no safe way to restrict oxygen to the brain. But you wouldn't know that from the way it's depicted.
We've got this strange situation where the West says, look at Iran, they censor the internet, they block pornography, how authoritarian. And Iran says, we're protecting our citizens from moral corruption. And we in the West say, well, that's ridiculous, adults should be free to view what they want. But then we don't really grapple with the content of what's being viewed, or the fact that some of it would be criminal if it weren't being distributed as entertainment.
That's where the global picture gets complicated, because it's not actually a binary between theocratic authoritarian states that ban it and liberal democracies that permit it. There's a whole spectrum of regulatory approaches, and that spectrum has been shifting.
Walk me through it. What does the map actually look like?
Let's start with the outright bans. Beyond Iran, you've got Saudi Arabia, the UAE, Qatar, most of the Gulf states. China blocks pornography through the Great Firewall, though enforcement is inconsistent and domestic production exists. North Korea obviously. Pakistan, Bangladesh, Indonesia have varying degrees of prohibition, though Indonesia's enforcement is notoriously patchy. Most of these are what you'd expect, countries with religiously influenced legal systems or authoritarian governments.
So far, so predictable. What about the middle ground?
This is where it gets interesting. Because you've got countries that don't ban pornography outright but impose significant restrictions that would surprise a lot of Western observers. South Korea, for example, blocks pornographic websites at the ISP level. It's technically illegal to produce or distribute pornography there, though the law is selectively enforced and there's a massive underground market. Japan has a famous legal contradiction where pornography is widely available but genitals must be pixelated under obscenity laws. It's a multi-billion dollar industry that operates under a legal fiction.
The pixelation thing has always struck me as one of the more peculiar compromises in regulatory history. It's like saying, we acknowledge that this material exists and is consumed at scale, but we're going to maintain a formal pretense that it's not quite what it obviously is.
That pretense has real consequences. It shapes the entire industry. It affects what gets produced, how it's distributed, what kinds of content are economically viable. But here's the thing. The countries that are actually driving the conversation about rethinking pornography regulation aren't the ones with outright bans. They're the liberal democracies that are starting to ask whether the hands-off approach has gone too far.
Give me examples.
The UK is probably the best case study. The Online Safety Act, which passed in 2023 and has been coming into force in phases, requires platforms to prevent children from accessing pornography. It mandates age verification. And it's not just a suggestion. Ofcom, the communications regulator, has real enforcement powers: it can fine companies up to eighteen million pounds or ten percent of qualifying worldwide revenue, whichever is greater. For a company like Pornhub's parent, that's a potentially ruinous sum.
How's that actually playing out? Because I remember age verification laws being proposed years ago and they kept getting abandoned over privacy concerns and technical feasibility.
That's exactly what happened. The UK tried this in 2019 with what was called the porn block, and it collapsed. The technical challenges were real, the privacy concerns were legitimate, and the political will wasn't there. But the Online Safety Act is different. It's broader in scope and it's actually being enforced. Ofcom has been issuing guidance, setting deadlines, and platforms are scrambling to comply. Some have simply geo-blocked the UK entirely rather than deal with the requirements.
Which is its own interesting data point. A platform choosing to exit a major market rather than implement age verification tells you something about their business model.
It tells you that their user base includes a lot of people who can't or won't verify their age, and that those users are valuable enough that the compliance costs aren't worth it. But other platforms are staying and implementing verification systems. The technology has improved since 2019. There are now methods that can verify age without creating a permanent record of what someone viewed, which was the big privacy objection.
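For listeners who want the mechanics, here's a minimal sketch of the shape those schemes take, written in Python for the show notes. To be clear about assumptions: this is a toy illustration, not any vendor's actual protocol, and every name and number in it is invented. The idea is a division of knowledge. A verification service checks your ID once and issues a short-lived signed token asserting only that the bearer is over eighteen. The site checks the signature against the service's published key, so the site never learns who you are and the service never learns where the token was spent.

```python
import base64
import json
import os
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Verification service side (hypothetical provider) ---
# After checking ID out of band, the service issues a signed token that
# carries no identity: only the claim, an expiry, and a random nonce.
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()  # published; sites pin this key


def issue_token() -> str:
    claim = {
        "over_18": True,
        "exp": int(time.time()) + 600,   # ten-minute lifetime
        "nonce": os.urandom(8).hex(),    # makes each token unlinkable
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = signing_key.sign(payload)
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())


# --- Content site side ---
# The site learns only that the bearer is over eighteen. It never sees
# an identity, and the issuer never learns which site checked the token.
def check_token(token: str) -> bool:
    payload_b64, _, sig_b64 = token.partition(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    try:
        public_key.verify(base64.urlsafe_b64decode(sig_b64), payload)
    except InvalidSignature:
        return False
    claim = json.loads(payload)
    return bool(claim.get("over_18")) and claim["exp"] > time.time()


print(check_token(issue_token()))  # True
```

Real deployments add refinements, blind signatures for instance, so that even the issuer can't correlate a token with the original ID check. But the division of knowledge is the core idea, and it's what answers the 2019-era privacy objection.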
What about the rest of Europe?
France has been particularly active. In 2024, they passed a law requiring age verification for pornographic sites, and they've been aggressive about enforcement. The French regulator, Arcom, has ordered several major sites to block access for French users if they can't comply. Germany has had age verification requirements for years, though enforcement has been spotty. The European Union as a whole has been moving toward stricter regulation through the Digital Services Act, which designates very large online platforms, including Pornhub, as subject to enhanced obligations.
The EU is treating pornography platforms the same way they treat Facebook or YouTube in terms of systemic risk assessment and mitigation.
And that's a significant shift. It means these platforms have to assess the risks their services pose to minors, to vulnerable adults, to society broadly. They have to explain what they're doing to mitigate those risks. And regulators can audit those assessments. It's not censorship in the traditional sense, but it's a far cry from the anything-goes model that characterized the early internet.
What about the United States? Because I feel like we've been hearing about age verification laws at the state level, and it's been a bit of a patchwork.
The US situation is chaotic. As of early 2026, more than twenty states have passed laws requiring age verification for pornography sites. Louisiana was the first, back in 2022. Texas, Utah, Arkansas, Virginia, Montana, North Carolina, several others have followed. The laws vary in their specifics, but the general model is the same. If you're a site with a significant portion of pornographic content, you need to verify that your users are adults.
Are these laws surviving court challenges?
Early on, they split. Some courts blocked the laws on First Amendment grounds, but the Fifth Circuit upheld Texas's, and in June 2025 the Supreme Court affirmed that decision in Free Speech Coalition v. Paxton, holding that age verification for sites with a substantial share of sexually explicit material triggers only intermediate scrutiny and survives it. The legal arguments are worth walking through. The states argued that this is like requiring age verification for buying alcohol or cigarettes online, which is well-established. The challengers argued that requiring identification to access lawful speech creates a chilling effect and that less restrictive alternatives exist, like device-level parental controls. The Court came down on the states' side.
I have to say, the device-level controls argument has always struck me as a bit of a dodge. Not because the technology doesn't exist, but because it shifts the entire burden onto parents, many of whom are not technically sophisticated enough to implement it. It's like saying we don't need food safety regulations because people can inspect their own groceries.
That's actually a really good analogy. And it gets at something deeper. The argument for unrestricted access has historically relied on a kind of libertarian absolutism about information. Information wants to be free, censorship is always worse than whatever it's censoring, the cure of regulation is worse than the disease of harmful content. But that framework was developed in an era when the internet was text and images. When the primary concern was political speech, not algorithmic amplification of extreme content.
The scale is different. When you and I were growing up, accessing pornography meant finding a magazine or a VHS tape. It was a limited, somewhat embarrassing transaction. Now, any twelve-year-old with a phone can access more explicit content in five minutes than previous generations would encounter in a lifetime. The quantity and extremity are qualitatively different problems.
This is what researchers call the volume and velocity problem. It's not just that the content exists, it's that the platforms are designed to serve increasingly extreme content to maintain engagement. The recommendation algorithms don't have moral boundaries. They just optimize for watch time. And what maximizes watch time, it turns out, is content that pushes boundaries, that shocks, that creates a stronger dopamine response. The system doesn't care whether that's healthy for the viewer or for society.
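To make that mechanic concrete, here's a deliberately crude simulation in Python. It doesn't reflect any real platform's code, and all the numbers are invented. It just demonstrates the loop described above: if predicted engagement peaks slightly beyond the viewer's current tolerance, and each recommendation nudges that tolerance upward, a ranker that serves the most engaging item will escalate on its own.

```python
import math

# Toy escalation model. Every number is invented for illustration.

def predicted_engagement(extremity: float, tolerance: float) -> float:
    """Engagement peaks for content slightly beyond the viewer's current
    tolerance: familiar material bores, far-beyond-tolerance repels."""
    novelty = extremity - tolerance
    if novelty <= 0:
        return 0.0
    return novelty * math.exp(-8.0 * novelty)  # peak at novelty = 0.125


catalog = [i / 100 for i in range(101)]  # extremity scores 0.00 .. 1.00
tolerance = 0.0                          # the viewer's starting baseline

for session in range(8):
    # The ranker has no notion of harm; it just serves the argmax.
    served = max(catalog, key=lambda x: predicted_engagement(x, tolerance))
    print(f"session {session}: served extremity {served:.2f}")
    # Exposure shifts the viewer's baseline, so the next argmax sits a
    # little further out. No one programmed escalation as a goal here;
    # it falls straight out of the objective function.
    tolerance = 0.7 * tolerance + 0.3 * served
```

Run it and the served extremity climbs session after session, which is the whole point.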
Let's go back to Daniel's core question. Are countries actually reconsidering the hands-off approach? It sounds like the answer is clearly yes, but it's not a coordinated global movement. It's a bunch of different jurisdictions arriving at similar conclusions through different paths.
And the motivations differ. In the UK and Europe, the driving concern is child protection. The age verification push is explicitly framed around preventing minors from accessing harmful content. In some US states, there's a moral conservative dimension that's more explicit. In countries like South Korea, it's tied up with concerns about digital sex crimes, which have been a major political issue there after the Nth Room case and similar scandals.
The Nth Room case? Remind listeners what that was.
It was a massive scandal that broke in 2020. Organized rings operating on Telegram were blackmailing women and girls, including minors, into producing sexually explicit content that was then distributed in paid chat rooms. Tens of thousands of people paid to access these rooms. It was a national trauma for South Korea and it fundamentally changed the conversation about online sexual content there. It wasn't about moral panic or religious conservatism. It was about exploitation facilitated by encrypted platforms.
That gets at something Daniel hinted at. The stuff that everyone would agree should be prosecutable. The question is where you draw the line, and whether the platforms have shown themselves capable of drawing it responsibly.
There's a really important distinction here that often gets lost. When people talk about regulating pornography, critics immediately jump to the conclusion that we're talking about banning depictions of consensual sex between adults. But that's not actually what most of the current regulatory efforts are targeting. They're targeting the distribution ecosystem. The platforms that host user-generated content without meaningful verification of consent, age, or the circumstances of production.
This is the distinction between content that is produced ethically and content that is distributed without ethical guardrails. And the platforms have historically resisted taking responsibility for what's on their servers by claiming they're just intermediaries.
Section 230 in the US, the E-Commerce Directive in Europe. These legal frameworks were designed to protect platforms from liability for user-generated content. The idea was that if you held platforms liable for everything users posted, you'd never have user-generated content platforms. But the unintended consequence was that platforms had no legal incentive to police their content, and some of them exploited that to build business models around content that ranged from ethically dubious to outright criminal.
Now we're seeing the pendulum swing. The question is whether it swings too far.
That's the tension. Because you can absolutely imagine a scenario where well-intentioned regulations become tools for censorship. An authoritarian government could use age verification requirements to build databases of who's accessing what. A conservative administration could use obscenity laws to target content they don't like for political reasons. The slippery slope argument isn't entirely without merit.
The slippery slope argument also gets used to block any regulation at all. And the status quo has real victims. Women whose images are shared without consent. Children who are exposed to extreme content before they have the emotional framework to process it. Performers who are exploited or coerced. The current system has costs, and pretending otherwise because regulation might be imperfect is its own kind of ideological blindness.
I think this is where the conversation is actually maturing. Five or ten years ago, the debate was polarized between pro-censorship and anti-censorship camps, and it was largely a proxy for broader culture war dynamics. What's happening now is more nuanced. You're seeing feminist organizations and religious conservatives finding common ground on age verification and consent requirements. You're seeing tech policy experts who are staunchly pro-free-speech acknowledging that algorithmic amplification changes the equation.
Because it's not just about whether content exists. It's about whether it's being pushed to people who didn't seek it out, whether it's being optimized for maximum psychological impact, whether the business model depends on keeping people in a state of compulsive consumption.
And this connects to a broader conversation about platform regulation that goes well beyond pornography. The same algorithmic dynamics that push people toward extreme political content push them toward extreme sexual content. The same engagement optimization that makes social media addictive makes pornography platforms addictive. The underlying mechanics are identical.
What's the state of play for someone who wants to understand where their country stands? Because I think a lot of people assume, as Daniel said, that outside of authoritarian states, pornography is just legal in all forms everywhere. And that's clearly not true.
Let me give you a quick tour of the regulatory landscape as it stands in 2026. The UK has mandatory age verification in effect, enforced by Ofcom. France has similar requirements and has been ordering sites to block French users if they don't comply. Germany has age verification on the books but enforcement has been inconsistent. Australia is in the process of implementing age verification through its eSafety Commissioner. Canada has been debating similar measures. In the US, it's a state-by-state patchwork, though the Supreme Court's Paxton ruling removed the main constitutional obstacle, so the remaining fights are over implementation details and which states choose to enforce.
What about the platforms themselves? How are they responding?
Some of the major platforms have implemented age verification in jurisdictions that require it, using third-party verification services that confirm age without retaining browsing data. Others have simply geo-blocked entire states or countries. Pornhub, for example, has blocked access in several US states rather than comply with verification requirements, which is a business decision that tells you something about their user demographics and revenue model.
It tells you that they've calculated that the cost of losing verified adult users is higher than the cost of losing an entire state's market. Which implies that a significant portion of their traffic comes from people who either can't or won't verify their age.
That's the obvious inference. And it's part of why the regulatory pressure is increasing. When a platform essentially admits, through its business decisions, that age verification would significantly impact its traffic, it raises questions about who that traffic consists of.
Where does this leave Daniel's observation about hypocrisy? The idea that we tut-tut at Iran for blocking pornography while ignoring the problems with what's available in our own systems?
I think the hypocrisy charge has some merit, but it's also too simplistic. Iran's censorship isn't motivated by concerns about consent or exploitation or child protection. It's motivated by a theocratic ideology that views all sexual content as immoral. That's different from Western regulatory efforts that are trying to address specific harms while preserving access to lawful content for adults.
The effect can look similar from the outside. Blocked websites are blocked websites. And I think what Daniel's getting at is that the West's moral authority on this issue is undermined when we simultaneously claim that internet freedom is a universal value and then carve out exceptions that we don't want to examine too closely.
This is the tension at the heart of liberal internet governance. We want to say that the internet should be open and free, but we also recognize that complete openness creates spaces for exploitation and harm. And drawing the line between legitimate regulation and illegitimate censorship is difficult. Anyone who tells you it's simple isn't being honest about the trade-offs.
Let's talk about one more dimension that I think is underexplored. The racial stereotyping point Daniel raised.
That's a huge topic, and it's one that the industry has largely avoided confronting. Pornography categories are notoriously organized around racial stereotypes. The titles, the scenarios, the marketing, all of it draws on and reinforces racial tropes that would be completely unacceptable in any other media context. And because it's pornography, there's been very little public scrutiny of this.
If a mainstream film studio released content that categorized performers by race and built scenarios around racial stereotypes, there would be boycotts and op-eds and probably regulatory attention. But because this is pornography, it exists in a kind of cultural blind spot. We've tacitly agreed not to apply normal standards because it's quote unquote just fantasy.
The platforms profit from those categories. They're not neutral conduits. They're actively organizing content around racial categories because it drives engagement. The search algorithms learn that certain users respond to certain racialized content and they serve more of it. It's the same dynamic as political radicalization, applied to sexual content.
Has there been any movement on this front? Any regulatory attention to the racial dimension?
Most of the regulatory focus has been on age verification and consent. The racial stereotyping issue has been raised by academics and some advocacy groups, but it hasn't translated into policy. Part of the challenge is that it's harder to craft a regulation that addresses stereotyping without running into First Amendment or free expression concerns. You can require age verification. It's much harder to require that content not be racist.
Because that gets into content-based regulation, which is the thing that free speech law is most skeptical of.
The legal framework in the US is particularly hostile to content-based restrictions. You can regulate the time, place, and manner of speech, but regulating the content itself is subject to strict scrutiny and almost always fails unless it falls into a recognized exception like obscenity or incitement.
Racial stereotyping in pornography doesn't fit neatly into any of those exceptions.
It doesn't. Which means that even if there were political will to address it, the legal path would be extremely difficult in the US. In Europe, hate speech laws might theoretically apply, but in practice they haven't been used against pornographic content in any systematic way.
Where does all of this leave us? What's the takeaway for someone who's trying to understand the actual state of global pornography regulation?
I think there are three big trends. First, age verification is becoming the default regulatory approach in liberal democracies, and it's probably here to stay. The technology is improving, the privacy concerns are being addressed, and the political coalition behind it is broadening. Second, the platforms are being forced to take responsibility for what's on their servers in ways they never have before. The era of claiming to be a neutral intermediary is ending. And third, there's a growing recognition that the anything-goes model has real social costs, and that addressing those costs doesn't require embracing authoritarian censorship.
Iran is the counterexample that clarifies what's at stake. A three-month nationwide internet blackout is not regulation. It's information warfare against a country's own population. The fact that pornography is blocked in Iran isn't the issue. The issue is that the Iranian government has demonstrated its willingness to sever its citizens from the global internet entirely to maintain political control. That's a fundamentally different project than requiring platforms to verify that their users are adults.
I think that's the distinction Daniel was circling around. There's a difference between saying, we need to address the real harms associated with unregulated pornography distribution, and saying, we're going to control what information our citizens can access. The former is a legitimate policy debate. The latter is authoritarianism.
The challenge for liberal democracies is to pursue the former without sliding into the latter. To regulate without censoring. To protect without controlling. It's a hard line to walk, and different countries are finding different equilibria.
I think there's also a cultural dimension here that we haven't fully addressed. The reason pornography became a bellwether for internet freedom in the first place is that it's the content that's easiest to censor without political controversy. Nobody wants to be the politician defending pornography. So if a government wants to test its censorship infrastructure, or establish the principle that it can block content, pornography is the obvious starting point. It's the canary in the coal mine, not because it's the most important content, but because it's the least defended.
That's a really important point. And it's why even people who are personally uncomfortable with pornography should be wary of censorship regimes that start there. The machinery of content blocking, once established, rarely stays confined to its initial targets.
What should a listener actually do with all of this? If Daniel's question is, what's the actual state of play, we've answered that. But what's the practical implication?
I think the practical takeaway is that this is an area where the simplistic narratives on both sides are wrong. The narrative that any regulation of pornography is the first step toward authoritarianism is wrong. The evidence from the UK and Europe suggests that you can implement age verification without creating a surveillance state. But the narrative that unrestricted access is harmless is also wrong. The evidence on the effects of exposure to extreme content, particularly on minors, is substantial and growing.
For parents specifically?
For parents, the practical reality is that you can't outsource this to regulation. Even with age verification laws in place, determined teenagers will find ways around them. The most effective intervention is still conversation. Talking to your kids about what they might encounter online, helping them develop the critical framework to process it, making it clear that they can come to you with questions without shame. Regulation can reduce the ambient risk, but it can't replace parenting.
That's probably the most important thing we've said. The policy debate matters, but at the individual level, it's still about human relationships and communication.
That's not a cop-out. It's just the reality. Technology moves faster than regulation. Platforms optimize for engagement, not wellbeing. The only durable defense is helping people, especially young people, develop their own capacity for discernment.
And now, Hilbert's daily fun fact.
Did you know that the average cumulus cloud weighs about one point one million pounds? That's roughly the weight of a hundred elephants, floating above your head.
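For the skeptics, that factoid is easy to sanity-check. The standard back-of-envelope assumes a cloud volume of about one cubic kilometre and roughly half a gram of liquid water per cubic metre of air; those are ballpark figures, not measurements of any particular cloud.

```python
# Back-of-envelope check on the cumulus fact.
volume_m3 = 1e9               # about 1 cubic kilometre of cloud
water_g_per_m3 = 0.5          # typical liquid water content of a cumulus
water_kg = volume_m3 * water_g_per_m3 / 1000
print(f"{water_kg * 2.20462:,.0f} lbs")  # ~1,102,310, about 1.1 million
```

That's half a million kilograms of water held aloft by updrafts, and it prices each of those hundred elephants at about eleven thousand pounds, which is a perfectly respectable adult elephant.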
If you're trying to make sense of where your own country stands on this issue, the most useful thing you can do is look up whether there's age verification legislation pending or in effect in your jurisdiction. It's not hard to find, and it'll tell you a lot about where the political conversation is heading. Beyond that, if you're a parent, have the conversation. If you're not, pay attention to what the platforms are doing in response to regulation, because their behavior tells you more about their business models than their press releases ever will.
I think the forward-looking question here is how far the post-Paxton wave of state age verification laws spreads, and whether the US eventually replaces dozens of state variations with a federal standard. Broad adoption would accelerate the trend significantly. A fragmented patchwork would keep the pressure on Congress and on the platforms to find a workable national approach. Either way, the conversation isn't going away.
This has been My Weird Prompts, produced by the indefatigable Hilbert Flumingtop. If you want more episodes, you can find us at myweirdprompts.com or wherever you get your podcasts. Leave us a review if you're so inclined. For Herman Poppleberry, I'm Corn. We'll be back soon.