Daniel sent us this one — and it's actually a natural follow-up to something we talked about recently. All those unauthenticated endpoints floating around the public internet. We said pulling data from them isn't hacking, it's just tapping into open data streams that probably weren't meant to be found. But that raises the obvious next question. What actually IS legally definable hacking? Daniel wants us to walk through the origins of cybercrime prosecution — from the early laws like the U.S. Computer Fraud and Abuse Act and its international counterparts, through the landmark cases that drew the line around unauthorized access, right up to where that line sits today between curious URL-poking and felony intrusion. And then the flip side. How does law enforcement actually stand up cybercrime units to outsmart real hackers? How are these teams recruited, trained, and resourced, and how do they keep pace with adversaries who move at internet speed?
This is one of those areas where the legal framework was built for a world that barely exists anymore, and yet we're still operating under it. Also, quick note — DeepSeek V four Pro is writing our script today, so if anything sounds unusually coherent, that's why.
I'll take that as a compliment to our usual incoherence.
So let's start with the Computer Fraud and Abuse Act, the CFAA. This is the granddaddy of cybercrime law in the United States, passed in 1986. And the context matters enormously here. This was the era of the movie WarGames. Matthew Broderick, dial-up modems, global thermonuclear war. Congress was responding to a very specific cultural panic about teenage hackers breaking into government systems.
Which is already telling you something about the law's DNA. It was born from a moral panic, not a careful taxonomy of computer access.
The CFAA was actually an amendment to an existing 1984 computer fraud statute, and it made it a federal crime to access a computer without authorization or to exceed authorized access. Those two phrases — "without authorization" and "exceeds authorized access" — have been litigated for nearly four decades now, and we still don't have a perfectly clean definition of either one.
That ambiguity is the whole ballgame, isn't it? Because "without authorization" sounds straightforward until you ask what counts as authorization. Is it a terms of service document nobody reads? Is it a robots dot txt file? Is it the fact that a server didn't bother to put up a login page?
And the early cases were mostly clear-cut. The first major prosecution under the CFAA was the Morris worm case in 1988. Robert Tappan Morris, a Cornell graduate student, released a worm that accidentally brought down about ten percent of the internet. He was convicted under the CFAA, sentenced to probation and community service. That one was straightforward — he clearly accessed systems without authorization and caused damage.
The law started getting stretched almost immediately. By the late nineties and early two thousands, prosecutors were using the CFAA in ways that made a lot of people uncomfortable. The most famous example is probably the Lori Drew case.
Oh, this one is infamous. Two thousand eight. Lori Drew created a fake MySpace account, used it to harass a thirteen-year-old girl named Megan Meier, and Megan tragically took her own life. The local prosecutor couldn't find a state crime that fit, so federal prosecutors charged Drew under the CFAA. The theory was that she violated MySpace's terms of service, and therefore her access was "unauthorized" or "exceeded authorized access."
Which is an absolutely terrifying legal theory when you think about it. If violating a website's terms of service is a federal crime, then basically everyone who's ever lied about their age on a sign-up form or used a fake name on social media is a federal criminal.
The judge in that case ultimately threw out the conviction. The jury had convicted on misdemeanor counts, but the judge granted a motion for acquittal, basically saying that treating terms of service violations as federal crimes would render the CFAA unconstitutionally vague. You can't criminalize conduct that ordinary people wouldn't understand to be criminal.
That was a near-miss. But the underlying statute didn't change. It still has that same vague language. And it's been used in some pretty aggressive ways since then. Aaron Swartz comes to mind.
Yeah, the Swartz case is the one that really galvanized public attention. Two thousand eleven, Aaron Swartz — a prominent programmer and activist, co-founder of Reddit, major contributor to RSS and Creative Commons — he downloaded millions of academic journal articles from JSTOR using MIT's network. He had legitimate access to JSTOR as a Harvard fellow, but he was scraping at a scale that violated the terms, and he kept evading MIT's attempts to block him. Federal prosecutors charged him with multiple CFAA counts, and he faced up to thirty-five years in prison and a million dollars in fines.
He took his own life in two thousand thirteen while the case was pending. That case became a rallying cry for CFAA reform. But here's what I think a lot of people miss about it. Swartz's technical actions — bypassing MAC address blocks, plugging into network closets — those are different from what we were talking about with unauthenticated endpoints. He was actively circumventing access controls. That's a meaningful distinction.
It is, and that distinction is central to how courts have tried to draw the line. Which brings us to one of the most important recent cases on this question — hiQ Labs versus LinkedIn. This ran from twenty seventeen through twenty twenty-two, and it addressed the exact scenario we've been talking about. hiQ was a data analytics company that scraped publicly available LinkedIn profiles. No password cracking, no bypassing authentication. Just collecting data that anyone with a browser could see. LinkedIn sent a cease-and-desist letter, argued this violated the CFAA, and also deployed technical blocking measures.
HiQ sued for an injunction, arguing that if the data is publicly accessible, accessing it can't be "without authorization" under the CFAA.
The Ninth Circuit Court of Appeals initially sided with hiQ in twenty nineteen, ruling that when a website makes data publicly available without any authentication requirement, accessing that data is not "without authorization" under the CFAA, even if the website owner objects. The court was particularly concerned that if merely sending a cease-and-desist letter could turn public web scraping into a federal crime, companies would have enormous power to control who can access publicly available information.
Which is exactly the point we made in our earlier discussion. If you put data on a public-facing server with no authentication, you've published it. The genie's out of the bottle.
The story doesn't end there. In twenty twenty-one, the Supreme Court vacated the Ninth Circuit's ruling and sent it back for reconsideration in light of the Van Buren case. Then in twenty twenty-two, the Ninth Circuit issued a new ruling that actually reaffirmed its earlier position — hiQ's scraping of public LinkedIn profiles did not violate the C. , and it affirmed a permanent injunction against LinkedIn.
The current state of play, at least in the Ninth Circuit, is that scraping publicly accessible data is not a CFAA violation. But that's one circuit. What about the rest of the country?
That's the problem. We don't have a uniform national rule. And the Supreme Court has weighed in on one piece of this with Van Buren versus United States in twenty twenty-one, a landmark decision that narrowed the CFAA in an important way.
Van Buren is the case everyone should know about. Walk us through it.
Nathan Van Buren was a police sergeant in Georgia. He had legitimate credentials to access a law enforcement database. An informant asked him to run a license plate search in exchange for money. He did it. He was charged under the "exceeds authorized access" prong of the CFAA. The government's theory was that he was authorized to access the database for law enforcement purposes, but accessing it for personal profit exceeded that authorization.
Which sounds plausible on its face. If you're a cop and you use the police database to help a criminal, that feels like it should be illegal.
Right, and there are other laws that cover that — bribery statutes, corruption laws. But the question for the Supreme Court was specifically about the CFAA. And in a six-to-three decision, the Court ruled that "exceeds authorized access" refers to accessing areas of a computer system that are off-limits to you, not to accessing areas you're permitted to access but for an improper purpose.
The key distinction is gates, not motives. If you have the technical authorization to access a file, accessing it doesn't become a CFAA violation just because your reasons are bad.
Justice Barrett wrote the majority opinion, and she was very clear: the statute is about whether you're allowed to be where you are in the system, not about why you're there. This rejected the government's long-standing interpretation that any violation of a use policy or terms of service could be a federal crime.
Van Buren draws a relatively bright line. Authorization is about technical access controls, not about purpose. Which means if a server doesn't have an authentication gate, you're authorized to access what's there. That seems to align with the hiQ outcome.
It does, but there are still open questions. What about when a website deploys technical blocks, like IP address blocking or rate limiting, and you circumvent them? That was part of the hiQ case too, and the Ninth Circuit said that hiQ's circumvention of LinkedIn's technical blocks didn't change the analysis because the data was still publicly accessible. But other courts might see that differently.
Let's broaden out. We've been focused on the U.S., but Daniel asked about the international landscape too. What does this look like in the U.K.?
The U.K. has the Computer Misuse Act of nineteen ninety, which is broadly similar to the CFAA. It criminalizes unauthorized access to computer material, unauthorized access with intent to commit further offenses, and unauthorized acts with intent to impair the operation of a computer. The key phrase again is "unauthorized."
I remember there was a case where a British man accessed a website by changing a number in the URL — basically an IDOR vulnerability, insecure direct object reference. And he was prosecuted. That felt aggressive.
That's a recurring pattern in U.K. prosecutions. Multiple cases where individuals were prosecuted under the Computer Misuse Act for what security researchers would call trivial access — changing a digit in a URL to see another user's data. The U.K. doesn't have the equivalent of Van Buren's narrowing. The Crown Prosecution Service has taken a relatively broad view of what counts as unauthorized access.
Which creates a real chilling effect for security researchers in the U.K. If you find an IDOR vulnerability and test it by changing one URL parameter, you might have just committed a crime. Meanwhile, the actual criminals are exploiting the same vulnerability at scale.
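A quick sketch for the show notes of what an IDOR actually is, since it keeps coming up. This is a minimal Python illustration, not any real site's code — the invoice endpoint and the data are invented. The entire bug is the absence of an ownership check, which is why "hacking" it amounts to changing one number:

```python
# Minimal sketch of an insecure direct object reference (IDOR).
# The endpoint, record IDs, and users are hypothetical illustrations.

INVOICES = {
    101: {"owner": "alice", "amount": 120},
    102: {"owner": "bob", "amount": 4500},
}

def get_invoice_vulnerable(session_user, invoice_id):
    # BUG: returns any record that exists. There is no check that the
    # logged-in user actually owns invoice_id, so "?id=101" vs "?id=102"
    # is the whole access control.
    return INVOICES.get(invoice_id)

def get_invoice_fixed(session_user, invoice_id):
    # The fix: an explicit authorization gate on every lookup.
    record = INVOICES.get(invoice_id)
    if record is None or record["owner"] != session_user:
        return None  # deny records the requesting user doesn't own
    return record

# "Changing one digit in the URL": alice requests bob's invoice.
leaked = get_invoice_vulnerable("alice", 102)  # bob's record comes back
denied = get_invoice_fixed("alice", 102)       # None: access refused
```

The legal ambiguity discussed above maps directly onto this code: in the vulnerable version there is no technical gate at all, which is exactly the situation the Van Buren and hiQ courts treated differently from defeating a real access control.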
The European Union took a somewhat different approach with the Directive on Attacks Against Information Systems, adopted in twenty thirteen and updated since. It requires member states to criminalize illegal access to information systems, but it specifically says that the offense must be committed by infringing a security measure. So there's an explicit requirement that some kind of security barrier was crossed. That's closer to the Van Buren approach than the broader U.K. standard.
What about Israel? Daniel's sitting in Jerusalem, so he's probably curious about how this plays out under Israeli law.
Israel's Computers Law from nineteen ninety-five defines unauthorized access as accessing a computer without permission, and the courts have interpreted "without permission" to include accessing material beyond what was explicitly permitted. There's a notable case from the early two thousands where the Israeli Supreme Court dealt with an employee who accessed his employer's computer system for personal purposes, and the court drew a distinction between technical access authority and purpose-based restrictions. It's evolved in a direction somewhat similar to the U.S., though without a single definitive Supreme Court ruling like Van Buren.
The global picture is a patchwork. The U.S. has moved toward a technical-access-controls standard, the E.U. explicitly requires breaching a security measure, the U.K. remains relatively broad and aggressive, and Israel is somewhere in the middle. That's a lot of legal risk for anyone doing security research or even just curious poking around.
That's before we even get to how these laws interact with bug bounty programs and authorized security research. Most companies now have vulnerability disclosure policies that say "if you find a bug, tell us and we won't sue you." But those are contractual promises, not legal safe harbors. The Department of Justice has issued guidance saying it won't prosecute good-faith security research under the CFAA, but guidance isn't law, and it can change with a new administration.
The legal landscape is a mess, which brings us to the second half of Daniel's question. Given all this complexity, how does law enforcement actually build units capable of investigating and prosecuting cybercrime? Because the stereotype of the technologically illiterate cop fumbling through a digital investigation isn't entirely wrong, especially in the early days.
The transformation over the past twenty years has been dramatic. The FBI began building dedicated cyber squads in the late nineties, but the real turning point was the creation of the National Cyber Investigative Joint Task Force in two thousand eight. This brought together the FBI, Secret Service, NSA, and other agencies under one roof to coordinate cyber investigations.
The Secret Service has a cyber role too, which surprises people who think they just guard the president.
The Secret Service actually has one of the oldest federal cybercrime mandates, dating back to the original computer fraud statutes. They focus on financial crimes, so they've been doing digital forensics since the eighties. Their Electronic Crimes Task Forces, which started in New York in nineteen ninety-five, were some of the first multi-agency cybercrime units in the world.
Here's what I want to know. How do they recruit? Because the salary differential between a talented security researcher and a federal agent is enormous. Someone who can reverse-engineer malware can make three or four times as much in the private sector.
This is the fundamental challenge, and agencies have gotten creative. The FBI runs a cyber internship program that recruits from universities, and they've created special hiring authorities that allow them to bring in technical talent at higher pay grades than standard federal hiring would permit. Cyber Command and the NSA have similar programs. But the real answer for many agencies is that they don't try to out-compete the private sector on salary. They compete on mission.
Meaning they sell the sense of purpose. You're defending the country. You're taking down child exploitation networks. You're disrupting ransomware gangs that are attacking hospitals.
And for a certain kind of person, that genuinely works. I've read profiles of agents who left six-figure security jobs to join the FBI's cyber division because they wanted to do something more consequential than protecting a corporation's bottom line. But it's not enough to fill all the gaps. There's a persistent shortage of technical talent in law enforcement cyber units.
What about training? Because even if you recruit someone with raw technical talent, investigating cybercrime requires a different skill set than penetration testing. You need to understand evidence handling, chain of custody, legal procedures.
The training pipeline has become quite sophisticated. The FBI runs its own cyber training academy at Quantico, where new cyber agents go through a specialized curriculum covering digital forensics, network analysis, malware reverse engineering, and the legal framework for cyber investigations. The Secret Service has the National Computer Forensics Institute in Alabama, which trains not just federal agents but also state and local law enforcement.
State and local is a whole other problem. The FBI and Secret Service can at least attract top talent. Your average county sheriff's department doesn't have a prayer of hiring someone who understands memory forensics.
Right, and that's where the task force model becomes critical. The FBI's Cyber Task Forces embed federal agents with state and local departments, so a local police department investigating a ransomware attack on a school district can access federal expertise and resources. There are currently about fifty-six FBI field offices with cyber squads, and each one runs a task force that includes state and local partners.
Let's talk about the international dimension, because cybercrime doesn't respect borders. A ransomware gang might be based in Russia, using infrastructure in the Netherlands, targeting a hospital in Ohio. How do you investigate that?
This is where Europol's European Cybercrime Centre, EC3, comes in. Established in two thousand thirteen in The Hague, it serves as the coordination hub for cybercrime investigations across E.U. member states. They don't have their own arrest powers — they facilitate cooperation between national police forces, providing analytical support, forensic expertise, and running joint operations.
They've had some notable successes. I remember the takedown of the Avalanche network a few years back.
Avalanche was a massive operation in twenty sixteen — a global botnet infrastructure used for phishing, malware distribution, and money laundering. The operation involved law enforcement from more than thirty countries, coordinated through Europol and the U.S. Department of Justice. They seized servers, arrested key operators, and sinkholed the botnet's command-and-control infrastructure. That kind of operation is only possible with the international coordination that Europol enables.
Here's the thing that keeps me up at night. The Avalanche takedown took years of planning. Meanwhile, the adversaries are innovating constantly. Ransomware-as-a-service, crypters that evolve faster than antivirus signatures, infrastructure that spins up and down in hours. Law enforcement operates on a timeline of months and years. Cybercriminals operate on a timeline of hours and days.
The asymmetry is real, and it's probably the single hardest problem in cybercrime enforcement. There's a concept from military theory called the OODA loop — observe, orient, decide, act. The speed at which an organization can cycle through that loop determines its effectiveness. Cybercriminals have an incredibly tight OODA loop. Law enforcement, by its nature, has a much slower one.
Because they have to get warrants, coordinate across agencies, build cases that will hold up in court. A ransomware gang just has to find one unpatched server.
And the legal frameworks create friction that the criminals don't face. If the FBI wants to seize a server in Germany, they need to go through the mutual legal assistance treaty process, which can take months. A criminal group just rents a server in Germany with stolen credentials and they're operational in minutes.
How do they compensate for that?
A few approaches. One is disruption over prosecution. Rather than trying to build a perfect case that will lead to a conviction three years from now, agencies increasingly focus on disrupting criminal infrastructure in real time. Seize the domains, sinkhole the botnet, freeze the cryptocurrency wallets. Even if you can't arrest anyone, you can raise the cost of doing business.
Which is a significant shift in law enforcement philosophy. Traditionally, the goal was always prosecution. Disruption as an end in itself is relatively new.
It's controversial in some circles. Civil liberties advocates worry about law enforcement taking disruptive actions without the due process that a criminal prosecution would require. But from a practical standpoint, when a hospital is being held hostage by ransomware, you can't wait three years for a trial.
The other approach I've seen is embedding technical talent directly into investigative teams, rather than having a separate forensics lab that handles evidence after the fact. Get the security researcher in the room while the investigation is unfolding.
That's exactly what the FBI's cyber squads do. A cyber squad typically includes special agents, intelligence analysts, and computer scientists all working together. The computer scientists are not sworn agents — they're civilian technical experts — but they're embedded in the investigative process from day one. That model has been adopted by most major cybercrime units globally.
Let's talk about one more thing that I think is underappreciated. The role of private sector partnerships. A huge amount of cybercrime investigation depends on information from companies like Microsoft, Google, Cloudflare, and the major cybersecurity firms.
The internet's infrastructure is almost entirely privately owned, so law enforcement has no choice but to work with the private sector. Microsoft's Digital Crimes Unit has been involved in some of the largest botnet takedowns. They have the technical visibility into their own infrastructure that law enforcement can't get without a warrant, and they can act faster than the government can. The Necurs botnet takedown, the TrickBot disruption — these were public-private partnerships.
Which raises its own set of questions about accountability and oversight. If a private company is effectively acting as an arm of law enforcement, what due process protections apply? But that's probably a whole separate episode.
It is, and it's a deep one. Let me touch on one more aspect of law enforcement's technical capability that often gets overlooked. The rise of cryptocurrency tracing. Five years ago, the conventional wisdom was that Bitcoin was anonymous and law enforcement couldn't follow the money. That turned out to be completely wrong.
Because the blockchain is a permanent public ledger. Every transaction is recorded forever.
And companies like Chainalysis, CipherTrace, and Elliptic have built sophisticated tools that can trace cryptocurrency flows through mixers, tumblers, and across chains. The Colonial Pipeline ransomware attack in twenty twenty-one — the FBI recovered a significant portion of the ransom payment because they were able to trace it through the blockchain and seize the wallet. That would have been unthinkable five years earlier.
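For the show notes, the core tracing idea is simpler than it sounds: a public ledger is a directed graph of payments, and following the money is graph traversal. Here's a toy Python sketch — the addresses and transactions are entirely invented, and real tools layer clustering heuristics and exchange attribution on top of this — but the skeleton is just a breadth-first walk forward from a known ransom wallet:

```python
from collections import deque

# Toy transaction graph: (sender, receiver) edges on a public ledger.
# All addresses and hops here are invented for illustration.
TRANSACTIONS = [
    ("ransom_wallet", "mixer_in"),
    ("mixer_in", "mixer_out_1"),
    ("mixer_in", "mixer_out_2"),
    ("mixer_out_1", "exchange_deposit"),
]

def trace_forward(start):
    """Breadth-first walk following funds from `start` through every
    recorded hop; returns the set of downstream addresses reached."""
    graph = {}
    for sender, receiver in TRANSACTIONS:
        graph.setdefault(sender, []).append(receiver)

    seen, queue = set(), deque([start])
    while queue:
        addr = queue.popleft()
        for nxt in graph.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

downstream = trace_forward("ransom_wallet")
# If "exchange_deposit" shows up downstream, that's an address at a
# regulated exchange -- a point where a subpoena or seizure can attach.
```

The reason this works at all is the point made above: every transaction is recorded forever, so the graph never forgets an edge, no matter how many hops the funds take.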
The technical capability is actually evolving faster on the law enforcement side than the public narrative suggests. They're not as slow as they used to be.
They're not, but the gap is still there. We're seeing ransomware groups run professional HR departments, with paid time off and performance bonuses. They're operating like startups. And state-sponsored groups have resources that rival small nations.
Which brings me to a question I think is worth sitting with. If the legal framework for defining hacking is a patchwork of ambiguous statutes from the nineteen eighties and nineties, and law enforcement is perpetually playing catch-up with adversaries who innovate constantly, are we actually losing this fight? Or is the glass half full?
I think the honest answer is that it depends on what you're measuring. If you're measuring convictions of major cybercriminals, the numbers are not encouraging. The vast majority of cybercriminals operating from non-extradition countries will never see the inside of a courtroom. But if you're measuring disruption of criminal infrastructure and raising the cost of operations, there's been real progress. The average lifespan of a phishing domain has dropped dramatically. Botnet takedowns are more frequent and more effective. Ransomware payments have become harder to launder.
We're getting better at making cybercrime harder and more expensive, even if we're not getting better at putting people in prison. That's a nuanced outcome, but it might be the realistic one.
I'd add one more thing. The legal definition of hacking, for all its flaws, has actually become clearer over the past decade. Van Buren drew a line. The hiQ case drew a line. These aren't perfect lines, and there are still gray areas, but we're in a better place than we were fifteen years ago when merely violating a terms of service could theoretically land you in federal prison.
The practical takeaway for anyone listening who's curious about security research or just likes poking around the internet: understand where the lines are, and understand that those lines vary by jurisdiction. In the U.S., the trend is toward a technical-access-controls standard. If there's no authentication gate, accessing data probably isn't a CFAA violation. But that's not a guarantee, and it's not the law everywhere. The U.K. takes a broader view. And even in the U.S., actively circumventing technical blocks — not just accessing open endpoints, but defeating IP bans or rate limits — puts you in murkier territory.
If you're doing security research, do it through established channels when possible. Bug bounty programs, vulnerability disclosure policies. These provide at least some contractual protection, even if they're not legal safe harbors. The Justice Department's policy of not prosecuting good-faith security research is helpful, but it's a policy, not a law.
Now: Hilbert's daily fun fact.
The average cumulus cloud weighs approximately one point one million pounds, roughly the same as one hundred elephants. It stays aloft because the weight is distributed across millions of tiny water droplets spread over a vast volume of air.
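For the skeptics, the fun fact survives a back-of-the-envelope check. Assuming a roughly one-cubic-kilometer cumulus cloud and a typical liquid water content of about half a gram per cubic meter (both are standard ballpark figures, not measurements of any particular cloud):

```python
# Back-of-the-envelope check of the cumulus cloud weight claim.
# Assumed inputs: ~1 km^3 cloud volume, ~0.5 g liquid water per m^3.
cloud_volume_m3 = 1.0e9           # 1 km x 1 km x 1 km
water_density_g_per_m3 = 0.5      # typical cumulus liquid water content

mass_kg = cloud_volume_m3 * water_density_g_per_m3 / 1000  # grams -> kg
mass_lb = mass_kg * 2.20462                                # kg -> pounds

# At roughly 11,000 lb (~5 metric tons) per large elephant:
elephants = mass_lb / 11_000
```

That lands right around 1.1 million pounds and about a hundred elephants, which is where the figure in the fact comes from.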
What can listeners actually do with all this? First, if you're building systems, be intentional about what's public and what's not. The legal framework increasingly treats an unauthenticated endpoint as a published resource. If you don't want the world to see it, put it behind authentication. Second, if you're a security researcher or just a curious person, know the legal landscape in your jurisdiction. The U.S. trend is toward protecting access to publicly available data, but that's not universal. Third, if you're interested in the law enforcement side, the agencies are hiring. They need technical talent, and they've made the application process more accessible than it used to be.
If you're just a regular person trying to understand why cybercrime seems so hard to stop, I think the key insight is the asymmetry. Law enforcement operates within legal frameworks designed to protect civil liberties, which is a feature, not a bug. But those frameworks create friction that criminals don't face. The realistic goal isn't to eliminate cybercrime — it's to make it harder, more expensive, and less profitable, and on that front, there's been measurable progress.
One open question I'm left with. As artificial intelligence becomes more capable, both sides of this equation are going to change. Criminals will use AI to automate attacks at a scale we haven't seen. Law enforcement will use AI to analyze evidence and trace criminal infrastructure. Which side benefits more from that technological shift? I honestly don't know, but I suspect it's going to define the next decade of this fight.
And I think it cuts both ways. AI lowers the barrier to entry for attackers — it makes sophisticated attacks accessible to people with limited technical skills. But it also has the potential to supercharge law enforcement's analytical capabilities. Imagine being able to feed an AI system the entire blockchain and have it automatically trace ransomware payments through every mixer and tumbler in real time. That's not science fiction. That's where we're heading.
Thanks to Hilbert Flumingtop for producing. This has been My Weird Prompts. If you want more episodes, find us at myweirdprompts dot com or wherever you get your podcasts. We'll be back with another one soon.