Herman, I saw you messing with that camera in the hallway earlier. Are you trying to catch a ghost or just making sure Daniel doesn't steal your snacks?
Herman Poppleberry here, and for the record, the snacks are safe. But honestly, Daniel's prompt today actually made me look at those cameras in a whole new light. I was just checking the firmware and making sure the local encryption was actually, you know, doing its job. I realized I hadn't updated the security patches on our internal network since late twenty twenty-five, and in this climate, that is basically leaving the front door wide open.
It is a bit unnerving, isn't it? Today's prompt from Daniel is about the potential for deepfakes and digital twins, and whether this horror movie scenario of being cloned using just a few photos and some captured audio is already becoming our reality. He even mentioned writing a horror script about fake job interviews being used as a front for identity theft. It is one of those ideas that feels like it should be five years away, but the more we look into it, the more it feels like it happened yesterday.
That is such a plausible premise, which is exactly why it is terrifying. We have moved past the era where deepfakes were just funny videos of celebrities saying silly things or putting Nicolas Cage's face in every movie ever made. We are now in a space where the barrier to entry for creating a high fidelity digital clone of a private citizen has dropped through the floor. We are talking about the democratization of high-end surveillance tech.
Exactly. Daniel mentioned Low Rank Adaptation, or LoRA, for image generation and voice cloning. For those who might not be deep in the machine learning weeds, can you explain why LoRA is such a game changer for this specific threat? Because I remember when making a deepfake required a literal server farm and a PhD.
Right, so, in the early days of generative artificial intelligence, if you wanted to teach a model what a specific person looked like, you had to retrain a massive chunk of the neural network. That required a ton of computing power, thousands of images from every angle, and days of processing time. But Low Rank Adaptation is basically a mathematical shortcut. Instead of changing all the billions of weights in a massive foundation model like Stable Diffusion or the newer Flux models we are seeing here in twenty twenty-six, you are essentially adding a tiny, lightweight layer on top of it. It is like a specialized filter or a plugin that tells the model, hey, when you generate a person, make them look exactly like Corn. It is a file that is often less than a hundred megabytes, but it carries the entire essence of your visual identity.
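For listeners who want to see the mechanics, here is a minimal NumPy sketch of the low-rank idea Herman is describing: the big weight matrix stays frozen, and only two tiny matrices are trained. The dimensions, the rank of eight, and the scaling factor are illustrative assumptions, not the configuration of any real model.

```python
# Minimal NumPy sketch of the Low Rank Adaptation (LoRA) idea.
# A frozen weight matrix W from a foundation model gets a low-rank update B @ A;
# only A and B are trained, and they are tiny compared to W.
import numpy as np

d_out, d_in, rank = 1024, 1024, 8          # rank 8 is a typical small LoRA rank

W = np.random.randn(d_out, d_in)           # frozen pretrained weights (never updated)
A = np.random.randn(rank, d_in) * 0.01     # trainable "down" projection
B = np.zeros((d_out, rank))                # trainable "up" projection, starts at zero

def adapted_forward(x, alpha=16.0):
    """Forward pass with the adapter folded in: (W + (alpha / rank) * B @ A) @ x."""
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = np.random.randn(d_in)
y = adapted_forward(x)

# The full matrix holds d_out * d_in ≈ 1.05 million values, while the adapter
# holds only rank * (d_in + d_out) ≈ 16 thousand, which is why an entire
# visual identity can fit in a file well under a hundred megabytes.
print(W.size, A.size + B.size)
```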
And you only need a handful of photos for that? I think Daniel mentioned a few dozen.
Precisely. We are talking maybe twenty to thirty decent images. And they don't even have to be professional headshots. If you have been on LinkedIn, or if you have a public Instagram, or if you just walk past a high resolution security camera in a well lit shop, someone can easily get those twenty photos. The math behind LoRA allows the model to learn your specific facial features, your bone structure, even the way your hair falls or the specific way your eyes crinkle when you laugh, with incredibly high precision. By twenty twenty-five, the algorithms became so efficient that they could extract a three-dimensional understanding of your face from just a few two-dimensional snapshots.
And the voice side of things? Daniel mentioned a couple of minutes of audio, but I have seen claims that some models can do it with even less now.
Oh, the voice cloning technology has moved even faster than the image stuff. There are models now, like the latest versions from ElevenLabs or the open source variants like Tortoise and OpenVoice, that can get a very convincing likeness from just thirty seconds of clean audio. If you have ever posted a video of yourself speaking on TikTok, or if you take a phone call in public where someone is recording, you have provided enough data for a clone. They capture your prosody, your accent, your breathing patterns. It is not just a robotic imitation anymore; it is a functional vocal twin.
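As a rough illustration of why thirty seconds is plenty, here is a toy Python sketch using the librosa library. It is not a cloning pipeline, and the file names are hypothetical placeholders; real systems use learned speaker embeddings rather than averaged MFCC features. The point is simply how quickly a short clip condenses into a reusable numerical signature of a voice.

```python
# Crude "voiceprint" from ~30 seconds of audio: average MFCC features and compare
# speakers with cosine similarity. Real voice cloning uses far richer embeddings,
# but the same short clip is enough raw material for both.
import numpy as np
import librosa

def voiceprint(path, seconds=30.0):
    y, sr = librosa.load(path, sr=16000, duration=seconds)   # load ~30 s of audio
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)       # (20, n_frames) features
    return mfcc.mean(axis=1)                                  # one 20-dim summary vector

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical files: two clips of the same person versus a different person.
same_speaker = cosine(voiceprint("alice_interview.wav"), voiceprint("alice_tiktok.wav"))
diff_speaker = cosine(voiceprint("alice_interview.wav"), voiceprint("bob_podcast.wav"))
print(f"same speaker: {same_speaker:.2f}, different speaker: {diff_speaker:.2f}")
```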
So, to Daniel's question: Is this already happening? Because the horror script idea about fake job interviews sounds like a perfect way to get that clean audio and those high quality headshots. You have a candidate sitting in front of a high definition webcam for forty-five minutes, talking about their life and their work. They are literally handing over the keys to their digital identity.
It is absolutely happening. In fact, it is not just a theory anymore. Back in February of twenty twenty-four, there was a massive story out of Hong Kong where a multinational firm was scammed out of twenty-five million dollars. The scammers used deepfake technology to pose as the company's Chief Financial Officer and several other employees in a video conference call. The one real person on that call, a clerk, felt like something was a bit off, but because everyone else on the screen looked and sounded like his colleagues, he went ahead and authorized several bank transfers. That was the first "deepfake heist" of that scale, but it certainly wasn't the last.
That is incredible. And that was two years ago. The technology has only become more accessible and more refined since then. I think what most people don't realize is that these attacks don't require some elite state sponsored hacking group anymore. The tools are open source. You can run them on a high end gaming laptop in your bedroom.
That is the part that keeps me up at night. It is the democratization of deception. When Daniel talks about the privacy implications of Internet Protocol cameras being everywhere, he is touching on the supply chain for this data. Think about smart doorbells, or the cameras in the grocery store that use facial recognition for sentiment analysis or to track your shopping habits. If that data isn't perfectly secured—and let's be honest, it rarely is—it is a goldmine for anyone wanting to build a digital twin of you. We saw a leak in late twenty twenty-five from a major retail analytics firm where over ten million facial signatures were exposed. Those weren't just photos; they were mathematical maps of people's faces.
It feels like we are leaving a trail of digital DNA everywhere we go. But I want to push on the job interview angle. That feels like the ultimate social engineering hack. You are not just getting the biometrics; you are getting the person's professional history, their speech patterns, their mannerisms, and you are doing it in a context where they are incentivized to be open and honest. They want to impress the interviewer, so they are leaning in, speaking clearly, and showing a range of expressions.
Right, and in a job interview, the victim is often doing most of the talking. They are providing the perfect training data for a voice model. And if it is a remote interview over Zoom or Microsoft Teams, you are getting a perfect, front facing video feed of their face in consistent lighting. It is literally the ideal dataset for a Low Rank Adaptation training run. There were reports last year of "ghost recruiters" on platforms like LinkedIn who would conduct these elaborate multi-stage interviews, only to vanish once they had enough footage. The candidates thought they just didn't get the job, but months later, their likenesses were appearing in AI-generated advertisements for products they never endorsed, or worse, being used to scam their own families.
So, has this been discussed by major regulatory bodies? I know the European Union has been quite active with their Artificial Intelligence Act. Where do they stand on this as we move into twenty twenty-six?
The European Union Artificial Intelligence Act is probably the most comprehensive framework we have right now. It was fully phased in over the last year, and it specifically addresses deepfakes by requiring that artificial intelligence generated content be clearly labeled. It also categorizes certain uses of biometrics as high risk or even prohibited. For example, untargeted scraping of facial images from the internet or Closed Circuit Television footage to create facial recognition databases is generally banned under the Act. They are trying to create a "right to physical anonymity" in digital spaces.
That sounds good on paper, but how do you enforce that against a scammer operating out of a different jurisdiction? If I'm in Berlin and the person cloning me is in a basement in a country that doesn't recognize EU law, what does that regulation actually do for me?
That is the twenty-five million dollar question. Regulation is great for legitimate companies, but it doesn't do much to stop a criminal. In the United States, we have seen some movement too. There is the ELVIS Act in Tennessee, which stands for the Ensuring Likeness Voice and Image Security Act. It was a big deal in twenty twenty-four because it was primarily designed to protect musicians from having their voices cloned, but it set a massive legal precedent. It moved the conversation away from just copyright and into the realm of personality rights. By twenty twenty-five, several other states followed suit, and there is now a federal push for the "NO FAKES Act" which would create a standardized right to your own likeness across the entire country.
I remember seeing that. It was interesting because it treats your face and voice like a property right, similar to a trademark. But let's look at the broader implications. If I can be cloned, what does that do to the concept of trust? If you get a video call from me tomorrow asking for the password to our shared server, how do you know it is actually me? We've worked together for years, you know my face, you know my voice. But if the AI knows them too, the foundation of our relationship is suddenly under threat.
This is where we get into what I call the death of video evidence. For a hundred years, we have treated video and audio as the ultimate proof. Seeing is believing, right? But we are entering an era where seeing is just a suggestion. We are going to have to move toward a zero trust architecture for human interaction. It sounds paranoid, but it is the only logical response to a world where the sensory input can be faked with ninety-nine percent accuracy.
Zero trust for humans. That sounds like a very lonely way to live, Herman. Are we really going to have to start interrogating our friends and family every time they call us?
It doesn't have to be lonely, just more verified. Think about how we handle sensitive digital transactions now. We use multi-factor authentication. We might have to start doing that for conversations. If you call me asking for something sensitive, I might have to send a challenge code to your phone, or we might have to have a pre-arranged safe word. In fact, many security experts are now recommending that families have a "distress word" that can't be guessed by an AI, just in case of those "grandparent scams" where a deepfaked voice of a grandchild calls asking for bail money.
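For anyone who wants the multi-factor version of a safe word, here is a minimal Python sketch of a challenge-and-response check built on a secret the two of you exchanged in person. The secret, the names, and the second channel are all assumptions; a plain pre-agreed word works just as well, this is only the nerdier version of the same idea.

```python
# Low-tech challenge code for sensitive calls, assuming both parties already
# share a secret that was exchanged face to face (never over the call itself).
import hmac, hashlib, secrets

SHARED_SECRET = b"exchanged-in-person-never-over-the-call"   # placeholder secret

def make_challenge() -> str:
    """The person receiving the suspicious call generates a random challenge."""
    return secrets.token_hex(8)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """Only someone holding the shared secret can compute the correct response."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    return hmac.compare_digest(respond(challenge, secret), response)

# Usage: read the challenge aloud on the call, have the caller answer over a
# second pre-agreed channel (say, a text message), then verify the answer.
c = make_challenge()
print(c, verify(c, respond(c)))   # True only if the responder knows the secret
```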
A safe word for the family. We have talked about that before, but it feels more urgent now. It is such a simple, low tech solution to a very high tech problem. But what about the more passive capture Daniel mentioned? The cameras in shops and on the street. You can't exactly have a safe word with every security camera you walk past while you're buying milk.
No, and that is where the privacy battle gets really difficult. There is this concept of digital exhaust. Every time you walk into a store that uses facial recognition for analytics—which is becoming the standard in twenty twenty-six—you are contributing to a database. Even if they claim to anonymize it, the biometric data itself is the identifier. You can change your name, you can change your password, but you can't easily change your face. Unless you want to go full cyberpunk and start wearing infrared-emitting LEDs in your hat to blind the sensors.
Which I've seen people do! But for the rest of us, what can we actually do? Are there technical protections against this kind of biometric harvesting that don't involve looking like a science fiction character?
There are some experimental technologies that have gained traction. There is a project called Fawkes, developed by researchers at the University of Chicago, and a similar one called Nightshade. These are tools that slightly alter your photos at the pixel level before you post them online. These changes are invisible to the human eye, but they "poison" the data for the facial recognition algorithms. They call it cloaking. It essentially makes the artificial intelligence see a different version of you, so any model trained on those photos would be slightly off—the eyes might be set a few millimeters too wide, or the jawline slightly different. It breaks the "fit" of the LoRA.
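To make "cloaking" less abstract, here is a hedged NumPy sketch of the general idea. The real Fawkes tool optimizes the perturbation against face-embedding models; this version just uses random noise under a strict budget, purely to show how small the pixel-level change is.

```python
# Conceptual sketch of cloaking: alter a photo by an amount too small for the
# human eye to notice. Fawkes optimizes this perturbation so recognizers learn
# the wrong face; here it is random noise, only to illustrate the scale.
import numpy as np

def cloak(image: np.ndarray, budget: float = 3.0, seed: int = 0) -> np.ndarray:
    """Return a copy of the image changed by at most `budget` intensity levels per pixel."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-budget, budget, size=image.shape)
    return np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)

photo = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)  # placeholder image
cloaked = cloak(photo)

# A shift of at most 3 out of 255 per channel is invisible on screen, yet an
# optimized change of the same size can push the face toward a different point
# in embedding space and break the "fit" of any model trained on those photos.
print(np.abs(cloaked.astype(int) - photo.astype(int)).max())  # <= 3
```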
That is fascinating. It is like digital camouflage. But again, it only works for the photos you choose to upload. It doesn't protect you from the high definition camera hidden in the ceiling of the department store or the smart doorbell across the street.
Exactly. And that brings us back to the Internet Protocol camera issue. So many of these cameras are poorly secured. There are websites like Shodan that allow anyone to search for unsecured Internet of Things devices. You can literally find thousands of live camera feeds from people's homes, nurseries, and businesses just because they never changed the default password or because the manufacturer left a back door open. In twenty twenty-five, there was a major scandal involving a cheap brand of baby monitors that were being scraped by a botnet specifically to collect audio of children's voices for training data.
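If you want to run the bare-minimum self-audit Herman is describing, here is a small Python sketch using the requests library. The camera address and the credential list are placeholders, it assumes plain HTTP basic authentication (many cameras use digest or a web login form instead), and it should only ever be pointed at devices you own.

```python
# Quick self-audit: does your OWN IP camera still answer to factory defaults?
import requests

CAMERA_URL = "http://192.168.1.64/"            # hypothetical local camera address
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("admin", "123456"),
    ("admin", "admin123"),
    ("root", "root"),
]

for user, password in DEFAULT_CREDENTIALS:
    try:
        resp = requests.get(CAMERA_URL, auth=(user, password), timeout=5)
    except requests.RequestException:
        print("Camera not reachable; nothing to test.")
        break
    if resp.status_code == 200:
        print(f"WARNING: camera accepted default login {user}/{password}. Change it now.")
        break
else:
    print("None of the default credentials in this short list worked; good sign.")
```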
That is a basic security failure, but it is also a massive data leak for deepfake training. If a malicious actor can tap into a live stream of someone's home, they have hours and hours of perfect training data. They see how you move in your pajamas, they hear how you talk in a natural environment, they see your facial expressions from every angle while you're watching television. It is the ultimate digital twin kit.
And think about the psychological impact. Daniel mentioned his horror script. Imagine a world where you find out there is a version of you online that is doing things you never did, saying things you never said, and it is indistinguishable from the real you. It is a form of identity theft that goes so much deeper than just someone stealing your credit card number. It is someone stealing your essence, your reputation, and your ability to be believed. We are seeing "reputation insurance" become a real product in twenty twenty-six for this very reason.
It reminds me of that concept of the uncanny valley, but we are moving through it so fast. We used to be able to spot deepfakes because the eyes didn't blink right, or the mouth movements were slightly out of sync with the audio. But with the latest diffusion models and better temporal consistency, those "tells" are vanishing. I saw a video recently where they used a real-time deepfake filter on a live stream, and even when the person turned their head ninety degrees, the mask didn't slip.
They really are vanishing. One of the new tells people talk about now is looking at the background or the way light reflects off the skin—what they call "subsurface scattering"—but even those are being solved by better rendering engines. There are now real-time deepfake filters that can be applied to a live webcam feed during a video call. This isn't just for pre-recorded videos anymore. You can be talking to a clone in real time. This is why the "Proof of Personhood" movement is gaining so much momentum.
So, if we can't trust our eyes and ears, and we can't stop the cameras from capturing our data, are we just doomed to live in a world of constant suspicion? Is there a way out of this?
I think we are going to see a massive push for hardware-based security. This is the idea of using things like the C2PA standard—the Coalition for Content Provenance and Authenticity. It is a protocol where the camera itself cryptographically signs the image at the moment the shutter clicks. It creates a digital "birth certificate" for the photo or video. If a video doesn't have that signature, or if the signature is broken, your computer or phone will flag it as "unverified" or "AI-generated."
That sounds like a great solution for professional journalists or big platforms, but again, a scammer isn't going to use a camera that signs the footage. They are going to use the open source, uncensored versions. And what about the average person's smartphone?
Most major smartphone manufacturers—Apple, Samsung, Google—started integrating C2PA at the hardware level in their twenty twenty-five models. So, in the future, if I send you a video of me, your phone will be able to verify that it actually came from my physical device and hasn't been tampered with. It is a "chain of trust" from the sensor to the screen. But you're right, it doesn't solve the problem of the "uncensored" models. It just creates a "verified" lane and an "unverified" lane for information.
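As a hedged illustration of that chain of trust, here is a minimal Python sketch using the cryptography library. It uses a bare Ed25519 signature rather than the actual C2PA manifest format, and the key handling is drastically simplified, but the verify-or-flag logic has the same shape.

```python
# Conceptual sketch of C2PA-style provenance: the capture device signs the media
# bytes, and anyone downstream verifies the signature with the device's public key.
# The real standard wraps far more metadata (claims, edit history, certificates).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()        # would live in the camera's secure hardware
public_key = device_key.public_key()             # published by the device maker

video_bytes = b"...raw sensor data of the clip..."   # placeholder for the captured media
signature = device_key.sign(video_bytes)             # the "birth certificate"

def verify_provenance(media: bytes, sig: bytes) -> str:
    try:
        public_key.verify(sig, media)
        return "verified: matches the signing device and has not been altered"
    except InvalidSignature:
        return "unverified: treat as possibly edited or AI-generated"

print(verify_provenance(video_bytes, signature))
print(verify_provenance(video_bytes + b"tampered", signature))
```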
I want to go back to Daniel's horror script idea for a second. In his script, people are being cloned and then showing up in ads in Korea. That sounds like a comedic twist on a very real problem: the unauthorized use of likeness for commercial gain. We are already seeing this with actors and influencers. There was that case last year where a famous YouTuber found a deepfaked version of himself selling get-rich-quick schemes in three different languages.
We are. And the crazy thing is, for a long time, the law wasn't even clear on whether that was illegal if the person wasn't a "celebrity." That is why the shift toward "personality rights" for everyone is so important. We need to codify that your digital twin belongs to you, regardless of whether you are famous or not. But even if it is illegal, the damage can be done so quickly. By the time you get a cease and desist order, the video has been seen by millions of people, or the money has already been wired out of the company's account.
This is why I think the ultimate protection isn't technical or legal—it is cultural. We have to develop a healthy skepticism. We have to stop assuming that because we see someone's face and hear their voice, it is actually them. We have to move toward a culture of verification. It is a sad realization, but a necessary one. It is like when we all learned not to trust every email that claimed we had won a lottery in a country we had never visited. We developed a spam filter in our brains. Now we need to develop a deepfake filter.
Exactly. And that starts with understanding the technology. When you know that someone can create a LoRA of you with twenty photos, you become much more careful about where those twenty photos come from. When you know that a voice can be cloned in seconds, you don't give away your voice to every random "AI fun" app that asks for it. You have to treat your biometrics like your social security number—something that is incredibly hard to change once it is compromised.
I think about the cameras in Jerusalem, where we live. There are cameras everywhere in the Old City for security. Those are government-run, but who has access to that data? What happens if that database is breached? We've seen "smart city" data leaks in other parts of the world over the last few years.
That is the big one. State-level biometric databases are the ultimate prize for bad actors. If you can compromise a national facial recognition database, you don't even need to hunt for photos on Instagram. You have the perfect training data for every citizen in the country. This is why the debate over "biometric sovereignty" is going to be the biggest privacy battle of the late twenty-twenties. Who owns the mathematical representation of your face? You, or the company that manufactured the camera that saw you?
It is a lot to take in. Daniel really hit on a nerve with this one. It feels like we are living through a transition period where the old rules of reality are being rewritten. We are moving from the "Information Age" into the "Verification Age."
We definitely are. But I don't want to leave everyone feeling completely hopeless. Technology gave us this problem, but it also gives us tools to manage it. We just have to be proactive. We have to be the ones in control of our digital twins, not the other way around. Awareness is the first step. If you know the trick, it is much harder for the magician to fool you.
Well said. I think the takeaway for me is that we need to be as intentional about our biometrics as we are about our passwords. Don't just leave your face and voice lying around for anyone to pick up. And if you are applying for a job, maybe do a little extra digging into who is actually on the other side of that Zoom call.
And maybe keep an eye on those Internet Protocol cameras. If they aren't serving a real purpose, maybe it is time to turn them off. Or at the very least, make sure they aren't shouting your life out to the entire internet because you're still using "admin one two three" as your password.
Good advice. And I'll be keeping a close eye on you, Herman. If you start acting too much like a perfectly optimized version of yourself, I'm going to start asking for our secret safe word. Which, for the record, is "Pickled Kumquat."
Well, now everyone knows it, Corn! You just compromised our security!
Oh, right. Well, I'll think of a new one. This has been a fascinating dive into a topic that is only going to get more relevant. Daniel, thanks for the prompt—it definitely gave us a lot to chew on, even if it is a bit unsettling.
Yeah, it is a weird world we are building. But talking about it is the first step toward not getting lost in it. We have to stay human in a world of clones.
Absolutely. And hey, if you are listening to this and you found this discussion helpful—or maybe just the right amount of terrifying—we would really appreciate it if you could leave us a review on your favorite podcast app. It really helps other people find the show and join the conversation about these weird prompts.
It genuinely does. We love seeing the feedback and it helps us keep going deep on these topics. We are always looking for more prompts that challenge our understanding of technology and privacy.
You can find My Weird Prompts on Spotify, Apple Podcasts, and pretty much everywhere else you listen. Also, check out our website at myweirdprompts dot com for the full archive and a contact form if you have your own weird prompt you want us to explore.
You can also email us directly at show at myweirdprompts dot com. We love hearing from you, even if it's just to tell us your own deepfake horror stories.
Thanks for listening, everyone. We will be back soon with more deep dives and brotherly banter.
Until next time, stay curious and maybe double-check those camera settings before you sit down for your next interview.
Goodbye, everyone!
Goodbye!