Seven hundred episodes, Herman. Can you believe we have actually sat in these chairs and talked into these microphones seven hundred times? It feels like just yesterday we were recording episode one in your basement with those foam pads taped to the walls.
Herman Poppleberry here, and honestly, Corn, my throat is a little sore just thinking about it. But what a milestone. Seven hundred conversations, and we are still finding things that blow our minds. Back then, we were talking about basic image generators that couldn't even do fingers correctly. Now, look at where we are.
It helps that the world keeps getting weirder. And today's prompt from Daniel is a perfect example of that. He is looking at the shift from AI music, which we have covered quite a bit lately, to the world of AI video. He is specifically curious about how the big players, like Netflix and the major studios, are handling this. Are they embracing it for things like B-roll, or is there a wall of policy standing in the way?
It is such a timely question because we are seeing this exact transition happen in real time. Music was the canary in the coal mine, in a way. It was easier to generate, smaller files, less compute power required. We had that explosion with Suno and Udio where suddenly anyone could make a high-fidelity pop song. But video? Video is the final boss of generative AI. It is the most data-intensive, computationally expensive, and legally complex medium we have.
Right, and Daniel mentioned that transition from being a fun, experimental tool to something that actually threatens, or at least disrupts, the professional industry. In music, we saw that inflection point where people went from laughing at weird AI songs to realizing, wait, this actually sounds like a radio hit. Now, video is hitting that same point. We are in February of twenty twenty-six, and the clips we are seeing from Sora three and Runway Gen-four are indistinguishable from high-end cinematography.
Exactly. And the stakes in video are arguably much higher because the budgets are so much larger. When you are talking about a two hundred million dollar feature film, the incentive to shave off even five percent of the production cost using AI is massive. But so is the pushback from the people whose jobs are on the line. We saw the strikes of twenty twenty-three and twenty-four, but the "peace treaties" signed then are already being tested by the technology of twenty twenty-six.
So let’s start with the current state of things. Daniel mentioned that AI video is still expensive and experimental. Is that still the case here in early twenty twenty-six, or have we moved past that?
We are in a bit of a transition period. A year ago, generating a high-quality ten-second clip might have cost you a significant amount in compute time and subscription fees. Today, the efficiency has improved, but it is still not "cheap" in the way a text prompt is. If you are a solo creator, you are probably paying three or four hundred dollars a month for a pro-tier subscription to something like Runway or the latest Luma model. But for a studio like Netflix? The cost of the AI isn't the issue. It is the legal and union-related cost that they are worried about.
That is an important distinction. For a YouTuber, the "cost" is the subscription. For Netflix, the "cost" might be a massive lawsuit or a strike by the Screen Actors Guild. Let’s talk about those policies. What is the word on the street regarding major studio guidelines?
Well, if you look at the major agreements that came out of the big strikes, the groundwork was laid there. Most studios, including Netflix and Disney, have had to be very careful. Their current internal policies generally fall into three categories: disclosure, consent, and copyrightability. Netflix, specifically, has implemented what they call the "Synthetic Media Transparency Framework."
That sounds very corporate. What does it actually mean for a director working on a Netflix Original?
It means that every single frame of AI-generated content must be logged in a central database. Disclosure seems like the easiest one, but it is actually the most scrutinized. Netflix has been experimenting with internal AI tools for things like "smart" color grading and automated subtitling for years. No one cares about that. But the moment you use AI to generate a landscape or a background character, the unions want a label. There is a push for a universal "AI-Assisted" tag in the credits, similar to how we credit VFX houses.
And what about the "B-roll" aspect Daniel mentioned? That seems like the most logical entry point. If I need a shot of a generic rainy street in London at night, and I don’t want to fly a crew out there, why not just generate it?
That is exactly where the friction is. Studios love the idea of "synthetic B-roll." It is a massive cost saver. Think about the logistics of a B-roll shoot: permits, lighting, travel, insurance. You can replace a fifty-thousand-dollar night shoot with a five-hundred-dollar compute run. But the policy right now, especially at Netflix, is very conservative. They are terrified of the copyright issue. Because current United States law—specifically the rulings we saw from the Copyright Office in twenty twenty-four and twenty twenty-five—generally says that AI-generated content cannot be copyrighted without "substantial human creative control."
That is a fascinating legal trap. If I make a movie where ten percent of the shots are AI-generated B-roll, can someone else just take those shots and use them in their own movie without paying me?
Theoretically, yes. If those shots are purely generative and haven't been "substantially transformed" by a human, they might fall into the public domain. Now, the studios are trying to get around this by having human artists "paint over" or heavily edit the AI output so they can claim human authorship. They call it "AI-Augmented Artistry." But the legal precedent is still being written. We are waiting for a landmark case to decide exactly how much "painting over" is required to make it copyrightable.
It feels like the music industry went through this faster because the barrier to entry was lower. With Suno, you just type a prompt and get a song. With video, even "scrappy" AI video requires a lot of technical know-how to make it look professional.
It does. And that is why the debate in film is focusing more on "tools" rather than "replacements." In the music industry, the fear was that AI would replace the songwriter entirely. In film, the conversation is more about whether AI will replace the camera crew, the location scout, or the background actors. You mentioned the "Suno moment" for video—we are seeing that now with tools like Kling and Sora. They allow for "temporal consistency," which was the big hurdle. The characters don't morph into blobs every three seconds anymore.
You mentioned consent earlier. That has to be a huge part of the Netflix policy, especially after all the drama with digital twins. We saw those high-profile cases in twenty twenty-five where actors were finding their likenesses in "test footage" they never shot.
Huge. Netflix, in particular, has a very strict policy regarding the "digital likeness" of actors. They cannot use AI to modify a performer's face or voice without explicit, negotiated consent and additional compensation. This was a major sticking point in the SAG-AFTRA negotiations. You can't just take a side character and use AI to make them look like they are in a different scene. Even for "background atmosphere" actors, the studios now have to pay a "digital usage fee" if they scan them and use their likeness in future episodes.
I remember seeing some behind-the-scenes stuff where they were using AI to fix "eye contact" in post-production. Like, if an actor was looking slightly off-camera, they could use a generative model to nudge their pupils. Does that fall under the same policy?
It is a gray area, but generally, studios view that as "technical cleanup," similar to digital makeup or wire removal. Where it gets controversial is when you are creating brand new performances. There is a lot of talk about "synthetic performers" who don't exist in real life. If Netflix produces a show where the lead actor is entirely AI-generated, they don't have to pay a human actor, and they don't have to worry about union rules for that "person."
But they also can't copyright that character's appearance easily, right?
Exactly. It is a double-edged sword. You save money on the actor, but you lose the "intellectual property" protection that makes the character valuable for merchandising or sequels. If you can't own the character, you can't stop a toy company from making action figures of your "lead actor." That is why we haven't seen a "Pure AI" blockbuster yet. The business model of Hollywood is built on owning things, and you can't own AI output yet.
It is interesting to compare this to Daniel’s point about the music industry. In music, the big labels like Universal and Sony are suing the AI companies for training on their catalogs. Is that happening in the video world too? Are the movie studios suing the AI video generators?
It is actually a bit more complicated. Some studios are suing, but others are partnering with the AI companies. For example, we saw Warner Brothers and Disney looking into licensing their entire back catalogs to AI companies so those companies can build "Studio-Specific" models. Imagine a Disney-only AI video generator that is trained only on Disney films. That way, the output is "clean" from a copyright perspective because the studio owns the training data.
Oh, that is a clever workaround. It turns the "threat" into a proprietary tool. "Our AI is better than your AI because we have a hundred years of high-quality footage to train it on."
Exactly. And that is where the music industry might have missed a trick. The labels spent so much time trying to stop the technology that they didn't focus on building their own "walled garden" models early enough. The film industry, seeing what happened to music, is trying to control the technology rather than just banning it. They want to be the ones who own the "Director-GPT" that everyone uses.
Let’s go back to the B-roll idea for a second. Daniel mentioned that as a YouTuber, he finds it helpful for shots that don't look "stock library-ish." Why is AI B-roll better than traditional stock footage?
Because it is bespoke. If you go to a stock site like Getty or Shutterstock, you are looking for a video that already exists. You might find "man walking in rain," but he is wearing a red jacket and you need him to be wearing a blue one. With AI, you can specify every detail. "Man in blue raincoat walking down a neon-lit street in Tokyo during a light drizzle, forty-five-degree angle, cinematic lighting." You get exactly what you need for your specific story. It eliminates the "close enough" compromise.
And for a big production, that means they don't have to settle for "good enough." They can generate the perfect transition shot. But Herman, what about the technical limitations? Daniel said it is still "experimental." What are the actual glitches we are seeing in twenty twenty-six?
The biggest hurdle for AI video is still "temporal consistency" and "physics awareness." In a single frame, AI can look amazing. But in a five-second clip, the man's jacket might change shades of blue, or the raindrops might start moving sideways, or his face might slightly morph. We call it "shimmering" or "hallucinating motion." For a high-end Netflix production, that "shimmer" is unacceptable. It looks cheap. It takes the viewer out of the "suspension of disbelief."
So, for now, if we see AI in a Netflix show, it is likely buried in the background or used for very specific visual effects that are then touched up by a human artist.
Most likely. There was actually a minor scandal last year where a show used AI-generated images in the background of a scene and people spotted the "AI artifacts"—you know, the classic six-fingered hand or a distorted face in a photo on the wall. The backlash was intense. It makes the studio look lazy. That is the social cost. It is not just legal or financial; it is a brand risk. If Netflix becomes known as the "AI content factory," do people stop valuing their original programming?
That is the million-dollar question. In the music industry, there is a growing "human-made" movement. People are seeking out artists who play real instruments and write their own lyrics as a reaction to the flood of AI music. I suspect we will see the same thing in film. You will see "Shot on Film" or "One Hundred Percent Human Crew" becoming a marketing badge of honor.
It is the "Vinyl Record" effect but for visual media. We crave the soul, the intent, and even the mistakes of human creators. But let’s not discount the "scrappy" side Daniel mentioned. For an independent filmmaker with a budget of five thousand dollars, AI video is a godsend. It allows them to tell stories that would have required a five-million-dollar budget ten years ago. It is democratizing the "visual" while putting the pressure back on the "vision."
That is the democratization argument. It is the same one we heard with digital cameras, then with YouTube, and now with AI. It lowers the barrier for who can create. But does it also lower the ceiling for what we consider "special"?
It definitely raises the quality floor, and that is exactly what lowers the ceiling. If everyone can generate a "cinematic" shot of a dragon, then a cinematic shot of a dragon is no longer impressive. The value moves away from the "visual" and back to the "storytelling." The pacing, the emotional resonance, the character development—those are things AI still struggles to get right on a structural level. It can give you a beautiful shot, but it can't tell you why that shot matters in the context of a ninety-minute film.
I want to dig into the "expensive" part of Daniel's prompt. He said it is still very costly. Why is video so much more resource-intensive than music or text?
It comes down to the sheer amount of data. Think about a single frame of high-definition video. That is roughly two million pixels. Now, multiply that by twenty-four or thirty frames per second. For a ten-second clip, you are looking at hundreds of millions of data points that all have to be "predicted" by the AI model in a way that remains consistent from one frame to the next. The GPU compute power required is staggering.
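To put rough numbers on that, here is a minimal back-of-envelope sketch in Python using the figures I just quoted. It only counts raw pixels, and real video models operate on compressed latent representations, so treat it as an upper bound rather than a literal workload:

```python
# Back-of-envelope: raw data volume of a short HD clip,
# using the rough figures quoted above (1080p, 24 fps, 10 seconds).
# Real models work on compressed representations, so this
# pixel-level count is an upper bound, not a literal workload.

width, height = 1920, 1080   # one HD frame is ~2.07 million pixels
fps = 24                     # frames per second
seconds = 10                 # clip length

pixels_per_frame = width * height              # 2,073,600
total_pixels = pixels_per_frame * fps * seconds

print(f"Pixels per frame: {pixels_per_frame:,}")
print(f"Total pixels in {seconds}s: {total_pixels:,}")  # ~498 million
# Count each RGB channel separately and you are near 1.5 billion values,
# all of which have to stay consistent from one frame to the next.
```

Half a billion values, every one of which has to agree with its neighbors across space and time.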
And the AI isn't just "drawing" the frames; it is trying to understand the physics of the scene.
Right! It has to "know" that if a ball is thrown, it follows a certain arc, and if light hits a surface, it reflects a certain way. To do that, the model has to run through billions of parameters for every single frame. The compute bill for training these models runs into the hundreds of millions of dollars. That is why companies like Nvidia are the ones really winning this "war"—they are selling the shovels for the AI gold rush.
So, when Daniel says it is expensive, he is talking about both the enormous server bills racked up training these models and the real cost of running an "inference," which is the fancy word for just generating the video.
Exactly. To give you a sense of scale, generating a high-quality AI video clip can consume as much electricity as charging your phone hundreds of times. When you scale that up to a professional production level, the "electricity bill" for your movie starts to look like a major line item. This is why we haven't seen "AI Netflix" yet where you just type in a prompt and get a custom movie. The compute cost would bankrupt them.
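And just so that phone comparison doesn't sound hand-wavy, here is how it pencils out. Every input below is an assumption I am picking purely for illustration, not a published figure, since the real numbers vary wildly by model, hardware, and clip length:

```python
# Hedged estimate of inference energy for one high-quality clip.
# ALL inputs are illustrative ASSUMPTIONS, not published figures;
# actual numbers depend heavily on the model, hardware, and clip.

gpus = 8              # assumed number of GPUs serving one generation job
watts_per_gpu = 700   # assumed draw of a high-end datacenter GPU
minutes = 40          # assumed wall-clock time for one high-quality clip

clip_kwh = gpus * watts_per_gpu * (minutes / 60) / 1000  # ~3.7 kWh

phone_wh = 15         # a full smartphone charge is roughly 12-19 Wh
charges = clip_kwh * 1000 / phone_wh

print(f"Energy per clip: {clip_kwh:.1f} kWh")
print(f"Equivalent phone charges: {charges:.0f}")  # around 250 charges
```

Change any one of those assumptions and the total swings a lot, but the order of magnitude is the point: one clip lands in the "hundreds of phone charges" range.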
Which brings us back to the studios. Netflix has the money, but they also have shareholders who want to see margins improve. If they can replace a ten-thousand-dollar B-roll shoot with a five-hundred-dollar AI generation, that is a huge win for the bottom line.
It is. But they are playing a very long game. They don't want to destroy their relationship with the talent. If you are a top-tier director, you don't want Netflix telling you to "just use AI" for your transition shots. You want your cinematographer to have control. So, the policy right now is very much "AI is a tool for the artists, not a replacement for the artists." They are positioning it as a way to "enhance" the creative process.
Do you think we will see a "Suno moment" for video? A moment where a completely AI-generated short film goes viral and everyone says, "Uh oh, the professionals are in trouble"?
Honestly, I think we have already had a few of those moments in the indie space. We have seen "trailers" for fake movies that look incredibly real. But a trailer is easy because it is just a series of disconnected, cool-looking shots. Making a coherent twenty-minute story with the same characters, the same lighting, and a logical progression? We aren't quite there yet. The "temporal consistency" of the narrative is the next big hurdle.
That is a great point. AI can give you a "shot," but it can't yet give you a "scene" where characters interact in a meaningful way over several minutes without something looking "off." The "acting" in AI video still feels a bit wooden, or slips into the "uncanny valley."
Exactly. It is like having a thousand amazing painters but no director. You have all these beautiful frames, but they don't know how to talk to each other. The AI doesn't understand "subtext" or "emotional beats." It just understands "pixels that look like a person crying." There is a difference between a visual of a person crying and a performance that makes the audience cry.
So, what are the practical takeaways for our listeners who might be creators themselves? If they want to use AI video like Daniel does, what should they be looking out for in terms of the "rules"?
First, be aware of the platform you are publishing on. YouTube, for example, now requires you to disclose if you have used "altered or synthetic" content that looks real. This is part of their twenty twenty-four and twenty twenty-five policy updates. If you generate a person saying something they didn't say, or a realistic event that didn't happen, you have to check that box.
And if you don't?
You risk having your video taken down or your account penalized. The platforms are under a lot of pressure to fight deepfakes and misinformation, especially with the elections we have seen recently. So, transparency is your best friend. Don't try to hide it; embrace it as part of your creative process.
Second, don't expect it to be a "one-click" solution for professional work. The best AI video creators right now are using a "hybrid" approach. They generate a base layer with AI, then use traditional tools like After Effects or DaVinci Resolve to clean it up, fix the glitches, and color-match it to their other footage.
That is the "Pro" way to do it. It is actually more work in some ways because you are fighting the AI's mistakes. But it gives you a result that is far beyond what you could do alone. You are using the AI as a "digital clay" that you then sculpt into something usable.
And third, keep an eye on the copyright situation. If you are making something you hope to sell or license later, be very careful about how much of it is "pure" AI. You might find yourself in a position where you can't actually prove you own the rights to your own work. The US Copyright Office has been very firm: no human authorship, no copyright.
That is the big one. We are waiting for a landmark Supreme Court case on this, honestly. Until then, it is the Wild West. If you are a professional, you should be documenting your "human input"—your prompts, your edits, your storyboards—to prove that the AI was just a tool in your hands.
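If you want something concrete, even a tiny logging script covers most of it. This is just an illustrative sketch, and the file name and record fields here are hypothetical, not any official standard:

```python
# Minimal provenance log: append a timestamped, hashed record of each
# human input (prompt, edit, storyboard) to a local JSONL file.
# Illustrative sketch only; file name and fields are hypothetical.
import hashlib
import json
import time

LOG_PATH = "provenance_log.jsonl"  # hypothetical file name

def sha256_of(path):
    """Hash an artifact so you can later prove it hasn't changed."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def log_human_input(kind, description, artifact_path=None):
    """Record one piece of human input in the creative process."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "kind": kind,  # e.g. "prompt", "edit", "storyboard"
        "description": description,
        "sha256": sha256_of(artifact_path) if artifact_path else None,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

log_human_input("prompt", "Man in blue raincoat, neon-lit Tokyo street, light drizzle")
log_human_input("edit", "Color-matched the AI plate to principal footage in Resolve")
```

It is not a legal shield on its own, but a dated, tamper-evident trail of your prompts and edits is exactly the kind of "human authorship" evidence lawyers say you will want.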
It is funny, Herman. We started this show talking about weird prompts, and now the "prompts" are literally becoming the way we create our reality. Daniel's prompt today is about how we manage that shift. It is a shift from "creating" to "curating."
It is. We are becoming directors of machines. And while that is exciting, I think we both agree that there is something irreplaceable about the human struggle of making art. The fact that it is hard is part of why it is good. The "friction" of reality is what gives art its texture.
I couldn't agree more. If it is too easy, it loses its "weight." But hey, for B-roll of a rainy street in London? I will take the easy way every time. I don't need "soul" in my transition shots; I just need them to look good.
Fair enough. Save the "weight" for the performances and the story. Use the machines for the "heavy lifting" of the background. That seems to be the direction the major studios are heading too. They are using AI to handle the "drudgery" so the humans can focus on the "magic."
So, as we wrap up episode seven hundred, I am looking at the future of this show. Maybe by episode eight hundred, we will be an AI-generated video podcast? We could have digital avatars that look way better than we do in real life.
Don't you dare, Corn. I like my seat. And I think our listeners like the fact that we are real, even if we are talking about things that aren't. There is a "parasocial" connection that you just can't get with a synthetic host. At least, not yet.
You are right. There is no AI that can replicate the brotherly teasing we have perfected over seven hundred episodes. They haven't built a chip powerful enough for that much sarcasm.
They would need a whole data center just to process your jokes, Corn.
Well, thank you, Daniel, for sending in this prompt. It was a great way to mark the big seven-zero-zero. It really highlights how far the conversation has moved since we started. We went from "can AI write a poem?" to "how is Netflix restructuring its entire production pipeline?"
It really has been a wild ride. And the pace is only accelerating. By the time we hit episode eight hundred, we might be talking about AI that can generate entire interactive worlds in real time.
And to everyone listening, whether this is your first episode or your seven hundredth, thank you for being part of this journey with us. We genuinely love exploring these weird corners of technology and culture with you. It is your prompts that keep us going.
We really do. And if you have been enjoying the show, we would really appreciate it if you could leave us a review on your podcast app or Spotify. It actually makes a huge difference in helping other people find "My Weird Prompts." The algorithms are a bit like the AI we talk about—they need data to know what is good.
Yeah, a quick rating or a few words about what you like helps the algorithms realize we are worth listening to. We would love to hear from you. What do you think about AI in movies? Does it ruin the experience for you, or are you excited for the new stories it will enable?
You can find all our past episodes—all seven hundred of them—at myweirdprompts dot com. There is an RSS feed there for subscribers and a contact form if you want to get in touch. We also have a breakdown of the sources we used for today's episode if you want to dive deeper into the Netflix policies.
And you can always reach us directly at show at myweirdprompts dot com. We are on Spotify, Apple Podcasts, and pretty much everywhere else you listen to podcasts.
This has been "My Weird Prompts." I'm Herman Poppleberry.
And I'm Corn. Until next time, keep your prompts weird and your reality... well, as real as you can make it.
Goodbye everyone!
Bye!