#1121: The Contributor Paradox: Is Open Source Dying?

With 81% of new code moving to private repos, the era of building in public is at a crossroads. Is AI killing the open source dream?

Episode Details

Duration: 30:11
Pipeline: V5
TTS Engine: chatterbox-regular
AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The landscape of software development is undergoing a seismic shift. Recent data from the 2025 GitHub Octoverse report reveals a startling statistic: 81.5% of new contributions are now occurring in private repositories. For over a decade, the industry has championed the "build in public" ethos, treating a public GitHub profile as a developer’s ultimate resume. Now, that era of transparency appears to be hitting a brick wall as developers pull their curtains shut.

The Rise of the Contributor as Customer

At the heart of this retreat is a growing tension between independent creators and massive AI research labs. This has led to the "contributor-as-customer" paradox. In this scenario, developers spend their time solving complex architectural problems and publishing their logic under permissive licenses. AI labs then ingest this code to train large language models or build proprietary features.

The result is a parasitic relationship: the lab refines the developer's "cognitive labor" behind closed doors and sells it back to them as a paid subscription. The person who framed the house is effectively forced to pay a fee to enter it.

The Information Asymmetry

This dynamic is fueled by a massive information asymmetry. Major platform owners have a front-row seat to every experimental repository and trending architectural shift. By observing the collective intelligence of the developer community in real-time, these entities can identify winning patterns before the rest of the world.

In the age of AI, the line between inspiration and exploitation has blurred. When models are trained on millions of public repositories, they distill human logic into a utility that strips away the identity of the original creator. This structural market failure treats human creativity as a raw commodity, similar to iron ore or crude oil, offloading research and development costs onto solo contributors.

The Limits of Traditional Licensing

The current crisis suggests that traditional permissive licenses, such as MIT and Apache, may be ill-equipped for the AI era. These licenses were written for a world in which the main concern was someone redistributing or reselling the code itself; they were not built for a world where a model ingests a developer's logic in order to become that developer's replacement.

While the community has successfully fought back in the past through high-profile forks—such as the creation of OpenTofu and Valkey—solo developers lack the resources to launch foundation-backed movements. This has sparked a search for new "fairness mechanisms" that sit between the binary of totally open and totally closed software.

From Free Speech to Fair Trade

Emerging models like "fair-code" or sustainable use licenses offer a potential middle ground. These licenses allow code to remain visible and modifiable for internal use but require commercial entities to pay if they sell the software as a service. This shift from "Free as in Speech" to "Fair as in Trade" acknowledges that the value often lies in the service itself, and the original creator deserves a share of that revenue.
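The distinction these licenses draw can be pictured with a toy decision rule. This is a hypothetical sketch of the fair-code idea as described above, not the text or legal effect of any actual license:

```python
from dataclasses import dataclass

@dataclass
class Usage:
    """Hypothetical description of how a company uses fair-code software."""
    modifies_source: bool   # viewing and modifying the code is permitted
    internal_only: bool     # used inside the company, not resold
    sold_as_service: bool   # offered to third parties as a paid service

def requires_commercial_license(usage: Usage) -> bool:
    """Toy model of a sustainable-use rule: viewing, modifying, and
    internal use are free; selling the software as a service is not."""
    return usage.sold_as_service

# Internal automation built on the code: no fee under this sketch.
assert not requires_commercial_license(
    Usage(modifies_source=True, internal_only=True, sold_as_service=False))

# Wrapping the same code in a paid managed service: fee required.
assert requires_commercial_license(
    Usage(modifies_source=True, internal_only=False, sold_as_service=True))
```

The point of the rule is that the trigger is not what you do with the source but whether you resell it as a service, which is exactly the free-rider scenario these licenses target.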

As we move forward, the goal is to transform open source funding from an act of charity into a standard business expense. Whether through revenue-sharing models or programmatic attribution via digital ledgers, the industry must find a way to ensure that those building the foundation of the AI future aren't left outside in the cold.
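One way to picture the revenue-sharing idea is a pro-rata split of a fixed percentage of service revenue across the open source dependencies a product relies on. This is a minimal sketch; the function name, weighting scheme, and numbers are illustrative assumptions, not the terms of any real program:

```python
def split_revenue(revenue: float, share_pct: float,
                  dependency_weights: dict[str, float]) -> dict[str, float]:
    """Distribute share_pct of revenue across dependencies,
    proportionally to each dependency's weight (e.g. usage volume
    or contribution count). Purely illustrative."""
    pool = revenue * share_pct
    total = sum(dependency_weights.values())
    return {name: pool * w / total for name, w in dependency_weights.items()}

# A service earning $100,000 that pledges 2% to its open source stack:
payouts = split_revenue(100_000, 0.02,
                        {"agent-framework": 3.0, "memory-lib": 1.0})
# agent-framework receives $1,500; memory-lib receives $500.
```

The hard part, as the episode notes, is not the arithmetic but the attribution: producing trustworthy weights for who contributed what, which is where proposals like programmatic attribution via digital ledgers come in.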


Full Transcript: Episode #1121: The Contributor Paradox: Is Open Source Dying?

Daniel's Prompt

Daniel: Custom topic: I contribute a lot of open source projects to GitHub, especially in the field of agentic AI. Now and again I see a vendor roll out a feature that I open-sourced a few months ago as a test script. Once |

Context: Current Events Context (as of March 2026). Recent Developments: GitHub Octoverse 2025 (October 2025): Over 4.3 million AI-related repositories now exist on GitHub — a 178% year-over-year
Corn
So Herman, I was looking at the latest data from the GitHub Octoverse twenty twenty-five report this morning, and one number absolutely jumped out at me. Eighty-one point five percent. That is the percentage of new contributions that are now happening in private repositories. It feels like the era of building in public is essentially hitting a brick wall. We have spent fifteen years telling developers that their GitHub profile is their resume, that transparency is the ultimate virtue, and that the rising tide of open source lifts all boats. But suddenly, the tide is going out, and we are seeing who has been swimming naked.
Herman
It is a staggering shift, Corn. Herman Poppleberry here, by the way, for anyone just joining us. And you are right, that eighty-one point five percent figure is a massive red flag for the health of the open source ecosystem. We are seeing a fundamental retreat. People are pulling their curtains shut. And honestly, after the prompt our housemate Daniel sent over this morning, I can see exactly why they are doing it. Daniel has been deep in the trenches of agentic AI development lately, contributing to some open source frameworks, and he is feeling that specific kind of sting that comes when you see your own logic, your own architectural patterns, suddenly show up in a major commercial lab's feature release without so much as a footnote of credit. It is not just about the code anymore; it is about the cognitive labor that goes into solving these high-level orchestration problems.
Corn
It is that classic feeling of being the uncredited research and development department for a multi-billion dollar corporation. Daniel was mentioning that he published a very specific test script for agentic memory handling—a way to handle recursive state updates in long-running loops—and then three months later, a certain major lab releases a memory feature that uses almost the exact same recursive logic. Now, legally, if you are using an MIT or Apache license, they are totally within their rights to do that. But ethically? It creates this weird, parasitic relationship. We call it the contributor as customer paradox, and I think it is the biggest threat to innovation we have seen in decades. It is the core of the tension we are exploring today: the idea that the very people building the foundation are being forced to buy back the house they framed.
Herman
It really is a paradox. You spend your weekends and late nights building the very tools that these massive labs then ingest, refine behind closed doors, and then sell back to you as a subscription service. You are literally paying for the privilege of using a polished version of your own brainpower. And the reason this is happening now, more than ever, is because of the information asymmetry that exists in the AI space. Think about who owns the infrastructure where this innovation happens. Microsoft owns GitHub. They have a front-row seat to every single experimental repo, every trending architectural shift, and every clever workaround for model limitations before the rest of the world even knows they exist. They are not just hosting the code; they are observing the collective intelligence of the entire developer community in real-time.
Corn
That is the part that feels almost conspiratorial, even though it is just basic market dynamics. If you are the platform owner, you have a massive first-mover advantage. You do not even need to steal code in the literal sense of copy-pasting. You just need to see the signal. You see twenty different developers all trying to solve the same latency issue in agentic loops using a specific new approach, and you realize, oh, that is the winning pattern. You then implement that pattern into your proprietary model or your managed platform, and suddenly the open source version looks obsolete. This brings us to the first major question Daniel posed: the attribution problem. When a lab mirrors your work, is it inspiration or is it exploitation?
Herman
In the age of Large Language Models, the line between inspiration and exploitation has been completely erased. When these models are trained on the entirety of public GitHub, they are essentially distilling the logic of millions of developers into a single weights file. When a developer like Daniel puts a clever script out there, it is immediately ingested. It becomes part of the training set. So when the lab's engineers ask their own internal models how to solve a memory problem, the model spits out Daniel's logic. The engineers might not even know they are "borrowing" from a specific person. The system is designed to strip away the identity of the creator and leave only the utility of the code. That is a structural market failure. We are treating human creativity as a raw commodity, like iron ore or crude oil.
Corn
And the defense from the big labs is always the same. They say, look, ideas are cheap, execution is everything. They argue that their massive capital, their compute clusters, and their ability to scale the product is what actually creates the value. They say, "Sure, Daniel had a cool script, but we turned it into a global API with five-nines of availability." But that completely ignores the initial research and development cost that was offloaded onto the solo contributor. In any other industry, that kind of research and development would cost millions of dollars in salaries. In the AI world, it is currently being treated as a free natural resource, like air or sunlight. But developers are not a natural resource. They are people with bills to pay and a need for recognition.
Herman
This "Idea versus Execution" fallacy is the primary justification for what we call "Open Washing." We have talked about this before, specifically back in episode six hundred seventy, where we looked at the branding illusion of open weights versus truly open source AI. But in twenty twenty-six, the problem has evolved. Now, companies are using the open source label as a marketing tactic to get developers into their ecosystem, but they are keeping the most capable parts of the stack under lock and key. They give you the "open" SDK, which is basically just a set of hooks into their closed-source brain. You do the work of building the integrations, the plugins, and the community, and they keep the margins.
Corn
It is a bait and switch. They give you the open source SDK to make it easy to build on their platform, but the intelligence itself—the weights, the training data, and the actual orchestration logic—stays proprietary. It creates a dependency. You think you are participating in an open ecosystem, but you are actually just building a custom integration for a closed product. It is a brilliant business strategy, but it is a terrible way to foster long-term scientific and technical progress. It turns the developer from a peer into a beta tester who pays for the privilege.
Herman
Well, the community is not just taking it lying down. Remember twenty twenty-four? That was really the year the community struck back. We saw the whole HashiCorp and Terraform relicensing saga. HashiCorp moved to the Business Source License, which basically said you can use our stuff unless you are a competitor. And the community's response was swift. They forked it. They created OpenTofu under the Linux Foundation. We saw the same thing with Redis and the birth of Valkey. These were massive moments because they proved that the community still has the power to reclaim the commons if a corporate steward turns into a corporate landlord.
Corn
Those forks were successful because they had the backing of other big players like the Linux Foundation, Amazon Web Services, and Google. But what about the solo developer? What about the person like Daniel who is writing a clever script for a niche problem? They do not have the resources to launch a foundation-backed fork. For them, the current licensing models, like MIT or Apache, might actually be failing them. These licenses were designed for a pre-AI world where the primary threat was someone selling your code. They were not designed for a world where the threat is a massive model ingesting your logic to become your replacement. This is where the "Contributor-as-Customer" paradox becomes really painful. You build the ladder, and then the lab pulls it up and charges you a climbing fee.
Herman
That is a great point. The traditional permissive licenses are almost too generous in the age of large language models. They allow for total ingestion without attribution in the final output of the model. This is where we start getting into the really interesting emerging fairness mechanisms. People are experimenting with new ways to capture value without completely closing off the code. We need to move beyond the binary of "totally open" or "totally closed." Have you looked into what n8n is doing with their sustainable use license?
Corn
I have, and it is a fascinating middle ground. They call it fair-code. The idea is that the code is visible, you can modify it, you can use it for free for your own internal stuff, but if you want to sell it as a service, you have to pay. It is a way to prevent the free rider problem where a giant cloud provider just wraps your hard work in a managed service and keeps all the profit. It is not open source by the strict definition of the Open Source Initiative, but it might be more honest for the world we live in today. It acknowledges that the "service" is where the money is, and the creator deserves a piece of that service revenue.
Herman
It feels more sustainable for the creator. But then you have the purists who say that if it is not OSI-compliant, it is just proprietary software with extra steps. I think we need to move past that binary thinking. If the choice is between eighty-one percent of code being hidden in private repos or code being available under a fair-code license, I would take the fair-code license every time. At least then the knowledge is shared, even if the commercial rights are restricted. We are seeing a shift from "Free as in Speech" to "Fair as in Trade."
Corn
There is also the revenue-sharing model, which I think is incredibly promising. Look at what Nextcloud does with their partner program. They have a model where the people who provide the underlying service or infrastructure share a portion of the revenue back with the core developers. It is a formal recognition that the software and the service are two parts of the same value chain. If we could find a way to automate that for AI agents, it could change everything. Imagine if your agentic framework had a built-in mechanism where every time it was used in a commercial product, a micro-payment was routed back to the contributors of the libraries it relies on.
Herman
That sounds like a dream, but the technical implementation is a nightmare. How do you track that? This is where some people are pointing toward blockchain-based attribution, like the Open Contribution Tokenized License, or OCTL. The idea is that every contribution is recorded on a ledger, and when a model is trained or a feature is run, the system can programmatically attribute the logic back to the original authors. It sounds very twenty twenty-one in its hype, but the actual utility in twenty twenty-six is becoming clearer as we deal with these attribution gaps. If the model can cite its sources, it can also route value to those sources.
Corn
I am still a bit skeptical of the blockchain side of things just because of the overhead, but the core idea of programmatic attribution is essential. Even something simpler like Stackaid, which tries to map out the supply chain of your code and make it easy for companies to fund the specific dependencies they use, is a step in the right direction. The problem is that right now, funding open source is seen as a form of charity. It needs to be seen as a business expense, like paying for electricity or server space. If you are a commercial AI lab and your entire stack is built on LangChain or AutoGPT or whatever the next big framework is, you should be paying a percentage of your revenue into a pool for those contributors as a standard part of your operating model.
Herman
But how do we get there without government intervention? Because right now, the incentives are all skewed toward the labs taking as much as possible for as little as possible. From a conservative, pro-business perspective, you want companies to be able to innovate and profit. But you also want a fair market where intellectual property, even if it is shared, is respected. If the big players are essentially strip-mining the community, they are eventually going to kill the ecosystem that feeds them. It is bad for the long-term health of American AI leadership if we stifle the grassroots innovation that actually drives the field forward. We are trading our seed corn for a slightly better quarterly earnings report.
Corn
You are absolutely right. A healthy market requires competition and a clear path for new entrants. If the barrier to entry is not just compute, but also having access to a private data lake of everyone else's experimental ideas, then the monopoly of the big labs becomes unbreakable. That is not a free market; that is digital feudalism. The contributor is the peasant working the land, and the lab is the lord who takes the harvest and sells it back to the peasant at a premium. This is why the "Private-First" shift is so significant. It is the peasants starting to build fences.
Herman
So, let's talk about what a fair bargain actually looks like. If you are a developer today, and you have a great idea for an AI tool, what should you do? Do you go private-first? We are seeing that more and more. People build their core logic in a private repo, get it to a certain level of maturity, maybe even file for a patent or build a small SaaS around it, and then only open source the peripheral parts. It is a defensive posture. It is about protecting your IP until you have the "execution" part ready to compete with the big labs.
Corn
It is a smart move, honestly. The days of putting your crown jewels on GitHub with an MIT license and hoping for the best are probably over for solo founders. You have to be strategic. You can use the open source components to build a community and get feedback, but you keep the secret sauce, the specific fine-tuning data or the proprietary orchestration logic, under your own control. This is the private-first shift that the Octoverse report is highlighting. It is not that people are stopping their innovation; they are just stopping their public sharing. They are realizing that in the age of AI, your code is not just a tool; it is a blueprint for your own replacement.
Herman
It is a tragedy for science, though! Think about how much faster we have moved in AI because everyone was sharing their papers and their code on ArXiv and GitHub. If everyone goes dark, the pace of innovation for the whole world slows down. We end up with five or six different labs all solving the same problem in isolation, wasting massive amounts of compute and human effort because they cannot build on each other's breakthroughs. We are moving from a collaborative era to a fragmented one.
Corn
It is the tragedy of the commons, but in reverse. Usually, the tragedy is that a shared resource is overused and depleted. Here, the resource—the collective knowledge—is being withheld because the contributors feel exploited. To fix it, we need a new social contract for open source. One that acknowledges that in the age of AI, code is more than just instructions for a computer; it is training data for a competitor. We need a "Fair Trade" movement for software.
Herman
I think we also need to talk about the role of the user in this. As a customer, when you are choosing which AI tools to use, are you looking at how they treat the developers they rely on? Probably not. You are looking at the price and the performance. But maybe we need a kind of fair trade label for AI. A certification that says this lab contributes X percent of its revenue back to the open source projects it uses, or it provides transparent attribution for the architectural patterns it implements. If we can make "ethical sourcing" a competitive advantage for the labs, the market might start to correct itself.
Corn
That would be a powerful signal. And it fits into a broader trend of transparency. If a lab claims their new feature is a breakthrough, they should be able to show the lineage of that idea. If it looks exactly like a project that was trending on GitHub three months ago, the community should be able to call that out and demand recognition. We need a more robust culture of attribution, even where the law doesn't strictly require it. In academia, if you do not cite your sources, you are finished. In software, we have become too comfortable with being anonymous cogs in a giant machine.
Herman
It is interesting you mention academia, because the AI labs are full of former academics. They know the value of citation. They just seem to leave that ethic at the door when they enter the corporate world. But look at the success of projects like OpenTofu. That happened because the community decided that the brand and the mission were more important than the original company's desire to change the rules mid-game. I think we might see a "Great Forking" of the AI ecosystem as well.
Corn
What would that look like? A decentralized AI movement?
Herman
We are already seeing the beginnings of it with things like Petals or various decentralized compute networks. If the big labs are going to be parasitic, the community might just build their own infrastructure that they actually own. It is harder, it is slower, and the compute is more expensive, but the incentives are aligned. You contribute to the collective, and in return, you get access to the collective's models without having to pay a middleman who is trying to replace you. It is the ultimate expression of the open source ethos: if you can't join them, and you can't beat them, route around them.
Corn
That is an important distinction. You might need a massive lab to train a trillion-parameter base model, but you do not need a massive lab to figure out the best way to make ten small models work together to solve a complex task. That orchestration layer is where the real value is shifting, and that is exactly where developers like Daniel are being most creative. If the labs try to swallow that layer too, they are going to find themselves in a war with their own most talented users. And as we saw with the Redis and HashiCorp situations, the community can move a lot faster than a legal department when it feels backed into a corner.
Herman
And that is a war they will eventually lose, because you cannot force people to be creative for you if they feel like they are being cheated. You will end up with a brain drain. The most talented developers will either go to work for the labs directly, which is happening, or they will go completely dark and build their own private empires. Neither of those outcomes is as good for society as a thriving, open ecosystem. We are at a crossroads where we have to decide if AI is going to be a public utility or a series of private walled gardens.
Corn
So, for the listeners who are building right now, what is the takeaway? I think the first thing is to be very intentional about your licensing. Do not just default to MIT because it is easy. Look at the Business Source License, look at the Sustainable Use License, and think about what you are trying to protect. If you want to build a business, you need a license that supports a business. You need to be a steward of your own work.
Herman
And do not be afraid to keep your core innovation private until you have a plan for it. It feels counter to the old open source spirit, but in twenty twenty-six, it is just common sense. You can still participate in the community, you can still contribute to other projects, but you have to be the guardian of your own intellectual property. No one else is going to do it for you. Also, document your work publicly in a way that establishes a clear timeline. Even if the code is private, talk about the architectural patterns you are developing. Create a paper trail of your innovation.
Corn
Also, support the organizations that are fighting for a fairer ecosystem. Whether it is the Linux Foundation, or platforms like Stackaid, or even just calling out open washing when you see it. We have to maintain a high standard for what the word "open" actually means. If a company calls their model open source but keeps the training data and the weights secret, call them out on it. We covered that in depth in episode six hundred seventy, and it is more relevant now than ever. If we let the definition of "open" be co-opted, we lose the ability to advocate for real transparency.
Herman
It really is about the language. If we let the big labs redefine open source to mean "a free trial for our proprietary product," we lose the most powerful tool we have for collaborative innovation. We have to defend the definition. We also need to push for better tools for attribution. If you are a developer, look into OCTL or other emerging standards. Start asking the platforms you use how they are protecting your work from being ingested without credit.
Corn
I also think there is a role for the big labs to play if they want to be seen as good actors. They should be proactively reaching out to the developers whose work they are building on. Not just with a thank you tweet, but with actual partnerships, funding, and formal attribution. If they did that, they would find a much more cooperative and enthusiastic developer base. Instead of being viewed as a predator, they could be viewed as a platform that truly lifts everyone up. They have the capital; they just need the culture.
Herman
It is a choice they have to make. Right now, most of them are choosing the short-term profit of the parasitic relationship. But as the Octoverse data shows, that is already starting to backfire as developers pull their code into private repos. The well is drying up. If they want to keep drinking from it, they are going to have to start putting some water back in. They are realizing that a "private-first" world is a world where they have to pay for every single line of innovation, rather than getting it for free from the community.
Corn
It is a fascinating and somewhat frustrating time to be a developer. But I am still optimistic. The fact that we are even having this conversation, and that people like Daniel are noticing these patterns and pushing back, shows that the spirit of the community is still alive. It is just evolving. It is becoming more savvy, more strategic, and less willing to be exploited. We are moving from the "naive" era of open source to the "sustainable" era.
Herman
That evolution is necessary. The naive era of open source is over, but the era of sustainable, fair, and truly collaborative innovation might just be beginning. It is going to take some new legal frameworks, some new technical tools, and a lot of honest conversation, but we can get there. We need to build a system where the contributor is a partner, not just a customer.
Corn
I hope so. Because the alternative is a very cold, very private, and very centralized future for AI, and I think we all know that is not where the best ideas come from. The best ideas come from Jerusalem, from San Francisco, from Tokyo, from millions of people working together in the light, not from five boardrooms in Silicon Valley. If we lose the commons, we lose the engine of progress itself.
Herman
Well said, Corn. This is a topic we are definitely going to keep a close eye on. It touches on everything from the future of work to the nature of intellectual property in the age of machines. It is about who gets to own the future.
Corn
Definitely. And if you are listening and you have thoughts on this, or if you have had your own logic ingested by a big lab, we want to hear about it. You can get in touch through the contact form at myweirdprompts dot com. We are always curious to hear what is happening on the ground. Are you moving your repos to private? Are you experimenting with new licenses? Let us know.
Herman
And while you are there, you can check out our full archive of over eleven hundred episodes. If you are interested in the legal side of this, episode six hundred seventy-seven on navigating AI and open source licenses is a great companion to today's discussion. It goes deep into the specific language you should be looking for in a modern license.
Corn
Yeah, that one really dives into the nitty-gritty of the green checkmark and what it actually means for your project. And hey, if you have been enjoying the show, a quick review on your podcast app or a rating on Spotify really helps us out. It helps more people find the show and join the conversation. It keeps us independent and able to have these kinds of deep dives.
Herman
It genuinely makes a difference. You can also find us on Telegram if you search for My Weird Prompts. We post there every time a new episode drops, so you will never miss a deep dive. We also share some of the data and reports we talk about on the show, like that Octoverse report.
Corn
Thanks for joining us today. This has been a really important one. It is easy to get caught up in the hype of the latest model release, but the structural health of the ecosystem that creates those models is what really matters in the long run. We have to protect the people who are actually doing the thinking.
Herman
We have to look past the surface level. Alright, I think that covers it for today. Thanks to Daniel for the prompt and for his work in the open source trenches. Keep fighting the good fight, Daniel.
Corn
Definitely. Stay curious, keep building, and maybe keep those crown jewels in a private repo for just a little bit longer while we figure this out. There is no shame in protecting your work.
Herman
Sound advice. This has been My Weird Prompts. We will talk to you in the next one.
Corn
See you then.
Herman
One more thing, Corn, before we go. I was thinking about the HashiCorp situation again. It is really the blueprint, isn't it? It showed that even if a company owns the trademark and the original repo, the community owns the momentum. If the community moves, the value moves with them.
Corn
That is exactly it. Momentum is the one thing you cannot buy with compute. You can build a bigger model, but you cannot buy the thousands of developers who are building tools around it, writing documentation, and helping each other in the forums. That human energy is the real value. The labs are starting to realize that they can't just automate away the community.
Herman
And when you break the trust of that community, that momentum doesn't just stop, it turns against you. It is a powerful lesson that I hope the AI labs are paying attention to. You can't build a sustainable business on a foundation of resentment.
Corn
I hope so too. It is a lesson that has been learned the hard way many times in the history of software. You would think by now it would be part of the standard business curriculum. But I guess some things have to be relearned every generation.
Herman
You would think. But greed and the pressure for quarterly growth have a way of making people forget the basics. They focus on the harvest and forget to tend the soil.
Corn
Well, that is why we are here. To remind them. And to remind the developers that they have more power than they think. You are not just a user; you are the architect.
Herman
Alright, now we are really done. Thanks everyone for listening.
Corn
Take care.
Herman
Goodbye.
Corn
Actually, Herman, I just thought of one more thing. We should probably mention the role of open source in national security and AI safety. If everything goes private, it becomes much harder for independent researchers to audit these models for bias or safety risks. We are creating a world of black boxes.
Herman
That is a huge point. Transparency isn't just about fairness; it is about accountability. If the only people who can see how the models work are the ones profiting from them, we have a massive oversight problem. We are essentially trusting these corporations to self-regulate on some of the most powerful technology ever created.
Corn
It is the black box problem, but on a societal scale. We are delegating more and more decisions—from medical diagnoses to legal advice—to these models, and if the logic behind them is a corporate secret, we are essentially flying blind. We need open source as a safety mechanism.
Herman
It is another reason why the push for open weights and open architectures is so critical. It is not just about helping developers build better apps; it is about ensuring that the technology we are all becoming dependent on is safe and aligned with our values. Open source is the ultimate peer-review system.
Corn
From a pro-Israel and pro-American perspective, we want the most robust and secure AI possible. And history shows that the most secure systems are the ones that have been subjected to public scrutiny, not the ones hidden behind a wall of secrecy. Sunlight is the best disinfectant, even for neural networks.
Herman
Obscurity is not security. We have known that in cryptography for decades. It is time we applied that same logic to AI. If we want safe AI, we need open AI—and I mean that in the literal sense, not the corporate name sense.
Corn
Right. Okay, now I am satisfied. We have covered the market, the ethics, the technical side, and the safety side. It is a complex issue, but it is one we can't afford to ignore.
Herman
It is a lot to chew on. But that is what we do here. We take the weird prompts and we find the deep truths.
Corn
That is the Poppleberry way. Alright, for real this time, thanks for listening.
Herman
Until next time!
Corn
Bye.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.