You know, Herman, I was looking at some statistics the other day that really put the scale of our industry into perspective. We have something like nine thousand different programming languages currently in existence. Nine thousand. And yet, if you look at the actual production code running the world right now, about eighty or ninety percent of it is written in fewer than twenty languages. It is this incredible paradox. We have built this massive Tower of Babel, but we all seem to huddle in the same few rooms at the base of it.
It is a fascinating dynamic, Corn. Herman Poppleberry here, by the way, for anyone joining us for the first time. And you are right, that paradox is the heart of so much frustration and innovation in software engineering. Our housemate Daniel sent us a prompt earlier today asking about exactly this. He wanted to know why we need so many languages and whether we are finally entering a period of consolidation, or if artificial intelligence is going to make the whole thing even more chaotic.
It is a great question from Daniel because it touches on the fundamental way we interact with machines. When you think about it, a programming language is just a bridge. It is a way to translate human intent into something a silicon chip can execute. But over the last seventy years, we have built thousands of different bridges. Some are made of rope, some are made of steel, some are hyper-specialized for crossing one specific river, and others are massive eight-lane highways. The rise of L L M-based coding agents makes this whole language-choice debate feel simultaneously more critical and, strangely, almost obsolete.
And the reason we keep building new ones is usually because we hit a wall with the old ones. Think about the rise of languages like Mojo or Carbon or Gleam. Nobody wakes up and says, I want to spend five years of my life designing a new syntax just for the fun of it. Well, maybe a few people do, but the ones that actually gain traction do so because they solve a very specific pain point. Usually, it is a tension between human readability and machine-optimized execution. We want it to be easy for us to write, but we also want it to run at the speed of light without burning out the processor.
Right, and that tension is where things get interesting. We have this cycle of abstraction. We build a high-level language like Python to make developers more productive, but then we realize it is too slow for certain tasks, so we build a lower-level language like Rust to handle the performance-critical parts. And suddenly, your tech stack is a hybrid. You are managing multiple languages, multiple compilers, and a whole lot of complexity. It makes me wonder if we are creating these languages for developers, or if we are creating them for the compilers that agents will eventually write.
That is the big shift we are seeing right now, isn't it? For decades, the primary customer of a programming language was a human being with a keyboard. The syntax had to be intuitive. The error messages had to be helpful. The documentation had to be readable. But as we move into this agentic era, where artificial intelligence is doing a massive chunk of the heavy lifting, the primary customer is changing. If an agent is writing the code, does it care if the syntax is elegant? Does it care if the language is hard to learn?
Probably not. An L L M treats syntax as a token-prediction problem. To an A I, the difference between writing a complex C plus plus template and a simple Python script is negligible in terms of effort. It is all just patterns in the data. This really lowers the barrier to entry for niche languages. In the past, if you wanted to use a specialized language for a specific task, you had to spend months learning it. Now, you can just tell an agent what you need, and it can generate the code in that niche language instantly.
That is a point we should really dig into. Because traditionally, what kept the big languages dominant was something I call ecosystem gravity. We talked about this back in episode one thousand twenty-one when we discussed why Python became the accidental king of artificial intelligence. It was not because Python is the most efficient language—it definitely isn't—but because it had the libraries, the community, and the massive amount of existing code that everyone could build upon. It was a network effect. Popularity rankings like the T I O B E Index tell the same story as the GitHub Octoverse data from the last few years: a handful of languages dominate, with the top ten consistently accounting for over eighty percent of all commits. That is a massive amount of inertia.
Right, and if you tried to switch to a better language, the cost was too high. You would lose access to all those libraries. You would have to rewrite everything from scratch. But if an A I can handle the translation, if it can bridge the gap between different ecosystems, does that gravity start to weaken?
I think it does. We are seeing data from the January two thousand twenty-six updates to the major coding agents showing a forty percent increase in what we call polyglot code generation. These agents are not just sticking to one language anymore. They are mixing and matching within a single microservice or a single project. They might write the data processing layer in Rust for speed, the A P I in Go for concurrency, and the glue logic in Python. To a human developer, managing that would be a nightmare. To an agent, it is just another Tuesday.
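The glue-logic pattern Herman describes can be sketched in a few lines. This is a minimal, hypothetical example: in a real polyglot stack the worker would be a compiled Rust or Go binary, but here a tiny inline Python one-liner stands in for it so the sketch is self-contained. The point is the boundary itself: JSON over stdin and stdout does not care what language sits on the other side of the pipe.

```python
import json
import subprocess
import sys

# Stand-in worker. In a polyglot architecture this string would be replaced
# by the path to a compiled binary; the glue code below would not change.
WORKER_SOURCE = (
    "import json, sys; "
    "nums = json.load(sys.stdin); "
    "json.dump({'sum': sum(nums), 'count': len(nums)}, sys.stdout)"
)

def run_worker(numbers):
    """Send a JSON payload to the worker process and parse its JSON reply."""
    result = subprocess.run(
        [sys.executable, "-c", WORKER_SOURCE],  # swap in ["./worker"] for a real binary
        input=json.dumps(numbers),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)

print(run_worker([1, 2, 3, 4]))  # → {'sum': 10, 'count': 4}
```

Because the contract is just "JSON in, JSON out," an agent can rewrite the worker in a faster language without the glue layer ever noticing.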
It is like the cost of switching languages is dropping toward zero. And if the cost is zero, why wouldn't you use the absolute best tool for every tiny sub-task? This brings us back to Daniel's question about consolidation. You would think that with A I, we would all just settle on one perfect language. But it sounds like you are saying the opposite might happen.
I am leaning toward the explosion theory, Corn. Think about the history of engineering. Whenever a new tool makes it easier to create something, we do not see fewer types of things; we see more. When desktop publishing became a thing, we did not end up with one font; we ended up with millions. When the internet made it easy to publish content, we did not end up with one newspaper; we ended up with the entire long tail of the web. I think we are going to see a massive proliferation of hyper-specialized, domain-specific languages—D S Ls—that are designed to be written and read by machines, not humans.
That is a wild thought. So, instead of a lingua franca where we all speak one language, we might be heading toward a world where the language is just an intermediate representation that humans never even see. We give the intent in natural language, and the A I compiles it into some highly optimized, machine-parseable structure that is technically a programming language, but one that no human would ever want to debug.
And that shift changes the definition of readability. For a long time, the holy grail of programming was making code look like English. Think about C O B O L or S Q L. The idea was that even a non-programmer could read it and understand what was happening. But in an agentic future, machine-parseability and debuggability for the A I are becoming more important than human-friendly syntax. If the A I can explain the code to you in plain English, why does the underlying source code need to be readable by you?
Well, the counter-argument there is the black box problem. As conservative thinkers, we generally value transparency and individual agency. If we move to a world where our entire infrastructure is written in machine-optimized gibberish that only an A I can understand, aren't we surrendering a massive amount of control? If something goes wrong, and a human can't look at the source code and say, Aha, here is the logic error, we are in a very vulnerable position.
That is a very valid concern. And it is why I think we will see a split. For critical infrastructure, for things that require high levels of trust and auditability, we will probably stick to highly structured, human-readable languages like Rust or even modernized versions of Java. But for the vast majority of disposable software—the internal tools, the quick scripts, the bespoke applications we talked about in episode eight hundred sixty-four—we might see this explosion of machine-generated code.
It is interesting you mention Java. Most people think of Java as this old, clunky, boilerplate-heavy language. But in a weird way, that boilerplate makes it very easy for an A I to parse and understand. There is very little ambiguity in Java compared to something like JavaScript. Maybe the very things that humans hate about certain languages are the things that make them perfect for A I agents.
You hit the nail on the head. Boilerplate is essentially metadata. It is explicit instruction. Humans hate writing it because it is tedious. But for an agent, more explicit information is usually better because it reduces the chance of hallucination. When we look at how sub-agents delegate tasks—something we explored in episode seven hundred ninety-five—we see that they thrive on clear boundaries and strict types. So, we might actually see a resurgence in strict languages, even if they aren't fun for humans to write.
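Herman's "boilerplate is metadata" point can be made concrete with a small sketch. The field names and constraints here are invented for illustration, but the idea is the one from the conversation: an explicit, validated schema gives an agent an unambiguous contract to check against, where a bare dictionary would silently accept anything.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransferRequest:
    account_id: str
    amount_cents: int  # integer cents avoid float rounding in money handling

    def __post_init__(self):
        # Explicit checks: the tedium a human resents is exactly the
        # machine-checkable constraint an agent can rely on.
        if not isinstance(self.account_id, str) or not self.account_id:
            raise ValueError("account_id must be a non-empty string")
        if not isinstance(self.amount_cents, int) or self.amount_cents <= 0:
            raise ValueError("amount_cents must be a positive integer")

ok = TransferRequest(account_id="acct-42", amount_cents=1999)
print(ok.amount_cents)  # → 1999

try:
    TransferRequest(account_id="", amount_cents=-5)
except ValueError as err:
    print(err)  # a loose dict would have let this payload straight through
```

The stricter the declared shape, the less room there is for a generated patch to drift from the intended behavior without tripping a check.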
So, we have this tension. On one hand, A I makes it easy to use thousands of different niche languages. On the other hand, it might favor the most boring and explicit languages for the sake of reliability. Which force wins out? Does the industry consolidate around a few machine-friendly giants, or do we see a million tiny D S Ls?
I think it is a matter of layers. At the very bottom, near the hardware, we will consolidate. We are already seeing that with the dominance of L L V M, a name that originally stood for Low Level Virtual Machine, though the project outgrew that expansion long ago. Almost every new language these days, from Swift to Rust to Mojo, compiles down to L L V M intermediate representation. That is already a kind of lingua franca. It is the language that the hardware builders and the compiler writers speak.
And then above that?
Above that, in the application layer, I think it is going to be a total free-for-all. If I am building a specific type of financial model, why use a general-purpose language like Python? Why not use a language specifically designed for temporal logic and financial constraints that an A I can generate for me on the fly? We are moving away from the era of learning a language and into the era of architecting a solution. The language becomes a secondary detail, like the choice of a specific brand of screws when building a house. You care that the house stands up; you don't necessarily care about the serial number on the screws.
That is a big shift in mindset for developers. For decades, your identity was often tied to your language. You were a Java developer or a Ruby on Rails person. If that identity disappears, what replaces it? Is it just Prompt Engineer? I hope not.
I think it becomes Systems Architect. It is about understanding the flow of data, the security boundaries, and the business logic. The A I handles the how—the syntax and the implementation—but the human still has to define the what and the why. And honestly, that is a higher-level, more valuable skill. It is actually a very pro-human shift if you look at it the right way. We are being freed from the drudgery of syntax errors and semicolon hunting so we can focus on the actual logic of the problems we are trying to solve.
I like that perspective. But let's talk about the Cost of Abstraction for a second. Every time we add a new layer, we lose a bit of performance and we add a bit of complexity. If we have agents writing polyglot code across five different languages, how do we handle the technical debt? Imagine a company five years from now trying to maintain a codebase where every microservice is written in a different, A I-optimized niche language. That sounds like a maintenance nightmare.
It is a nightmare if you use twentieth-century maintenance techniques for twenty-first-century code. If the A I wrote the code, the A I should be the one maintaining it. This is the Deprecation Trap we talked about in episode eight hundred eight. The only way to manage that level of complexity is to have an automated layer that handles the updates, the security patches, and the refactoring. We are moving toward living code that evolves constantly, rather than static codebases that sit in a repository for ten years.
That requires a huge amount of trust in the agents. And it also assumes that the agents will always be there and always be affordable. If you build your entire company on a polyglot, A I-maintained mess, and then the cost of running those agents triples, or the provider goes out of business, you are left with a pile of code that no human on earth can understand. It is the ultimate vendor lock-in.
That is where the conservative principle of decentralization and local control comes in. We should not be relying on a single, centralized God-A I to manage our code. We need local, open-source models that run on our own hardware. We need to own the tools of production. If you have your own agentic stack, you aren't at the mercy of a big tech company's pricing whims. You have the bespoke tools we discussed in episode eight hundred sixty-four.
And that is why the lingua franca might not be a programming language at all, but rather the standardized A P Is and protocols that allow these different agents and languages to talk to each other. If the interface is solid, the implementation details—the language inside the box—matter a lot less.
Right. Focus on the boundaries, not the internals. That is the practical takeaway for anyone listening who is worried about which language to learn or which stack to choose for their next project. Don't worry about whether Python will be around in ten years. Worry about whether your architecture is modular enough that an A I can swap out the Python parts for something better when the time comes.
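The "focus on the boundaries, not the internals" takeaway has a direct expression in code. This is a minimal sketch, with an invented `Summarizer` interface: callers depend only on a small protocol, so the implementation behind it, Python today or a Rust-backed extension tomorrow, can be swapped without touching them.

```python
from typing import Protocol

class Summarizer(Protocol):
    """The boundary: any implementation with this method satisfies callers."""
    def summarize(self, text: str) -> str: ...

class NaiveSummarizer:
    """Trivial stand-in implementation: keeps only the first sentence."""
    def summarize(self, text: str) -> str:
        return text.split(".")[0].strip() + "."

def publish_digest(summarizer: Summarizer, article: str) -> str:
    # This caller never learns which implementation it was handed.
    return "DIGEST: " + summarizer.summarize(article)

print(publish_digest(NaiveSummarizer(), "Ecosystems win. Syntax is fungible."))
# → DIGEST: Ecosystems win.
```

If the interface stays fixed, an agent can regenerate everything behind it in a different language and every caller keeps working unchanged, which is exactly the modularity Corn is recommending.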
It is funny, we started by talking about the Tower of Babel, which in the Bible was a story about human hubris leading to confusion. But in this version, the confusion of languages isn't a punishment; it is an opportunity. We are using A I to translate the confusion into a new kind of order.
That is a great way to put it. The Tower isn't one single language; it is the collective body of human knowledge and intent. The fact that we have nine thousand ways to express that intent is a testament to our creativity and the diversity of our needs. A I just gives us the universal translator we have always wanted.
So, to Daniel's question: are we consolidating? At the bottom, yes. At the top, absolutely not. We are entering an era of unprecedented linguistic diversity in code, but with a layer of A I that makes that diversity manageable.
And I think that is a good thing for innovation. It means we aren't stuck with the technical debt of the past. We can experiment with new ideas, new paradigms, and new optimizations without the crushing weight of having to retrain millions of developers. If a new language like Mojo can offer a hundred-times speedup for A I workloads, we can adopt it almost overnight because the agents will do the heavy lifting of the transition.
It also means that the barrier to entry for being a developer is changing. It is no longer about memorizing syntax; it is about clear thinking. If you can't describe a process logically in your own native language, you won't be able to code it with an A I either. The cognitive load, as we discussed in episode nine hundred thirty-seven, is shifting from how do I say this to what am I actually trying to say.
Which is a much harder problem, frankly. Syntax is easy; logic is hard. We might find that we have fewer coders but we need many more logicians and architects. People who can see the big picture and understand the second-order effects of the systems they are building.
And that brings us back to the geopolitical angle. The countries and cultures that prioritize rigorous, logical thinking and a deep understanding of systems are the ones that will thrive in this agentic era. This isn't just about who has the best programmers; it is about who has the best architects. America has a massive lead here because of our culture of innovation and our dominance in the A I space, but we can't be complacent.
We need to be teaching our kids how to think in systems, how to evaluate evidence, and how to direct these powerful tools. If you just treat A I as a magic box that gives you code, you are going to end up with a mess. You have to be the one in the driver's seat.
Well, I think we have covered a lot of ground here. From the nine thousand languages currently in existence to the rise of agent-driven polyglot architectures. It is a lot to take in, but it is an incredibly exciting time to be in this field.
It really is. And before we wrap up, I just want to say, if you are finding these deep dives helpful, please do leave us a review on Spotify or whatever podcast app you use. It genuinely helps the show grow and helps other people find these conversations. We have been doing this for over a thousand episodes now, and the community feedback is what keeps us going.
Yeah, we really appreciate it. And remember, you can find all our past episodes, including the ones we mentioned today about Python, sub-agent delegation, and the death of S a a S, over at myweirdprompts dot com. There is a search bar there where you can look up any topic we have covered.
Thanks again to Daniel for the prompt. It really got us thinking about the future of our craft.
Definitely. So, final thoughts? The best language is the one that gets the agent to the desired output with the fewest hallucinations and the most reliability. Don't get too attached to your syntax; get attached to your logic.
Well said, Corn.
Alright, that's it for this episode of My Weird Prompts. I'm Corn.
And I'm Herman Poppleberry. Thanks for listening.
We'll talk to you next time.
See ya.
You know, I just realized I didn't mention my sloth-like nature once this episode.
And I didn't mention being a donkey. We're getting better at this, Corn.
Or maybe we're just getting more focused.
Either way, it was a good one.
Agreed. Goodbye, everyone!
Bye!
One last thing, Herman. Do you think we'll ever reach a point where the A I just refuses to write in a certain language because it thinks it's too ugly? Like, sorry Corn, I'm not writing any more C O B O L, it's beneath my dignity.
Honestly, with the way these models are being trained on human preference, I wouldn't be surprised. We might accidentally bake our own aesthetic biases into the very machines that are supposed to be objective.
That is a terrifying and hilarious thought. The A I has decided that P H P is a human rights violation.
Let's save that for episode two thousand.
Deal. Bye for real this time!
Bye!
Actually, wait, I just thought of one more thing about the T I O B E index...
Corn, the credits are rolling!
Right, right. Next time!
Next time.
Okay, really leaving now.
Good.
Bye.
Bye.
So, about that T I O B E index...
Corn!
Sorry! I'm going.
Laughs. See you at home, brother.
See you at home. And tell Daniel we liked the prompt.
Will do.
Fading out. I still think Python is going to outlast us all...
Fading out. We'll see, Corn. We'll see.
It's the ecosystem, Herman! The ecosystem!
Distantly. Go to sleep, Corn!
Very distantly. Sloths don't sleep, they just move slowly...
Faintly. That's not even true...
Whispering. It's true in my heart...
Silence.
Whispering. My weird prompts dot com...
Laughing in the distance. Stop it!
Silence.