Episode #397

The Great Hollowing: Is AI Killing the Career Ladder?

As AI moves from talking to acting, entry-level roles are vanishing. Corn and Herman discuss the "hollowing out" of the global workforce.

Episode Details
Duration: 28:26
Pipeline: V4
TTS Engine: LLM

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

As January 2026 draws to a close, the conversation surrounding artificial intelligence has shifted from speculative wonder to pragmatic anxiety. In the latest episode of My Weird Prompts, hosts Corn and Herman Poppleberry sit down to analyze the sobering data emerging from the previous year—a year that saw the theoretical threat of AI-driven job loss become a stark reality for thousands of workers across the globe.

The Technical Evolution: From Chatbots to Agents

Herman Poppleberry opens the discussion by explaining why the automation we are seeing in 2026 is fundamentally different from the "clunky" chatbots of the early 2020s. The shift, he explains, lies in the move toward CUA (Computer-Using Agent) architecture: a loop of perception, reasoning, and execution. While older systems relied on rigid decision trees, modern Large Language Models (LLMs) use semantic understanding to grasp human intent.
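
To make that loop concrete, here is a minimal, self-contained sketch of a perception-reasoning-execution cycle. Everything in it is a toy stand-in: the "reasoning" step is hard-coded where a real agent would call an LLM, and none of the function names correspond to any vendor's actual API.

```python
# Toy perception-reasoning-execution loop. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "type" or "done"
    payload: str = ""

def perceive(state: dict) -> str:
    # Perception: a real agent would take a screenshot or read a DOM here.
    return f"form progress: {state['filled']}/{state['fields']} fields"

def reason(goal: str, observation: str, state: dict) -> Action:
    # Reasoning: a real agent would send the goal + observation to an LLM.
    if state["filled"] < state["fields"]:
        return Action("type", f"fill field {state['filled'] + 1}")
    return Action("done", f"{goal}: form submitted")

def execute(action: Action, state: dict) -> None:
    # Execution: a real agent would drive a browser or an internal API.
    if action.kind == "type":
        state["filled"] += 1

def run_agent(goal: str) -> str:
    state = {"fields": 3, "filled": 0}
    for _ in range(10):                      # step budget
        action = reason(goal, perceive(state), state)
        if action.kind == "done":
            return action.payload
        execute(action, state)
    return "step budget exhausted"

print(run_agent("update billing address"))
# -> update billing address: form submitted
```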

A key development highlighted in the episode is the rise of "agentic AI," specifically citing OpenAI’s "Operator." Unlike traditional AI that merely provides information, agentic systems have the "agency" to perform tasks. They can navigate legacy databases, process payments, and update calendars autonomously. Herman notes that this multimodal capability allows AI to "see" screens and understand visual layouts, making back-office administrative roles—such as insurance adjusters and medical billing specialists—increasingly vulnerable.
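
Mechanically, the difference between providing information and having "agency" usually comes down to tool dispatch: the model emits a structured tool call, and a thin runtime executes it. A hedged sketch of that pattern follows, with invented tool names and a generic call format rather than any specific vendor's schema.

```python
# Illustrative tool-dispatch layer; the tools and call format are invented.
def update_calendar(event: str, date: str) -> str:
    return f"calendar: '{event}' booked for {date}"

def process_payment(amount: float, account: str) -> str:
    return f"payment: {amount:.2f} charged to {account}"

TOOLS = {
    "update_calendar": update_calendar,
    "process_payment": process_payment,
}

def dispatch(tool_call: dict) -> str:
    # A chatbot would stop at describing this step; an agent executes it.
    return TOOLS[tool_call["tool"]](**tool_call["args"])

# Pretend the model emitted this structured call after reading the screen:
call = {"tool": "update_calendar",
        "args": {"event": "rebooked flight", "date": "2026-02-03"}}
print(dispatch(call))  # calendar: 'rebooked flight' booked for 2026-02-03
```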

Hollowing Out the Middle

One of the most profound insights shared by Corn and Herman is the concept of "hollowing out the middle." They argue that by automating "Tier One" tasks—the entry-level grunt work in coding, law, and customer service—the industry is inadvertently destroying the career ladder.

Corn points out a terrifying paradox: if AI handles all the junior-level work, where will the senior experts of 2035 come from? Herman supports this with data from the UK digital sector, which saw a 44% drop in 16-to-24-year-olds in computer programming roles in 2024. By "burning the bottom rungs" of the ladder, companies are creating an experience gap that may leave them without a pipeline of seasoned leadership in the future.

The Klarna Case and the Limits of Automation

The brothers revisit the Klarna case study, which by mid-2025 showed AI doing the work of 800 full-time agents. However, the episode notes a crucial "twist": some companies have begun rehiring humans after realizing that an "all-in" AI approach leads to a decline in service quality for complex, high-nuance cases.

While AI can efficiently manage 80% of standard queries, the final 20%—the "human messiness"—still requires a person. This leads to a discussion on the "human premium." Herman predicts a future where "Human-in-the-Loop" certification becomes a luxury branding feature, while the average consumer is left to navigate an entirely automated, and often "soulless," service landscape.
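
That 80/20 split maps onto a simple confidence-gated routing pattern: the model resolves routine queries and escalates anything uncertain or high-stakes to a person. The toy illustration below uses an invented threshold and hard-coded scores; a production system would take them from the model or a separate classifier.

```python
# Toy human-in-the-loop router: AI answers the routine 80%, humans get
# the messy 20%. Threshold and confidence scores are invented.
ESCALATION_THRESHOLD = 0.8

def route(query: str, ai_confidence: float, high_stakes: bool) -> str:
    if high_stakes or ai_confidence < ESCALATION_THRESHOLD:
        return f"HUMAN QUEUE <- {query}"
    return f"AI RESOLVED <- {query}"

tickets = [
    ("reset my password", 0.97, False),
    ("refund for a cancelled duplicate order", 0.62, False),
    ("my late father's account is still being billed", 0.91, True),
]
for query, confidence, high_stakes in tickets:
    print(route(query, confidence, high_stakes))
```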

Responsibility and the "Automation Tax"

The conversation turns toward the ethical obligations of the AI industry. With companies saving billions in payroll taxes and salaries, Corn asks if there should be a "game plan" for the displaced. They discuss the "One Big Beautiful Bill Act" (OBBBA) of 2025 and the difficulties of implementing an "automation tax."

Defining what constitutes a "job lost to AI" is notoriously difficult. If a company simply chooses not to fill a vacancy because its tools are more efficient, is that a lost job? Despite these complexities, Herman suggests that Universal Basic Services or a more robust social safety net may soon become a functional necessity for maintaining social stability as the speed of displacement outpaces the speed of human adaptation.
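
To see why that definition matters in practice, consider a deliberately naive levy: tax a fraction of the payroll an employer demonstrably eliminated through automation. All figures below are invented, and the hard part—deciding what counts as "eliminated"—is exactly the slipperiness described above.

```python
# Deliberately naive automation-levy arithmetic. All figures invented.
def automation_levy(eliminated_headcount: int,
                    avg_fully_loaded_salary: float,
                    levy_rate: float) -> float:
    # Tax a fraction of payroll demonstrably eliminated by automation.
    return eliminated_headcount * avg_fully_loaded_salary * levy_rate

# 10,000 roles at $90k fully loaded, with a 10% levy into a retraining fund:
print(f"${automation_levy(10_000, 90_000.0, 0.10):,.0f}")  # $90,000,000

# The catch: if those roles were simply never filled, what number goes
# into eliminated_headcount? That is the definitional problem above.
```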

Survival Strategy: Moving "Up the Stack"

For those currently working in at-risk industries, Herman offers a strategy: move "up the stack." He cites an MIT report finding that 95% of generative AI integrations fail to produce a financial return when they lack human nuance.

The takeaway for 2026 is clear: jobs that amount to following a fixed ten-step procedure are in the crosshairs. However, roles that require high-stakes empathy, complex physical manipulation, and cross-disciplinary strategy remain the stronghold of human workers. As AI gets better at "simulating" patience and soft language, the value of genuine human connection and strategic vision has never been higher.

In conclusion, Corn and Herman paint a picture of a world in transition. While the World Economic Forum predicts a net gain in jobs by 2030, the barrier to entry for these new "AI Orchestrator" roles is significantly higher than the roles being replaced. The "rocky" transition period of 2026 is just the beginning of a fundamental shift in how we define work, expertise, and human value in the age of the agent.

Downloads

Episode Audio: the full episode as an MP3 file.
Transcript (TXT): plain text transcript.
Transcript (PDF): formatted PDF with styling.

Episode #397: The Great Hollowing: Is AI Killing the Career Ladder?

Corn
Hey everyone, welcome back to My Weird Prompts. I am Corn, and I am sitting here in our living room in Jerusalem with my brother. It is a bit of a rainy afternoon outside—typical for late January—but we have a very heavy topic to get into today. It is January thirty-first, twenty-twenty-six, and the data from last year is finally starting to paint a very clear, and somewhat sobering, picture.
Herman
Herman Poppleberry at your service. Yeah, it is a bit gray out there, which I suppose fits the mood of today’s prompt. Our housemate Daniel sent this one over to us this morning, and it is something that I think has been the quiet, or maybe not so quiet, anxiety underlying every conversation we have had about artificial intelligence over the last few years.
Corn
Exactly. We spend so much time talking about the cool things AI can do, the creative potential, the engineering breakthroughs, but Daniel is pushing us to look at the pragmatic, fundamental cost. Specifically, job loss. We are seeing it happen in real-time, especially in Tier One customer support, and he is asking what the game plan is. Are we just going to let this happen? Does the industry have an obligation to fix what it is breaking?
Herman
It is the big question. We are in early twenty-twenty-six now, and the landscape has shifted so much just in the last twelve to eighteen months. It is no longer a theoretical "what if" scenario. It is a "what now" scenario. According to the latest reports from early twenty-twenty-six, nearly forty percent of companies adopting AI are choosing to automate roles entirely rather than just augmenting human work. We are seeing roughly five hundred people a day losing their jobs specifically to AI integration in the tech sector alone.
Corn
Right. So let’s start with that customer support angle because that is where the blood is already on the floor, so to speak. Daniel mentioned how those old rule-based chatbots were terrible, and everyone hated them, but the new ones are different. Herman, from a technical perspective, what changed that made these things suddenly viable enough to actually replace thousands of human workers?
Herman
It is all about moving from rigid decision trees to semantic understanding and what we now call the C-U-A, or Computer-Using Agent, architecture—a loop of perception, reasoning, and execution. In the old days, a chatbot was basically a complicated version of "press one for sales." If you did not use the exact keyword the programmer expected, the bot broke. But with the large language models of twenty-twenty-five and twenty-twenty-six, the bot actually understands intent. It can handle nuances, it can stay on track through a long conversation, and most importantly, it can access a company's entire knowledge base in seconds.
Corn
And that is the difference, right? A human agent has to search a wiki or ask a supervisor. The AI just has the entire manual in its active memory.
Herman
Precisely. We saw this really start to bite with the Klarna case study. By mid-twenty-twenty-five, they updated their figures to say their AI assistant was doing the work of eight hundred full-time agents. But here is the interesting twist we saw late last year: some companies, including Klarna, actually had to start rehiring or redeploying humans because the "all-in" AI approach led to a drop in service quality for complex cases. It turns out that while AI can handle the eighty percent of easy stuff, that final twenty percent of human messiness still needs a person.
Corn
But that leads to Daniel's point about the "unintended consequences." If you are a twenty-two-year-old looking for an entry-level job, or if you live in a region where outsourced customer service is a massive part of the local economy, that "efficiency" is a catastrophe. What industries beyond customer support are currently in the crosshairs as we sit here in early twenty-twenty-six?
Herman
Well, the "Tier One" phenomenon is spreading. It is not just support anymore. It is Tier One anything. Think about junior coding roles. In twenty-twenty-four, the U-K digital sector saw a massive forty-four percent drop in the number of sixteen to twenty-four-year-olds in computer programming roles. We are also seeing a massive displacement in the legal industry. About forty-four percent of general counsel across twelve countries are now using AI for Tier One tasks like contract review and discovery. The "middle" is being hollowed out.
Corn
I think that "hollowing out the middle" is the scariest part. Because if you take away the entry-level jobs, how does anyone ever become an expert? If the AI is doing all the junior-level coding or support, where do the senior engineers and managers of twenty-thirty-five come from? They won't have had those formative years of doing the "grunt work" where you actually learn how things break.
Herman
That is a brilliant point, Corn. We are essentially burning the bottom rungs of the career ladder and then wondering why nobody is reaching the top. It creates this experience gap. And as we move further into twenty-twenty-six, we are seeing the rise of what Daniel mentioned: multimodal and agentic AI. This is where things get really spicy. We now have tools like OpenAI's "Operator," which has been out for a year now and is fully integrated into most workflows.
Corn
Explain that a bit. We have used those terms before, but for anyone who is just tuning in, what does it mean for a job to be "at risk" from an agentic AI versus just a chatbot?
Herman
So, a chatbot talks to you. An agent acts for you. Up until recently, an AI could tell you how to change your flight, but it couldn't necessarily log into the airline's internal legacy database, find a seat, process the payment, and send you the new ticket while also updating your calendar. Agentic AI, like Operator, can do that. It has "agency." It can use tools. It can navigate a computer screen just like a human does.
Corn
So it is moving from "giving advice" to "performing the task."
Herman
Exactly. And multimodal means it isn't just processing text. It can see your screen, it can hear the tone of your voice, it can look at a spreadsheet and understand the visual layout. When you combine those two, you start looking at back-office administrative roles. Paralegals, insurance adjusters, medical billing specialists. These are jobs that require a lot of "looking at things and moving data from point A to point B." Those are incredibly vulnerable right now.
Corn
It feels like we are talking about a significant percentage of the global workforce. So let’s tackle Daniel's question about obligation. Does the AI industry have a responsibility here? Or is this just "creative destruction" in the capitalist sense, like the steam engine or the printing press?
Herman
I think the scale and the speed make this different. The industrial revolution took decades to play out. People had time to adapt, even if it was painful. This is happening in months. I personally believe there is a massive societal obligation, but the question is where it sits. Is it the companies like OpenAI and Google? Or is it the governments that collect taxes from these companies?
Corn
Well, if a company replaces ten thousand workers with a server farm, they are saving a fortune in payroll taxes, healthcare, and salaries. That money doesn't just disappear; it becomes profit. There has been a lot of talk about an "automation tax," right? The idea that if a robot or an AI takes a job, the company still has to pay a portion of that "saved" tax into a fund for worker retraining.
Herman
It is a popular idea, but it is incredibly hard to implement. We saw the debate around the O-B-B-B-A—the One Big Beautiful Bill Act—that passed last year. It touched on tax reform, but it really struggled to define what a "job lost to AI" actually looks like. If a company just doesn't hire for a vacancy because they have more efficient tools, is that a lost job? It is very slippery. But I think we are reaching a point where some kind of Universal Basic Services or a more robust social safety net isn't just a progressive dream, it is a functional necessity for social stability.
Corn
And that brings us to the "game plan" Daniel asked about. If we assume the jobs are going away and they aren't coming back in the same form, what do people actually do? Daniel mentioned the idea of a workforce built around "managers of AI systems." Is that realistic? Can everyone just be a "prompt engineer" or an "AI orchestrator"?
Herman
I am skeptical that it's a one-for-one swap. The World Economic Forum's twenty-twenty-five report predicted that while AI might displace ninety-two million jobs by twenty-thirty, it could create one hundred and seventy million new ones. That sounds great on paper—a net gain of seventy-eight million. But the new roles are things like "Big Data Specialist" and "AI Orchestrator." Managing AI is a much higher-skill task than the jobs being replaced. It requires a deep understanding of the system's limitations, the ability to spot hallucinations, and the strategic vision to know what the AI should be doing in the first place.
Corn
Right, so it doesn't solve the "entry-level" problem. It actually makes the barrier to entry higher. You have to be an expert just to start.
Herman
Exactly. I think the "win-win" scenario that some optimists point to is that AI will lower the cost of everything. If healthcare, legal advice, and education become nearly free because they are powered by AI, then maybe we don't need to work forty hours a week just to survive. But that transition period is going to be incredibly rocky. We are talking about millions of people whose identity and livelihood are tied to these roles.
Corn
I want to go back to the human element for a second. Daniel mentioned how it feels "disrespectful" when a company you pay a lot of money to won't let you talk to a human. There is a psychological cost to this automation too. We are being funneled into these frictionless, soulless interactions. Do you think there will be a "human-made" or "human-serviced" premium in the future? Like how people pay more for organic food or handmade furniture?
Herman
Oh, absolutely. We are already seeing the "Human-in-the-Loop" certification starting to pop up in some industries. The idea that "you will always be able to reach a person within sixty seconds" could become a luxury feature for high-end brands. But for the average consumer, for the person just trying to get their internet fixed or their bank statement clarified, they are going to be stuck with the bots.
Corn
So, what's the advice for someone listening who is in one of these "at-risk" industries? If you are a junior developer or you work in a call center, what is the twenty-twenty-six game plan?
Herman
I think the most important thing is to move "up the stack." You want to focus on the things AI is still bad at. A recent M-I-T report found that ninety-five percent of generative AI integration attempts actually fail to produce a financial return because they lack the human nuance required for complex problem-solving. AI is great at logic, data, and synthesis. It is still relatively poor at high-stakes empathy, complex physical manipulation, and truly novel cross-disciplinary strategy. If your job is "follow these ten steps to solve a problem," you are in trouble. If your job is "navigate this highly emotional, politically sensitive, unique human situation," you have more runway.
Corn
It is about the "uniquely human" traits. But even then, we see these models getting better at "simulating" empathy. They can be programmed to be patient, to use soft language, to never get frustrated. In some ways, they are "better" at customer service than a stressed-out human who has been on the phone for eight hours.
Herman
That is the uncomfortable truth. A bot doesn't have a bad day. It doesn't get annoyed when you ask the same question three times. It has infinite patience. So, the "human" advantage has to be about more than just being "nice." It has to be about accountability and genuine connection. If something goes wrong, a bot can't take responsibility. It can't feel the weight of a mistake.
Corn
Let’s talk about the "Agentic AI" risk in twenty-twenty-six specifically. We are seeing these agents start to handle things like travel planning, personal finance, and even basic project management. If I am an executive assistant or a travel agent, the walls are closing in. What is the next phase of that?
Herman
The next phase is the "Autonomous Enterprise." We are starting to see startups that are basically two founders and a thousand AI agents. They don't hire a marketing team; they deploy a marketing agent. They don't hire an accounting firm; they run an automated financial stack. This is the "scale without mass" phenomenon. It allows for incredible innovation, but it doesn't create jobs in the traditional sense. Watch the S-D-Rs in sales—Sales Development Representatives. That entire entry-level layer is vanishing because agents can research, personalize, and follow up on leads better than a human can.
Corn
This feels like a recipe for massive wealth inequality. The people who own the AI get all the rewards of the productivity, and the people who used to do the work are left behind. Daniel asked if society has an obligation. It feels like if we don't address this through policy, we are looking at a very fractured world.
Herman
It is the defining challenge of our decade. We have to figure out how to decouple "income" from "traditional labor." If the machines are doing the labor, the value they produce has to be distributed in a way that keeps society functioning. Whether that is a "robot tax," a "citizen's dividend," or just massive investment in new types of "human-centric" jobs like elder care, mental health, and community building.
Corn
You know, it is interesting you mention elder care. We always hear that "physical" jobs are safe, but even there, with the progress in robotics we have seen lately, combined with multimodal AI vision, even those "safe" jobs are starting to look a bit more vulnerable in the long term.
Herman
True, but the "human touch" in caregiving is much harder to replace than the "human touch" in a customer service chat. I think there is a hierarchy of automation. The "cognitive-routine" jobs go first. Then the "cognitive-non-routine" like law and medicine. Then the "physical-routine" like factory work. And finally, the "physical-non-routine" like nursing or plumbing.
Corn
So, if you are a plumber, you are probably fine for a while.
Herman
Exactly. If you can fix a leaky pipe in a cramped, unpredictable basement, you are safe for the foreseeable future. Robots still struggle with stairs and wet, slippery environments.
Corn
It is a strange world where the "prestigious" office jobs are more at risk than the trades. We have spent decades telling kids to go to college and get an office job to be "safe," and now that might be the most dangerous place to be.
Herman
It is a total inversion of the twentieth-century career advice. I think we need to be honest with ourselves that "retraining" isn't a magic bullet either. You can't just take a fifty-year-old customer service veteran and tell them to "become a prompt engineer" or "learn to code." It is not just about skills; it is about temperament and the time it takes to gain mastery.
Corn
So, what is the "game plan" then? If retraining isn't the whole answer, and the jobs are going away, what do we actually tell people?
Herman
I think the game plan has to be three-pronged. One: we need to embrace the efficiency to lower the cost of living for everyone. If AI makes food, energy, and housing cheaper, that is a huge win. Two: we need a massive shift in how we fund the social safety net, moving away from payroll taxes and toward capital or automation taxes. And three: we need to re-value "human work" that isn't about productivity. Art, community service, caregiving, education. These things should be highly compensated precisely because they can't be done by a machine.
Corn
That requires a huge cultural shift. We are so used to valuing people based on their "economic output." If a machine can output more than you, does that mean you are worth less? In a purely capitalist sense, yes. In a human sense, no.
Herman
Exactly. We have to move from a "work-centric" society to a "purpose-centric" society. And that is a terrifying transition because we don't have a blueprint for it.
Corn
Let’s look at the "AI Manager" idea one more time. Daniel asked if it is realistic to think we can have a workforce built around that. I am thinking about the "manager" of an AI customer service team. Instead of managing fifty humans, they manage one giant model and a few specialized agents. Their job is to look at the edge cases, the things the AI couldn't solve, and the "disgruntled" customers who demand a human. That sounds like a very stressful, high-intensity job. You only ever deal with the hardest, most annoying problems.
Herman
That is the "Filter Problem." Humans become the filters for everything the AI can't handle. It means your entire workday is spent dealing with the five percent of cases that are absolute nightmares. You lose the "easy wins" that make a job sustainable. It leads to massive burnout. We are already seeing this in content moderation. The AI filters out the easy stuff, and the humans are left looking at the most horrific content on the internet all day.
Corn
That is a grim reality. You are basically taking the "soul-crushing" part of the job and making it the entire job.
Herman
Right. So, when people say "AI will free us from drudgery," we have to ask: what is left? If the "drudgery" was the easy, repetitive stuff that gave your brain a break, and the "new" job is just constant high-stakes crisis management, is that actually an improvement in quality of life?
Corn
I think this is why we are seeing so much pushback. It isn't just about the paycheck; it is about the "vibe" of work. The feeling that you are just a "handler" for a machine rather than a craftsman or a helper.
Herman
And I think that is where the "AI industry obligation" comes in. Companies like OpenAI and Anthropic shouldn't just be focused on making the models "smarter." They should be looking at "human-centric design." How can these tools be built to augment a human's day rather than just replacing the easy parts and leaving the human with the scraps?
Corn
Is anyone actually doing that? Or is the market pressure for "efficiency" just too strong?
Herman
There are some interesting experiments. Some companies are using AI to "pre-fill" information for a human agent, so the human can focus on the conversation rather than the data entry. That feels like augmentation. But as soon as the AI gets good enough to do the conversation too, the temptation to cut the human out entirely is almost impossible for a CEO to resist.
Corn
It always comes back to the bottom line. Daniel’s prompt really touched a nerve because we see it happening every day now. I mean, even in our own lives. I used to spend hours researching certain topics for my work, and now I can get a summary in ten seconds. I am "more productive," but I also feel like I am losing that "deep dive" experience that used to make me an expert.
Herman
We are all becoming "editors" rather than "creators." We edit the AI's code, we edit the AI's writing, we edit the AI's research. It is a different kind of mental labor. It is broader, but shallower.
Corn
So, as we look toward the rest of twenty-twenty-six, what are the specific roles you think are going to be the "canary in the coal mine" for this next wave of agentic AI?
Herman
Watch the S-D-Rs in sales, as I mentioned, but also watch the "Tier One" creative jobs. Stock photography is basically dead. Basic jingle writing for commercials is on life support. If you need a "generic" version of something creative, the AI has you covered. The "Tier One" of creativity is "I need something that looks professional and fits this mood." That is now a solved problem. The "Tier Two" is "I need something that changes the culture." That still needs a human.
Corn
But again, how do you get to "Tier Two" if you can't get a job doing "Tier One"? It is the same ladder problem.
Herman
Exactly. We might end up with a "lost generation" of creatives who never got their start because the "entry-level" work was automated. This is why I think we need to rethink education entirely. We shouldn't be teaching kids to do "Tier One" tasks. We should be starting them at "Tier Two" thinking from day one. But that is a huge ask for our current school systems.
Corn
It feels like we are talking about a total restructuring of how we live, work, and learn. It is a lot to take in. Daniel really went for the jugular with this one.
Herman
He did. And I don't think there are easy answers. But I do think that being aware of it is the first step. We can't pretend it isn't happening. We have to look at these displacements and say, "Okay, this person lost their job. What is the systemic way we support them, and how do we ensure the next generation has a path forward?"
Corn
And I think that "accountability" piece is huge. If you are a company that is laying off thousands of people because of AI, you should be expected to contribute to a transition fund. It shouldn't just be "privatize the gains and socialize the losses."
Herman
That is the phrase. "Socialize the losses." When people lose their jobs, the "loss" is felt by the community, the family, and the government that has to provide support. If the "gain" is only felt by the shareholders, the system eventually breaks.
Corn
Well, on that slightly heavy but very necessary note, I think we have covered a lot of ground. We have looked at the "Tier One" collapse, the rise of agentic and multimodal AI in twenty-twenty-six, the "ladder problem" for new workers, and the potential for a "human-premium" in the future.
Herman
It is a lot to chew on. And I think it is important for our listeners to think about their own "game plan." How are you moving "up the stack"? What are you doing that a machine can't simulate?
Corn
Exactly. And hey, if you have thoughts on this, or if you are someone who has been directly impacted by AI automation, we want to hear from you. You can get in touch through the contact form at myweirdprompts.com. We really value the perspective of our listeners on these "real-world" implications.
Herman
We really do. And if you have been enjoying the show, we would appreciate it if you could leave us a review on your podcast app or on Spotify. It helps us reach more people and keeps the conversation going.
Corn
Definitely. It makes a big difference. Alright, I think that is a wrap for episode three hundred and ninety-seven. Thanks to Daniel for sending this in and for being a great housemate, even if he does make us think about the end of the world on a Tuesday afternoon.
Herman
It is what he does best.
Corn
Thanks for listening to My Weird Prompts. You can find us on Spotify and at myweirdprompts.com. We will be back soon with another prompt.
Herman
Take care of yourselves out there. And maybe go talk to a human today, just for the sake of it.
Corn
Good advice. See you next time.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.

My Weird Prompts