#1560: The Shadow AI Crisis: Professionals in the AI Closet

Why are 69% of lawyers using AI in secret? Explore the "transparency paradox" and the shift toward agentic systems in law and medicine.

Episode Details

Duration: 20:47
Pipeline: V5
TTS Engine: chatterbox-regular
AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The Rise of Shadow AI in Professional Services

As of March 2026, a massive adoption gap has emerged in the professional world. While nearly 70% of legal professionals and over 80% of physicians are utilizing generative AI in their daily workflows, institutional policies have failed to keep pace. This has created a "closeted use" culture where experts rely on AI to manage workloads but hide those tools from clients, patients, and even their own firms. This secrecy is driven by a "transparency paradox": the public demands the accuracy and speed that AI provides, yet often views the use of such tools as a sign of diminished human expertise.

From Stochastic Parrots to Agentic Systems

The primary criticism against professional AI use—the idea that these models are merely "stochastic parrots" guessing the next word—is increasingly becoming a relic of the past. The technology has shifted toward agentic systems powered by Retrieval-Augmented Generation (RAG). Unlike early chatbots that were prone to "hallucinations" or making up facts, modern professional tools are grounded in closed, verified databases like Westlaw or LexisNexis.
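The grounding idea behind RAG can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation: the corpus, citations, and keyword-overlap retriever below are all hypothetical stand-ins for a verified store like Westlaw, and a production system would use a real search index and an LLM call.

```python
# Sketch of the RAG pattern: the model may only cite passages retrieved
# from a closed, verified store, never free-form recall.
from dataclasses import dataclass

@dataclass
class Document:
    citation: str  # citation string in the verified database (hypothetical)
    text: str

# Hypothetical closed corpus standing in for a service like Westlaw.
VERIFIED_DB = [
    Document("Mata v. Avianca (S.D.N.Y. 2023)",
             "Sanctions imposed for filing fabricated AI-generated citations"),
    Document("Hypothetical v. Example (illustrative only)",
             "Contract clause held unenforceable for regulatory noncompliance"),
]

def retrieve(query: str, db: list[Document], k: int = 1) -> list[Document]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(db,
                    key=lambda d: len(q & set(d.text.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(query: str, db: list[Document]) -> str:
    """Build a prompt that restricts the model to retrieved passages."""
    hits = retrieve(query, db)
    sources = "\n".join(f"[{d.citation}] {d.text}" for d in hits)
    return (f"Answer using ONLY the sources below, and cite them.\n"
            f"Sources:\n{sources}\nQuestion: {query}")
```

The key design point is the last function: the generation step never sees anything outside the retrieved, verified passages, which is what "crushes" the hallucination risk relative to open-ended generation.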

These systems no longer just generate text; they execute complex tasks. An agentic AI can review a contract for specific regulatory compliance, flag missing clauses, and draft amendments. This transition means the professional is moving from a "creator" role to a "supervisor" role, managing a highly efficient digital junior associate.
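The review-flag-draft loop described above can be sketched as a simple task executor. Everything here is illustrative: the clause checklist and its wording are invented for the example, and a real agentic system would use an LLM for clause detection rather than substring matching.

```python
# Sketch of an "agentic" compliance check: the system executes a
# multi-step task (scan, flag, draft) rather than generating free text.

# Hypothetical checklist; not real HIPAA contract language.
REQUIRED_CLAUSES = {
    "data breach notification":
        "Vendor shall notify Client of any data breach without undue delay.",
    "business associate":
        "Vendor agrees to act as a Business Associate under HIPAA.",
}

def review_contract(contract_text: str) -> dict:
    """Flag missing required clauses and draft placeholder amendments."""
    lowered = contract_text.lower()
    missing = [name for name in REQUIRED_CLAUSES if name not in lowered]
    amendments = [f"ADD CLAUSE ({name}): {REQUIRED_CLAUSES[name]}"
                  for name in missing]
    return {"compliant": not missing,
            "missing": missing,
            "amendments": amendments}
```

The professional's "supervisor" role maps onto the output: the system proposes flags and draft amendments, but a human must validate each one before it carries their signature.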

The Liability Reality and the Skadden Memo

A common misconception is that using AI absolves a professional of responsibility if an error occurs. However, recent industry mandates, such as the landmark memorandum from Skadden Arps, have clarified the legal landscape: an algorithm is a utility, not an agent. Much like a calculator, the user remains 100% liable for the output. If a lawyer signs a document containing an error generated by an AI, the responsibility rests solely with the lawyer. This shift treats AI as a sophisticated extension of human intent, which should, in theory, give professionals the confidence to be transparent about their toolsets.

Reclaiming the Human Element

In the medical field, AI is paradoxically making the practice more human. Ambient clinical documentation tools now allow doctors to record patient visits in real time, drafting medical notes automatically. This removes the "digital wall" of the computer screen, allowing physicians to maintain eye contact and focus entirely on the patient. While there are valid concerns regarding "skill erosion"—the fear that professionals will lose the ability to perform manual tasks—the trade-off is a massive increase in safety and the ability to synthesize the million-plus new medical papers published annually that no human could possibly track.

Breaking the Structural Rot

The greatest barrier to AI transparency remains the business model of professional services. The billable hour is the natural enemy of efficiency; if a tool allows a lawyer to complete three hours of work in twenty minutes, the traditional revenue model collapses. To move forward, firms must restructure how they value expertise. Moving AI use out of the shadows is not just about efficiency—it is about creating an auditable, transparent, and safer environment for the next generation of professional work.


Episode #1560: The Shadow AI Crisis: Professionals in the AI Closet

Daniel's Prompt
Daniel
Custom topic: Since the explosive growth and rise of ChatGPT and other AI tools, their usage has normalized and dramatically increased across the population. However, there is still an unusual sort of taboo to ad
Corn
You know, Herman, I was thinking about this while I was watching you organize your research papers earlier. There is this very specific kind of tension in being a professional these days. We expect our doctors and our lawyers to be these walking encyclopedias of perfect, infallible knowledge, but the second we catch them using a tool to actually achieve that perfection, we act like they are cheating. It is like finding out your favorite chef uses a timer instead of just feeling the heat with their soul. Today is March twenty-sixth, twenty twenty-six, and the prompt from Daniel is about exactly that... this strange, growing shadow A-I crisis in professional services where people are doing the work but hiding the tools.
Herman
It is a fascinating prompt, Corn, and honestly, the timing could not be more relevant. I am Herman Poppleberry, by the way, for anyone joining us for the first time. We are looking at a landscape here in late March of twenty twenty-six where the data is just staggering. We are seeing this massive adoption gap. According to the latest industry reports from just a few weeks ago, sixty-nine percent of legal professionals are using generative A-I for their work, but only thirty-four percent of law firms have actually adopted official, legal-specific tools. That leaves this huge middle ground of what we call closeted use, where people are using general tools under the radar because the institutional policy has not caught up to the individual reality.
Corn
It is the professional version of a teenager hiding a comic book inside a math textbook. I mean, think about the stakes here. If you are a lawyer and you are saving eight to ten hours a week by using these tools... which is the number Doc-Legal dot A-I is reporting right now... that is an entire workday you have reclaimed. But instead of the firm celebrating that efficiency, the lawyer feels like they have to pretend they spent those ten hours grinding through manual document review. Why are we so obsessed with the struggle? Why is the labor more important than the result?
Herman
There is a deep-seated cultural lag at play. We have been conditioned to equate professional value with billable hours and visible effort. When you remove the friction, people feel like the value has been removed too. But what is really driving the secrecy right now is not just laziness or a lack of accountability, it is the fear of being labeled a stochastic parrot. That term has become such a weaponized slur in professional circles. Critics use it to imply that the A-I is just guessing the next word without any understanding of the law or medicine. But that perspective ignores the massive technical shift we have seen over the last year toward agentic systems and highly specialized retrieval-augmented generation.
Corn
Right, because we are not just talking about someone asking a chatbot to write a closing argument anymore. We are talking about professionals who are essentially acting as supervisors for a very fast, very thorough junior associate. But here is the paradox Daniel pointed out in his prompt that I want to poke at. We demand that a doctor knows every possible drug interaction for a rare condition, yet if that doctor uses an A-I tool to double-check that interaction, some patients feel like the doctor is less competent. It is this weird idea that human memory is the only valid form of expertise.
Herman
That is the core of the transparency paradox. We want the outcome... the safety and the accuracy... but we have this romanticized attachment to the method. The reality of twenty twenty-six is that the American Medical Association updated its policy just two weeks ago, on March thirteenth. They are now officially pushing for the term augmented intelligence instead of artificial intelligence. They want to frame it as a partner. And the reason they are doing that is because eighty-one percent of physicians are now using A-I in their practice. That is double the rate from twenty twenty-three. You cannot have four-fifths of your workforce using a tool in secret without creating a massive liability nightmare.
Corn
And that liability part is where it gets spicy. I saw that memorandum from Skadden Arps that came out yesterday, March twenty-fifth. They basically laid down the law... pun intended... saying that if a lawyer says the algorithm did it, it carries the same legal weight as saying the calculator did it. You are still one hundred percent liable for the output. It is such a cold, clear way of saying... look, use the tool, but do not think for a second it absolves you of your professional duty.
Herman
The Skadden memo is a landmark moment because it treats A-I as a utility rather than an agent. If you use a calculator and you type in five plus five and it tells you twelve because the battery is low, and you put that twelve on a tax return, the I-R-S does not care about the calculator. They care about your signature on the bottom of the page. This shift is actually what should be giving professionals the confidence to come out of the closet with their A-I use. If the liability remains with the human, then the tool is just a sophisticated extension of that human's intent.
Corn
But does that actually solve the stigma? Because if I am paying a lawyer five hundred dollars an hour and I find out they used a tool to do three hours of work in twenty minutes, I am going to want to know why I am still paying for the other two hours and forty minutes. Is that why the usage is staying in the shadow? It is not just about the quality of the work, it is about the business model of professional services being built on the idea that things should take a long time.
Herman
You are touching on the structural rot that A-I is exposing. The billable hour is the natural enemy of efficiency. When you have tools like Harvey A-I or Co-Counsel... which are these highly guardrailed, legal-specific platforms... they are grounding their responses in actual case law and verified statutes. They have virtually eliminated the hallucinations that we saw back in the Mata versus Avianca days of twenty twenty-three. But if a law firm adopts these, they have to fundamentally change how they charge clients. It is easier for a firm to have no formal policy and let their associates use A-I in secret to keep their heads above water while still billing the client for the full human-speed duration.
Corn
Let us talk about that Mata versus Avianca thing for a second, because that is the ghost that still haunts every courtroom. For those who do not remember, that was the case where a lawyer used an early version of a chatbot to write a brief, and the chatbot just made up entire cases and citations. It was a disaster. But Herman, you are saying the technology today, in twenty twenty-six, is fundamentally different. How?
Herman
It is all about R-A-G, or Retrieval-Augmented Generation. In twenty twenty-three, the models were just predicting the next likely word based on their training data. They were essentially dreaming. Today, professional tools use R-A-G to anchor the A-I in a specific, closed database of verified facts. When a lawyer uses Co-Counsel, the A-I is not allowed to just reach into the void of the internet. It is forced to look at the actual Westlaw or LexisNexis database, find the real case, and then summarize it. It is like the difference between asking a person to tell you a story from memory and asking them to read a specific page of a book out loud. The hallucination risk has been crushed by these guardrails.
Corn
So the "stochastic parrot" argument is basically a dinosaur at this point. We are moving toward agentic A-I. Can you explain that shift? Because I think that is where the "junior team member" analogy really starts to make sense.
Herman
A generative A-I just makes content. An agentic A-I executes tasks. It can plan, it can use tools, and it can reason through multi-step processes. If you tell an agentic legal A-I to "review this contract for H-I-P-A-A compliance," it does not just write a paragraph about H-I-P-A-A. It opens the document, identifies the parties, cross-references the specific regulatory requirements, flags the missing clauses, and drafts the necessary amendments. It is performing a role, not just generating text. This is why the "lazy" label is so frustrating. Managing an agentic system requires a high level of professional oversight. You have to know exactly what to ask, how to verify the steps, and how to integrate the output. It is a new kind of hard work.
Corn
It reminds me of our discussion in episode thirteen-oh-eight about the A-I attribution paradox. We are so obsessed with knowing exactly which percentage of a thought came from a human and which came from a machine. But in a professional context, that obsession is actually dangerous. If we force people to hide their tools, we lose the ability to audit those tools. We are creating a "Shadow A-I" crisis where the most important work is happening in the dark.
Herman
And that brings us to the medical side of things, which is where the stakes are literally life and death. Daniel’s prompt mentioned that a doctor using A-I is actually a sign of professional humility. I think that is such a powerful frame. We have this image of the doctor as the lone genius who knows everything. But the reality of March twenty twenty-six is that the sheer volume of medical information is now beyond human capacity. There are over a million new medical papers published every year. No human doctor, no matter how brilliant, can stay current on all of that.
Corn
So if a doctor refuses to use A-I, are they actually being arrogant? Are they saying, "My brain is better than the collective knowledge of the entire medical community synthesized by an algorithm"?
Herman
In a way, yes. And the patients are starting to feel it. While sixty-three percent of workers in a recent survey said A-I makes the workplace feel less human, seventy-five percent of physicians say it actually improves patient care. There is this beautiful irony here with tools like Suki... spelled S-U-K-I... and Ambience Healthcare. These are ambient clinical documentation tools. They sit in the room during a patient visit, they listen to the conversation, and they draft the medical notes in real-time.
Corn
Wait, so instead of the doctor staring at a computer screen and typing while the patient talks about their chest pain, the doctor can actually look the patient in the eye?
Herman
Precisely. The technology that people fear is making the workplace less human is actually the only thing that can give the doctor the time to be human again. It removes the digital wall between the provider and the patient. But despite that, we still have eighty-eight percent of physicians expressing concern over skill erosion. They worry that if they stop writing the notes and stop doing the manual search for drug interactions, they will lose their clinical judgment. It is the same fear people had about G-P-S making us unable to read maps. And the truth is, we did lose the ability to read maps, but we gained the ability to never get lost.
Corn
I would take never getting lost over being a map-reading expert any day, especially when it comes to a diagnosis. But let us talk about the "Skill Erosion Myth." Is it a real threat, or is it just a transition to a different kind of skill?
Herman
It is a transition. Look at a tool like Spellbook. It is a Word-native A-I that healthcare lawyers use to flag H-I-P-A-A or G-D-P-R risks directly within contracts. In the old days, you would have to have a human expert with twenty years of experience spot a tiny phrasing error that creates a regulatory risk. Now, the A-I flags it instantly. Does the lawyer lose the skill of spotting it? Maybe. But the client gets a contract that is ten times safer. We are moving from a world where we value the process of finding the mistake to a world where we value the absence of the mistake. The new skill is not "spotting the error," it is "validating the A-I's catch" and understanding the strategic implications of that risk.
Corn
It feels like we are in this awkward teenage phase of technology. We have the capability, but we do not have the social grace to talk about it openly. And the government is starting to step in, which always makes things more complicated. You mentioned those new disclosure bills in Washington and Virginia... H-B eleven-seventy and H-B five-eighty. What do those actually require? Because if I am a closeted A-I user and now the law says I have to tell my clients, that closet door is getting kicked wide open.
Herman
Those bills are a direct response to the transparency coalition's lobbying. They basically require professionals to disclose when content is A-I-generated or significantly modified by A-I. It is an attempt to bring the shadow A-I out into the light. But the problem is that the definition of "significantly modified" is incredibly blurry. If I use A-I to brainstorm an outline but I write every word myself, do I have to disclose that? If I use it to proofread for grammar, is that a disclosure event? The legislation is trying to solve a trust problem, but it might just end up creating a mountain of red tape that makes professionals even more hesitant to be honest about their workflows.
Corn
It is the "Curse of Competence" again, Herman. We talked about this in episode twelve-forty-nine. When you are so good at something that it looks easy, people start to devalue it. If a lawyer uses A-I to draft a complex merger agreement in an hour, and it is perfect, the client might feel ripped off because they did not see the "struggle." So the lawyer hides the tool to maintain the illusion of the grueling labor. We have to break that cycle. We have to start valuing the expertise it takes to guide the A-I, rather than the manual labor the A-I replaces.
Herman
That requires a massive shift in institutional responsibility. Right now, only eighteen percent of professional organizations are even tracking the return on investment for these tools. They are letting their employees navigate this ethical minefield alone. Firms need to move from "banning" to "governance." Instead of saying "Do not use A-I," they should be saying, "Here is the professional-grade, secure, grounded version of the tool. Use this one, and here is how we want you to document it."
Corn
So, if I am a junior associate or a resident doctor listening to this, and I am currently a "closeted" user, how do I advocate for this? How do I move from shadow A-I to sanctioned A-I?
Herman
You have to frame it in terms of risk and liability. If you are using a non-sanctioned tool on your personal phone to check a diagnosis or draft a brief, you are creating a massive data privacy risk. You are probably leaking patient or client data into a general model. The argument to the higher-ups should be: "I am using these tools because they make me more accurate and efficient, but I want to use a version that is secure and firm-approved so we can protect our data and our clients." You make it about professional duty, not just personal productivity.
Corn
I love that. It turns the conversation from "I am being lazy" to "I am being responsible." And that leads us to the future of professional identity. Herman, do you think we will ever reach a point where failing to use A-I is considered malpractice?
Herman
We are already seeing the first ripples of that. In legal circles, there are serious discussions about whether the "duty of competence" now includes technological competence. If your opponent uses A-I to find a "needle-in-a-haystack" piece of evidence that wins the case, and you missed it because you were manually reading boxes of paper, did you provide effective counsel? Eventually, the standard of care in medicine and law will be defined by what a reasonably competent professional using the best available tools would do. If those tools include A-I that reduces diagnostic errors by forty percent, then choosing not to use them is a choice to be less effective.
Corn
It is a total rebranding of what it means to be an expert. In the twenty-first century, an expert is no longer a person who knows all the answers. An expert is a person who knows how to ask the right questions and how to verify the answers they receive. It is a shift from being a library to being a librarian. And that is a hard transition for people whose entire identity is built on being the library.
Herman
It is a messy transition, but a necessary one. The "Shadow A-I" crisis is not that people are using A-I... it is that they are using it without a safety net. The American Medical Association is trying to lead the way by emphasizing "Augmented Intelligence," framing it as a safeguard against human error rather than a replacement for human care. If we can get there, then the doctor using A-I is not a sign of weakness, but a sign of the highest professional standards.
Corn
I think about Daniel's son, Ezra, growing up in this world. By the time he is looking at careers in the twenty-forties, the idea of a closeted A-I user will probably seem as absurd as a closeted spell-check user seems to us now. We will look back at twenty twenty-six and wonder why we were so conflicted about using tools that clearly made us better at our jobs.
Herman
The economic pressure will eventually override the cultural stigma. When firms realize they are losing out on eight to ten hours of productivity per person per week by not having an official policy, they will be forced to adapt. But I hope we find a way to make it about more than just the bottom line. I hope we make it about that professional humility we talked about. The idea that being a great doctor or a great lawyer is about being a human who cares enough to use every tool at their disposal to get it right.
Corn
That is the most hopeful version of this story. A-I as a safeguard, not a replacement. If you are a doctor or a lawyer listening to this and you have been using A-I in the shadows, maybe it is time to start that conversation at your firm or hospital. The more we talk about it, the less power the taboo has.
Herman
The transparency paradox only exists as long as we stay silent about the tools we are using to achieve the results everyone wants. The individuals have already moved on. Eighty-one percent of doctors is a massive number. You cannot keep that many people in the shadow for long. Eventually, the light is going to break through.
Corn
Well, I for one am glad I have you to help me understand all these memos, Herman. Even if you do insist on using your full name in the intro like a fancy lawyer.
Herman
Habit of the trade, Corn. Habit of the trade.
Corn
This has been a really deep one. I think we have given people a lot to chew on regarding the future of professional identity. Before we wrap up, I want to give a huge thanks to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes. And a big thanks to Modal for providing the G-P-U credits that power the generation of this show. We literally could not do this without the very technology we spend all our time discussing.
Herman
It is a meta-existence, Corn. We are living the prompt.
Corn
We really are. If you are enjoying the show and want to make sure you never miss an episode, search for My Weird Prompts on Telegram to get notified the second a new episode drops. We are also on Spotify and Apple Podcasts, and you can find our full archive of over fifteen hundred episodes at myweirdprompts dot com.
Herman
If you have a prompt of your own, send it over to show at myweirdprompts dot com. We love digging into these complex intersections of technology and culture.
Corn
This has been My Weird Prompts. We will see you next time, hopefully in the light, not the shadows.
Herman
Goodbye, everyone.
Corn
See ya.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.