#1822: Quantum in the Cloud: Hype vs. Hardware

Is QCaaS a billion-dollar breakthrough or an expensive science experiment? We explore the gap between hype and hardware.

Episode Details

Episode ID: MWP-1976
Published:
Duration: 24:10
Audio: Direct link
Pipeline: V5
TTS Engine: chatterbox-regular
Script Writing Agent: Gemini 3 Flash

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The Quantum Cloud Reality Check

The idea of adding a quantum computer to your cloud infrastructure shopping cart sounds like science fiction, but it is now a tangible reality. As of early 2026, the market for Quantum Computing as a Service (QCaaS) has consolidated into a $1.2 billion industry. However, beneath the headline revenue figure lies a complex and expensive landscape where the gap between theoretical potential and practical utility remains vast.

The Four Giants and Their Philosophies

The market is dominated by four major cloud providers, each with a distinct approach to quantum hardware and access.

Amazon Web Services (AWS) acts primarily as a broker. Through AWS Braket, users gain a unified interface to access diverse hardware architectures without AWS needing to manufacture the chips themselves. This includes trapped-ion systems from IonQ, superconducting qubits from Rigetti, and quantum annealers from D-Wave. The pricing model is granular and potentially expensive: a base task fee is charged, followed by a cost "per shot." A shot represents a single run of a quantum algorithm. Because quantum computers are probabilistic, meaningful results require running thousands of shots to build a probability distribution. This can quickly escalate costs, turning a seemingly cheap thirty-cent task into a three-hundred-dollar bill for seconds of compute.
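The per-shot arithmetic above can be sketched in a few lines. This is a hypothetical cost estimator, not a real Braket API call: the $0.30 task fee and the per-shot price range come from this episode, and actual AWS Braket prices vary by device and change over time.

```python
# Hypothetical estimator for a per-shot QCaaS pricing model.
# Figures are the ones quoted in the episode, not live AWS prices.

def estimate_task_cost(shots: int, per_shot: float, task_fee: float = 0.30) -> float:
    """Total cost of one quantum task: a flat task fee plus a charge per shot."""
    return task_fee + shots * per_shot

# A cheap device at $0.0009/shot vs. a premium one at $0.03/shot,
# each run for the 10,000 shots a meaningful distribution can require:
cheap = estimate_task_cost(10_000, 0.0009)   # 0.30 + 9.00  = 9.30
premium = estimate_task_cost(10_000, 0.03)   # 0.30 + 300.00 = 300.30
```

The second case is the escalation described above: a thirty-cent task that becomes a three-hundred-dollar bill once the shot count is realistic.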

In contrast, IBM operates with vertical integration. They design and build their own chips, dilution refrigerators, and software stacks. While they offer a free tier for researchers to build mindshare, their premium model involves subscription access to systems ranging from 16-qubit machines to the 127-qubit Eagle processor. This requires significant enterprise commitments, positioning IBM as a long-term partner rather than a utility provider.

Microsoft’s Azure Quantum takes a software-first, hardware-agnostic approach. Lacking a dominant in-house superconducting chip, Microsoft partners with diverse hardware providers like Quantinuum and IonQ, both of which build trapped-ion systems. Their strategy is deep integration; they aim to make quantum processors feel like just another co-processor alongside GPUs and NPUs within the existing Azure ecosystem. This "path of least resistance" lowers the barrier for current Azure customers to experiment with quantum workflows.

Google Cloud, despite its early claims of "quantum supremacy" in 2019, remains more guarded and selective. Access to their Sycamore processor is less self-service than Braket, leaning toward high-touch enterprise contracts with minimum commitments. They are targeting research partnerships with large logistics firms or national labs rather than individual hobbyists.

The Pilot Phase Bottleneck

Despite the revenue, the industry faces a significant adoption hurdle: approximately 78% of enterprise users are stuck in the pilot phase. Many have been experimenting for 18 months or more without moving a single production workload to a quantum processor.

This stagnation stems from the "Noisy Intermediate-Scale Quantum" (NISQ) era. Current systems are error-prone, and the overhead required for error correction is immense. Estimates suggest that creating a single reliable "logical" qubit may require up to 1,000 physical qubits. With current cloud machines topping out in the hundreds of qubits, reliable, large-scale computation is still years away.
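The overhead ratio makes the gap concrete. Assuming the roughly 1,000-physical-per-logical figure cited above (real ratios depend on the error-correcting code and physical error rates), a back-of-the-envelope calculation shows why today's machines yield zero error-corrected logical qubits:

```python
# Back-of-the-envelope logical-qubit count under the ~1,000:1
# error-correction overhead cited above. The true ratio depends on
# the code used and the physical error rate; this is illustrative.

def logical_qubits(physical_qubits: int, overhead: int = 1_000) -> int:
    """Logical qubits assemblable from a given physical-qubit count."""
    return physical_qubits // overhead

eagle = logical_qubits(127)       # IBM's 127-qubit Eagle -> 0 logical qubits
future = logical_qubits(100_000)  # a hypothetical 100k-qubit machine -> 100
```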

Consequently, for many practical enterprise problems, a high-end cluster of classical GPUs—like NVIDIA’s H100—can simulate quantum circuits faster and cheaper than actual quantum hardware. This creates a "dirty secret" in the industry: why run a calculation on a noisy quantum machine when a classical simulation is more efficient?
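Classical simulation wins today but loses eventually, and the crossover is pure arithmetic: a full state vector of an n-qubit system holds 2^n complex amplitudes, so memory doubles with every added qubit. A quick sketch of that growth:

```python
# Memory needed to hold a full n-qubit state vector, assuming
# 16 bytes per amplitude (a complex128 number).

def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Bytes required to store all 2**n complex amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

GIB = 1024 ** 3
PIB = 1024 ** 5
workstation = statevector_bytes(30) / GIB  # 30 qubits -> 16 GiB (a big workstation)
cluster = statevector_bytes(50) / PIB      # 50 qubits -> 16 PiB (petabyte territory)
```

This is why GPU clusters comfortably out-simulate today's hundred-qubit noisy hardware, and why that advantage evaporates somewhere past fifty clean qubits.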

The Economic and Strategic Driver

If the hardware isn't production-ready, why are companies spending thousands of dollars monthly on subscriptions? The answer often lies in talent retention and future-proofing. In sectors like pharmaceuticals, companies are investing in QCaaS to build algorithmic muscle for future molecular simulations. They are paying a premium to keep researchers engaged and ensure they don't migrate to competitors. It is a massive insurance policy against being disrupted when the hardware eventually matures.

This "Quantum Winter" concern—fear that funding will dry up without a killer app—is pushing providers toward a "Hybrid Quantum" model. This approach offloads only the most computationally intensive parts of a problem to the quantum processor while keeping the majority of the workload on classical systems. For instance, in drug discovery, a classical AI model might narrow down thousands of potential molecules to a handful, which are then sent to a quantum processor for high-fidelity electron bond simulation. This minimizes the number of expensive "shots" required.
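The drug-discovery pipeline above can be sketched as a minimal hybrid pattern: a cheap classical filter narrows the candidate pool, and only the survivors reach the expensive, per-shot-billed quantum stage. Everything here is a stand-in — the scoring function and the `quantum_simulate` placeholder are illustrative, not real chemistry or a real QPU call:

```python
# Minimal sketch of the hybrid quantum pattern: classical screening
# first, quantum simulation only for the shortlist. Both stages are
# placeholders for illustration.

def classical_screen(candidates, score, keep=10):
    """Rank candidates with a cheap classical heuristic; keep the best few."""
    return sorted(candidates, key=score, reverse=True)[:keep]

def quantum_simulate(candidate):
    """Placeholder for the costly high-fidelity quantum step."""
    return {"molecule": candidate, "binding_energy": None}  # filled in by the QPU

molecules = [f"mol-{i}" for i in range(10_000)]
shortlist = classical_screen(molecules, score=lambda m: -len(m))
results = [quantum_simulate(m) for m in shortlist]  # 10 QPU jobs instead of 10,000
```

The design point is the shot budget: the quantum stage runs a thousandth as often, which is the whole economic argument for hybrid.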

The Distinction in Hardware

A critical distinction exists between gate-based quantum computing (used by IBM, Google, and Rigetti) and quantum annealing (pioneered by D-Wave). Gate-based systems are general-purpose but error-prone, capable in principle of running complex algorithms such as Shor’s factoring algorithm. Annealers are specialized tools designed for optimization problems, such as finding the most efficient route for a fleet of delivery trucks. While annealers offer a clearer path to near-term value for specific logistics tasks, they lack the broad applicability of gate-based systems.
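The optimization problems annealers target are usually expressed as QUBOs (Quadratic Unconstrained Binary Optimization): assign 0 or 1 to each variable to minimize a quadratic "energy." An annealer searches that energy landscape in hardware; for a handful of variables you can simply enumerate it classically, which makes the problem class easy to see. This toy solver and its example matrix are illustrative, not D-Wave's API:

```python
# Toy brute-force QUBO solver -- the problem class quantum annealers
# target. Enumeration only works for a handful of variables; an
# annealer searches the same energy landscape in hardware.
from itertools import product

def qubo_energy(x, Q):
    """Energy of bitstring x under QUBO matrix Q: sum of Q[i][j]*x[i]*x[j]."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_minimum(Q):
    """Return the bitstring with the lowest energy (the global minimum)."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))

# Example: turning on either variable pays off (negative diagonal),
# but turning on both incurs a penalty (positive off-diagonal term).
Q = [[-1, 2],
     [0, -1]]
best = brute_force_minimum(Q)  # one of (0, 1) or (1, 0), with energy -1
```

Routing a delivery fleet is the same mathematics with many more variables, which is exactly where enumeration fails and annealing hardware makes its pitch.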

The Bottom Line

The QCaaS market is currently a "picks and shovels" business. Cloud providers are subsidizing the birth of an industry, absorbing massive R&D costs for dilution refrigeration and chip fabrication. They are betting that owning the first useful quantum cloud will define the next fifty years of high-performance computing. For now, the value proposition is less about immediate ROI and more about strategic positioning. The industry is in a vacuum tube era—bulky, failure-prone, and expensive—but it is proving the concept. The transition from physical to logical, error-corrected qubits remains the holy grail, and the race to get there is just getting started.

Downloads

Episode Audio: Download the full episode as an MP3 file
Transcript (TXT): Plain text transcript file
Transcript (PDF): Formatted PDF with styling

#1822: Quantum in the Cloud: Hype vs. Hardware

Corn
You know, Herman, I was looking at some cloud infrastructure reports this morning, and it feels like we’ve reached this bizarre milestone where you can literally add a quantum computer to your shopping cart right next to a standard Linux instance. Today's prompt from Daniel is about exactly that—the state of Quantum Computing as a Service, or QCaaS, in early twenty twenty-six. It’s one of those topics that sounds like it belongs in a sci-fi novel, but if you look at the consolidated revenue for Q-one, the market actually hit one point two billion dollars.
Herman
It is a fascinating inflection point, Corn. Herman Poppleberry here, by the way, for those keeping track. That one point two billion dollar figure is massive, but it carries a very specific caveat. While the money is flowing in, about seventy-eight percent of enterprise users are still stuck in the pilot phase. They’ve been "experimenting" for eighteen months or more without moving a single production workload onto a quantum processor. It is the ultimate "look but don't touch" technology of our era.
Corn
It’s the expensive hobby of the Fortune five hundred. By the way, fun fact for the listeners—Google Gemini three Flash is actually writing our script today, which is fitting since we’re talking about the bleeding edge of compute. But back to the quantum side of things—if I’m a CTO and I see "Quantum" on my AWS bill, what am I actually paying for? Is it a gimmick, or is there a real machine spinning up somewhere in a basement?
Herman
Oh, the machines are very real, but they aren’t in your local data center. When you use something like AWS Braket, which is Amazon’s play here, you’re basically using a broker service. Amazon doesn’t necessarily want to build every type of quantum hardware themselves yet. Instead, Braket gives you a unified interface to talk to IonQ’s trapped-ion systems, Rigetti’s superconducting qubits, and even D-Wave’s quantum annealers. The pricing is where it gets gritty. You’re looking at a base task fee of about thirty cents, but then they charge you "per shot."
Corn
Per shot. That sounds like a bar tab for physicists. What exactly constitutes a "shot" in quantum terms?
Herman
Think of a shot as a single run of your quantum algorithm. Because quantum computers are inherently probabilistic, you can’t just run the calculation once and trust the result. You have to run it hundreds or thousands of times to get a probability distribution that actually means something. On Braket, those shots can cost anywhere from zero point zero zero zero nine dollars to three cents each, depending on the hardware. If you’re running a complex simulation that requires ten thousand shots, that thirty-cent task suddenly becomes a three hundred dollar bill for a few seconds of compute.
Corn
And that’s for a result that might just tell you "maybe." It seems like a steep price for uncertainty. But then you have IBM, who has always felt like the elder statesman of this space. They’ve been putting their hardware on the cloud since twenty sixteen. Are they still leading the pack, or has the "as-a-service" model flattened the playing field?
Herman
IBM is playing a very different game. They are vertically integrated. They build the chips, the dilution refrigerators, the software stack—the whole thing. They offer a free tier for researchers, which is a brilliant move for mindshare, but their premium access is where the business lives. For about a thousand dollars a month, you get access to their sixteen-qubit systems. If you want the big guns, like the one hundred twenty-seven-qubit Eagle processor, you usually have to be part of the IBM Quantum Network, which involves much larger enterprise commitments.
Corn
I’ve always found the IBM approach interesting because they seem more focused on building a "Quantum Economy" than just selling API calls. But then you have Microsoft with Azure Quantum. They don't have their own high-qubit superconducting chip in the same way IBM does, so what's their angle?
Herman
Microsoft is betting on diversity and the software layer. They’ve partnered with companies like Quantinuum and IonQ. Quantinuum uses trapped-ion systems, which is a totally different physical architecture than the superconducting chips IBM uses. Azure’s big sell is integration. If you’re already an Azure shop, your quantum jobs sit right alongside your classical C-sharp or Python code in the same resource groups. They are trying to make the quantum processor feel like just another co-processor, like a GPU or an NPU.
Corn
It’s the "path of least resistance" strategy. If I don't have to leave my existing cloud environment, I'm more likely to tick the "Quantum" box. But we haven't mentioned Google Cloud yet. Given that Google claimed "quantum supremacy" back in twenty nineteen, you’d think they’d be the dominant force in QCaaS. Why does it feel like they’re more guarded?
Herman
Google is much more selective. While they do offer access to their Sycamore processor through Google Cloud, it’s not as "self-service" as AWS Braket. They lean heavily into enterprise contracts with minimum commitments. They aren't really looking for the hobbyist who wants to spend fifty bucks; they want the research partnership with a global logistics firm or a national lab. It’s a high-touch, high-cost model.
Corn
So we have these four giants, all with different philosophies. But the number you mentioned earlier—seventy-eight percent of users still in the pilot phase—that’s the real story, isn't it? It reminds me of the vector database craze we saw a while back. Everyone rushes in because the tech is cool, and then they realize they don't actually have a problem that requires that specific solution. Are we seeing a "Quantum Hangover" starting to set in?
Herman
I think that’s a very fair comparison, Corn. In fact, we’ve seen this pattern with almost every major compute shift. The difference with quantum is that the barrier to "production" isn't just software maturity—it’s physics. Most of the QCaaS systems available today are what we call NISQ devices—Noisy Intermediate-Scale Quantum. They are full of errors. For most practical enterprise problems, by the time you account for error correction and the overhead of moving data back and forth from classical to quantum, a high-end cluster of H-one hundred GPUs will still beat the quantum machine every single time.
Corn
That’s the dirty secret of the industry, isn't it? If I can simulate your quantum circuit on a standard NVIDIA chip faster and cheaper than you can run it on a real IonQ machine, why would I ever move out of the pilot phase?
Herman
You wouldn't, unless you were trying to "future-proof" your IP. And that’s exactly what the twenty-five percent of QCaaS users in the pharmaceutical industry are doing. They know that eventually, molecular simulation will require quantum gates. If they don't start building the algorithmic muscle now, they’ll be ten years behind when the hardware finally catches up. But for a bank trying to optimize a portfolio? The marginal gain just isn't there yet.
Corn
Let's talk about that financial sector. Whenever I read about quantum, it’s always "it will break encryption" or "it will solve traveling salesman problems for banks." But if I’m JPMorgan Chase—and I know they’ve done a lot of work with IBM—are they actually seeing any ROI, or is this just a massive insurance policy against being disrupted?
Herman
It’s mostly the insurance policy. JPMorgan has published some incredible research on using quantum algorithms for option pricing and risk analysis. But when you look at the benchmarks, the "speedup" they’re seeing is often theoretical or limited to very small, toy-sized datasets. If you try to scale that to a real-world global market feed, the decoherence of the qubits kills the calculation before it finishes. They are spending millions on these cloud subscriptions basically to keep their researchers from moving to a competitor.
Corn
So it’s a talent retention strategy disguised as a compute strategy. That’s a very expensive way to keep your physicists happy. I mean, if these companies are spending five thousand to fifty thousand dollars a month on subscriptions, and they aren't getting production-ready results, at what point does the CFO step in and say, "Hey, maybe we should just buy more GPUs instead?"
Herman
That conversation is happening right now in boardrooms across the world. It’s the "Quantum Winter" concern. We’ve seen billions of dollars in venture capital and cloud credits poured into this. If we don’t see a "killer app"—a production-ready use case that isn't just a research paper—the funding might dry up. The providers know this, which is why they’re pivoting their marketing. You’ll notice that AWS and Azure are starting to talk more about "Hybrid Quantum."
Corn
Hybrid Quantum. That sounds like the "Hybrid Cloud" marketing from ten years ago. What does that actually look like in practice?
Herman
It’s about offloading only the most difficult parts of a problem to the quantum processor while keeping ninety-nine percent of the workload on classical CPUs and GPUs. For example, in drug discovery, you might use a classical AI model to narrow down ten thousand potential molecules to ten. Then, and only then, do you send those ten candidates to a quantum processor for a high-fidelity simulation of the electron bonds. It’s a way to justify the cost by minimizing the "shots" you have to pay for.
Corn
That actually makes a lot of sense. It’s the "Quantum Co-processor" model. But even then, aren't the error rates still a massive hurdle? I was reading about the overhead for error correction, and some estimates say you need a thousand physical qubits just to create one "logical" qubit that actually works reliably. If the biggest machines on the cloud right now are in the hundreds of qubits, we aren't even at the starting line for reliable computation, are we?
Herman
We are in the "vacuum tube" era of quantum. If you remember the early ENIAC computers, they were huge, prone to failure, and had very little memory, but they proved the concept. The jump from physical qubits to logical, error-corrected qubits is the "Holy Grail." Some companies, like Microsoft and Quantinuum, claimed a major breakthrough in late twenty twenty-four regarding logical qubit creation, but scaling that to a system that can actually run a useful Shor’s algorithm or a complex protein fold is still years away.
Corn
Which brings us back to the "as-a-service" aspect. In a way, the cloud providers are the only ones definitely making money here. They’re selling the picks and shovels for a gold mine that might not actually have any gold in it yet. It’s a brilliant business model if you can get people to keep paying the subscription.
Herman
It is, but they are also shouldering the massive R-and-D costs. Building a dilution refrigerator that can keep a chip at ten milli-Kelvin—which is colder than outer space—is not cheap. AWS and Google are basically subsidizing the birth of an industry. They are betting that whoever owns the first "useful" quantum cloud will effectively own the next fifty years of high-performance computing.
Corn
It’s a land grab in a dimension we can’t even see. But let’s look at the smaller players. You mentioned D-Wave earlier. They are famous for "Quantum Annealing," which is different from the "Gate-Based" quantum computing that IBM and Google do. For the non-physicists among us—which is most of us—why does that distinction matter for a business?
Herman
It’s the difference between a specialized tool and a general-purpose computer. An annealer, like D-Wave's, is designed specifically for optimization problems. Think of it like a landscape of mountains and valleys; the annealer helps you find the lowest point, the "global minimum," very quickly. It’s great for logistics, like "How do I route a thousand delivery trucks?" Gate-based systems, like IBM’s, are Turing-complete. They can, in theory, perform any calculation. The catch is that gate-based systems are much harder to build and scale. D-Wave has thousands of qubits already, but they can only do that one specific type of math.
Corn
So if I’m a logistics company, D-Wave on AWS Braket might actually be useful to me today? Or is it still just a "maybe"?
Herman
It’s the closest thing we have to "useful" today. There are companies like Volkswagen and SavantX that have used D-Wave for real-world traffic flow and port management tests. But even there, the "quantum advantage"—the point where the quantum machine beats the best classical algorithm—is often measured in seconds or minutes, not orders of magnitude. And when you factor in the thirty-cent task fee and the cost of the developers who know how to write "Quadratic Unconstrained Binary Optimization" code, the ROI is usually negative.
Corn
I love that acronym, QUBO. It sounds like a character from a space opera. But this points to a bigger issue: the talent gap. If I want to use QCaaS, I can’t just hire a standard Python dev. I need someone who understands linear algebra, complex numbers, and the nuances of quantum gates. Is there any effort by the cloud providers to abstract that away?
Herman
That’s the "Software" part of QCaaS. Every provider has their own stack. IBM has Qiskit, Google has Cirq, and Amazon has the Braket S-D-K. They are all trying to make quantum programming look as much like standard Python as possible. You define a circuit, you add some gates, and you hit "run." But the abstraction is leaky. If you don't understand the underlying physics, you’ll write a circuit that decoheres instantly, and you’ll just get noise back. You’re essentially paying thirty dollars for a random number generator.
Corn
It’s like trying to write a high-end video game engine without knowing how memory management works. You can do it, but it’s going to run like garbage. It seems like the real "Service" in Quantum Computing as a Service right now isn't the hardware—it's the consulting.
Herman
You hit the nail on the head. Most of these QCaaS contracts come with a "Professional Services" rider. You aren't just buying time on a Rigetti chip; you’re buying forty hours of time with a Ph.D. from Amazon who will help you translate your business problem into a quantum circuit. That’s where the real "revenue" in that one point two billion dollar market is coming from. It’s high-end consulting disguised as cloud compute.
Corn
That explains why the "IT and Telecom" and "Academia" sectors make up such a huge chunk of the pie. They have the people who can actually speak the language. But what about the "Government and Defense" sector? You’d think they’d be the ones throwing the most money at this, especially with the geopolitical implications of quantum decryption.
Herman
Oh, they are, but that’s often happening on "Private Cloud" versions of these services. IBM, for example, will build a dedicated quantum data center for a government if the check is big enough. They just opened one in Germany and another in Japan. The public QCaaS we’re talking about is the tip of the iceberg. The real heavy lifting is happening in classified or highly restricted environments.
Corn
Which makes sense. You don't want your code that breaks RSA encryption running on a multi-tenant public cloud where a sloth and a donkey might stumble across it. But let's bring it back to the enterprise reality. If I'm a mid-sized company—say, a regional bank or a manufacturing firm—is there any reason for me to even look at the AWS Braket console right now?
Herman
Honestly? Probably not for production. But there is a very strong argument for "Quantum Readiness." If you wait until quantum advantage is a proven reality, the talent will already be locked up by the Googles and the Goldman Sachs of the world. Using the free tiers on IBM or the low-cost simulators on Braket is a great way to let your best developers play with the concepts. It’s like learning to code for the web in nineteen ninety-two. You won't make money today, but you’ll understand the architecture of the future.
Corn
I like that. It’s "intellectual R-and-D." But let's talk about the simulators for a second. You can run "quantum" code on a standard classical computer using a simulator. At what point does the simulator stop being enough? Because if I can simulate thirty qubits on my laptop, I don't need to pay Amazon thirty cents a task to run it on a "real" machine.
Herman
The limit is usually around forty to fifty qubits. Because the complexity of simulating a quantum system grows exponentially with each qubit, you quickly run out of RAM. To simulate a fifty-qubit system perfectly, you need petabytes of memory. That’s where the cloud providers step in again. They offer "High-Performance Simulators" that run on massive clusters of classical RAM and GPUs. AWS has SV-one, which is their state-vector simulator. It’s often more useful for developers than the actual quantum hardware because it’s "noise-free." You can see what your algorithm is supposed to do before you try to run it on a noisy physical chip.
Corn
So the "Quantum Cloud" is actually mostly a "Classical Cloud pretending to be a Quantum Cloud" for ninety percent of users?
Herman
Precisely. And that’s a good thing! It allows for rapid iteration. But there is a psychological shift that happens when you finally send that job to a real dilution refrigerator. You start thinking about things like "gate fidelity" and "T-one relaxation times." It forces a level of discipline that classical coding just doesn't require.
Corn
It sounds like the ultimate "measure twice, cut once" environment. But let's look at the pricing again. You mentioned the "per shot" cost. Does that price ever go down? Usually, in cloud compute, as scale goes up, the price per unit drops. Are we seeing a "Moore’s Law" for quantum pricing?
Herman
Not yet. In fact, if anything, the costs are staying flat or even slightly increasing for the high-end hardware because the demand from research institutions is so high. The providers have no incentive to lower prices when the queue to use a trapped-ion system is three days long. We are in a supply-constrained market. There are only a handful of these machines on earth that are stable enough to be "cloud-ready." Until someone figures out how to mass-produce reliable qubits, QCaaS will remain a premium luxury.
Corn
It’s the "Birkin bag" of compute. Hard to get, ridiculously expensive, and mostly used for showing off at parties. But there is a real-world application we haven't touched on: Cryptography. Not breaking it, but using quantum systems to create it. Quantum Key Distribution, or Q-K-D. Is that part of the QCaaS offering?
Herman
It’s starting to be. Azure Quantum has been very vocal about "Quantum-Safe" networking. The idea is that even if a quantum computer can't break your encryption today, it might be able to in five years. Someone could steal your encrypted data today, store it, and wait for the hardware to catch up—"Harvest Now, Decrypt Later." QCaaS providers are starting to offer post-quantum cryptographic algorithms as a service to prevent that. It’s an upsell on your standard security package.
Corn
That’s a brilliant fear-based marketing play. "Buy our quantum-safe cloud today so you don't get hacked in twenty thirty!" It’s the ultimate long-term recurring revenue. But what about the "Vector DB Hangover" we mentioned earlier? We saw a lot of companies over-invest in vector databases during the early AI hype, only to realize that for many use cases, a standard SQL database with some clever indexing was enough. Do you think we’ll see a "Quantum Pivot" where companies realize they just needed better classical optimization?
Herman
We are already seeing it. There is a whole field called "Quantum-Inspired Algorithms." These are classical algorithms that use the mathematical tricks discovered by quantum researchers but run on standard CPUs. In many cases, a company will start a "Quantum Project" on Azure or AWS, hire some smart people, and those people will end up writing a "Quantum-Inspired" classical algorithm that solves the problem for one percent of the cost. The company still claims a "win," even if they never actually used a qubit in production.
Corn
That is peak corporate maneuver. "We used quantum-inspired logic to save five percent on our shipping routes." It sounds great in an annual report, and you don't have to explain to the shareholders why you spent fifty thousand dollars on a machine that kept crashing because someone walked past the refrigerator too fast.
Herman
But let's be fair—the research that leads to those inspired algorithms wouldn't happen without the QCaaS platforms. They provide the playground. Without IBM Quantum Experience, an entire generation of researchers wouldn't have had a place to test these theories. So even if the "Service" part of QCaaS is mostly research-focused, the spillover effect into classical computing is real and valuable.
Corn
So what’s the takeaway for the person listening to this who isn't a physicist but is responsible for a tech budget? Is it time to open an AWS Braket account, or should they wait for the "Quantum Winter" to thaw?
Herman
My advice would be: treat it like an educational expense, not an operational one. If you have a thousand dollars a month to spare, letting a senior dev spend twenty hours a month on IBM Quantum is a fantastic investment in "Optionality." You’re buying the ability to understand the world five years from now. But if you think you’re going to solve your supply chain issues by hitting a "Quantum" button in twenty twenty-six, you’re going to be very disappointed and very broke.
Corn
It’s a "Stay Curious, Stay Cautious" situation. I think the most interesting thing will be to see if any of these providers "blink" first. If Google suddenly drops their prices or if Amazon buys a hardware company outright, that will be the signal that the "pilot phase" is finally ending.
Herman
Or if we see a "Quantum-Native" startup actually disrupt a legacy industry. We’re seeing some interesting stuff in battery chemistry—new electrolyte designs that were simulated using quantum methods. If a new solid-state battery hits the market and it was designed on Azure Quantum, that’s the "Aha!" moment that changes the narrative from "expensive research" to "industrial necessity."
Corn
"Powered by Quantum" stickers on our phones. I can see it now. Well, Herman, I think we’ve effectively demystified the cloud-based ghost in the machine. It’s real, it’s expensive, and for most of us, it’s still very much a work in progress.
Herman
A work in progress with a very high-end price tag. But that’s the price of the frontier, Corn.
Corn
Well said. I think that covers the landscape for now. Thanks to Daniel for the prompt—it’s always good to dig into the high-rent district of the cloud. Thanks as always to our producer, Hilbert Flumingtop, for keeping the wheels on this thing. And a big thanks to Modal for providing the GPU credits that power the generation of this show—even if those GPUs aren't quite "quantum," they're doing some heavy lifting for us today.
Herman
They certainly are. This has been My Weird Prompts. If you want to keep up with the show and see where the quantum rabbit hole goes next, find us at myweirdprompts dot com. You can subscribe to the R-S-S feed there or find all the links to follow us on your favorite platform.
Corn
And if you're enjoying the show, a quick review on your podcast app really does help us out. It helps the algorithms find us, which is ironically something a quantum computer might be very good at one day.
Herman
One day, Corn. One day.
Corn
Until then, keep your qubits coherent. Goodbye!
Herman
Goodbye!

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.