#679: The Sound of Secrets: Side-Channel Attacks in AI Clusters

Is your hardware whispering your secrets? Discover how side-channel attacks turn physical signals into data leaks in modern AI clusters.

Episode Details
Duration: 31:04
Pipeline: V4
TTS Engine: LLM

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

In the latest installment of My Weird Prompts, hosts Herman and Corn Poppleberry take a nostalgic trip down memory lane that quickly pivots into a sobering discussion about the future of hardware security. The episode begins with a reflection on the "coil whine" and electrical hums of early desktop computers—sounds that most users dismissed as mere background noise, but which Herman identifies as the "physical manifestation of logic." In the world of 2026, where AI clusters consume enough power to light up entire zip codes, these physical leaks have evolved from minor annoyances into significant security frontiers.

The Physics of the Leak

The core of the discussion centers on "side-channel attacks." Unlike traditional hacking, which attempts to find flaws in mathematical algorithms or software code, a side-channel attack targets the physical implementation of that math. Herman uses the analogy of a high-tech safe: while a traditional hacker tries to guess the combination, a side-channel attacker puts a stethoscope to the door to listen for the clicks of the tumblers. In the context of a modern GPU or CPU, every flip of a transistor dissipates heat or creates a microscopic electromagnetic pulse. When billions of these events occur in sync, the resulting "noise" becomes a readable signal for those with the right tools.
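Herman's stethoscope analogy has a direct software analogue: a comparison loop that exits at the first mismatched byte runs longer the more of the guess is correct. The following Python sketch is illustrative only (the secret and guesses are invented for the demo), not something from the episode:

```python
import hmac
import time

def naive_compare(secret: bytes, guess: bytes) -> bool:
    """Early-exit comparison: runtime grows with the matching prefix."""
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False  # bails out sooner when the mismatch comes earlier
    return True

def timed(fn, *args, reps=10_000) -> float:
    """Total wall-clock time for `reps` calls of fn."""
    start = time.perf_counter()
    for _ in range(reps):
        fn(*args)
    return time.perf_counter() - start

secret = b"hunter2!"
# A guess that is wrong in the first byte returns almost immediately; one
# that is wrong only in the sixth byte runs five extra loop iterations.
# Averaged over many calls, that gap lets an attacker recover the secret
# one byte at a time.
t_wrong_first = timed(naive_compare, secret, b"xunter2!")
t_wrong_late = timed(naive_compare, secret, b"huntex2!")

# The standard defense: a comparison whose runtime does not depend on
# where the bytes differ.
safe = hmac.compare_digest(secret, b"huntex2!")
```

Python's `hmac.compare_digest` exists precisely to close this leak; the timing gap of the naive version is tiny per call, which is why attackers average over many repetitions.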

From Academic Party Tricks to Reality

The brothers discuss the pioneering work of researchers like Mordechai Guri’s team at Ben-Gurion University. This group has demonstrated "Mission Impossible" style data extraction methods, such as "Fansmitter," which manipulates cooling fan speeds to broadcast data via acoustic frequencies, and "BitWhisper," which uses thermal fluctuations to allow two air-gapped computers to communicate. They even touched on "Air-ViBeR," a method of sending data through the vibrations of a desk, picked up by a nearby smartphone’s accelerometer.
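The modulation trick behind Fansmitter is ordinary binary keying: hold the fan at one of two speeds per bit and let a nearby microphone demodulate the tone shifts. The sketch below shows only the encoding scheme; real fan control is platform-specific, and the RPM values and dwell time are illustrative guesses, not the published parameters:

```python
# Two fan speeds that produce acoustically distinguishable tones, and how
# long each bit is held. Values are invented for illustration; Guri et
# al. report bit rates on the order of single bits per second.
LOW_RPM, HIGH_RPM = 1000, 1600
DWELL_SECONDS = 1.0

def modulate(payload: bytes) -> list[tuple[int, float]]:
    """Turn bytes into a schedule of (target_rpm, duration) fan steps."""
    schedule = []
    for byte in payload:
        for i in range(8):  # most significant bit first
            bit = (byte >> (7 - i)) & 1
            schedule.append((HIGH_RPM if bit else LOW_RPM, DWELL_SECONDS))
    return schedule

def demodulate(schedule: list[tuple[int, float]]) -> bytes:
    """Recover bytes from observed RPM steps (noise-free illustration)."""
    midpoint = (LOW_RPM + HIGH_RPM) // 2
    bits = [1 if rpm >= midpoint else 0 for rpm, _ in schedule]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)
```

A real receiver would be doing signal processing on microphone samples rather than reading clean RPM values, which is exactly why the achievable bit rates are so low.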

However, Herman is quick to distinguish between these "party tricks" and the threats facing modern data centers. In a Tier Four data center, the sheer volume of ambient noise—thousands of screaming fans and massive industrial cooling systems—creates a "noise floor" so high that acoustic or vibrational attacks are nearly impossible for a remote attacker. For giants like AWS or Google, the physical security and environmental noise act as a natural shield against these specific localized exploits.

The Modern Battleground: Power and Timing

The real danger in 2026, according to Herman, lies in software-based side channels. The clusters serving AI models like Claude and GPT-5 draw enormous power, sometimes 100 kilowatts per rack, and switching that much current creates distinct electromagnetic and power signatures. Attackers no longer need physical access to a motherboard to measure these signals; they can often do it through software itself.

Herman highlights the "PLATYPUS" attack as a prime example. By exploiting power management features intended to help developers optimize energy efficiency, researchers found they could monitor power consumption with such precision that they could recover cryptographic keys from supposedly secure "Trusted Execution Environments." Even when hardware vendors attempted to "fuzz" this data with artificial noise, attackers pivoted to "Hertzbleed." This exploit turns dynamic frequency scaling—the way a chip speeds up or slows down to manage heat—into a timing side-channel. Because the time it takes for a chip to change its clock speed can depend on the data being processed, an attacker can infer sensitive information simply by measuring how long a calculation takes.
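The interface PLATYPUS abused is exposed on Linux through the powercap sysfs tree. A hedged sketch of reading it follows; the path shown is the common Intel RAPL package-0 domain, and on post-PLATYPUS kernels the counter is readable only by root, which is itself part of the mitigation:

```python
# Intel RAPL package-0 power domain as exposed by the Linux powercap
# driver. The energy counter is a monotonically increasing microjoule
# value that wraps around at max_energy_range_uj.
RAPL_DIR = "/sys/class/powercap/intel-rapl:0"

def read_counter(name: str, path: str = RAPL_DIR) -> int:
    """Read one integer counter file, e.g. energy_uj (needs privileges
    on recent kernels)."""
    with open(f"{path}/{name}") as f:
        return int(f.read())

def energy_delta_uj(before: int, after: int, max_range_uj: int) -> int:
    """Microjoules consumed between two reads, handling counter wrap."""
    if after >= before:
        return after - before
    return (max_range_uj - before) + after
```

The PLATYPUS result was that deltas like these, sampled rapidly around a victim's operations, were precise enough to act as a software power probe; the kernel's response was to restrict unprivileged access and coarsen the readings.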

The "Noisy Neighbor" in the Cloud

The episode concludes with a warning about the "noisy neighbor" problem in cloud computing. In a shared environment, multiple users often run processes on the same physical silicon. Even if a hypervisor perfectly isolates the memory of a secure AI model, that model still shares caches, execution units, and power delivery systems with other processes.

Herman argues that "Micro-architectural Side Channels" are the active battleground of 2026. By running a malicious process alongside a secure one, an attacker can "listen" to the heartbeat of a computation. They aren't breaking the encryption; they are feeling the ripples the computation leaves in the shared hardware. As we push toward chips with features measured in angstroms, these physical leaks only become more pronounced. The takeaway is clear: in the digital age, physics is the ultimate "leaky" variable that no amount of pure mathematics can fully contain.

Downloads

Episode Audio: the full episode as an MP3 file
Transcript (TXT): plain text transcript file
Transcript (PDF): formatted PDF with styling

Episode #679: The Sound of Secrets: Side-Channel Attacks in AI Clusters

Daniel's Prompt
Daniel
In a previous episode, we discussed how closed-source models like Anthropic’s Claude can be hosted by third-party providers like AWS Bedrock using Trusted Execution Environments and computer enclaves to protect their IP. I’d like to explore the topic of side-channel attacks—extracting sensitive information by observing indirect physical signals, such as electrical oscillations or fan noises, rather than breaking the algorithm itself. Is this a credible threat vector for data center operators in 2026, or is it still mostly a theoretical risk?
Corn
You know, Herman, I was thinking about the sound our old desktop used to make back in the day. That high-pitched whine whenever it was processing a big file or loading a game. It almost felt like the computer was straining, right? Like you could hear it thinking. I remember sitting there in the dark, waiting for a level to load in Myst, and I could actually tell when the CD-ROM drive was about to spin down just by the change in the electrical hum coming from the speakers.
Herman
Oh, I remember that. It was the coil whine from the graphics card and the vibrating capacitors. Most people just found it annoying, but for a certain type of person, that sound is actually a goldmine of information. It is essentially the computer talking in a language it does not even realize it is speaking. It is the physical manifestation of logic. Every time a transistor flips, a tiny amount of energy is dissipated as heat, or it creates a microscopic electromagnetic pulse. When you have billions of those happening in a synchronized dance, the "noise" becomes a signal.
Corn
Well, that is the perfect lead-in for today. Welcome back to My Weird Prompts, everyone. I am Corn, and I am joined, as always, by my brother, who I am pretty sure can translate capacitor squeals into C plus plus code. We are coming to you on February eighteenth, twenty twenty-six, and the world of hardware is getting weirder by the second.
Herman
Herman Poppleberry here, at your service. And while I cannot quite translate it in real-time, I have certainly spent enough time reading the research to know that those squeals are far more dangerous than they sound. Especially now that we are packing more compute power into a single rack than we used to have in an entire zip code.
Corn
We have got a deep one today. Daniel's prompt this time covers side-channel attacks. Specifically, he wants to know if extracting sensitive information by observing physical signals like electrical oscillations or fan noises is a credible threat for data center operators here in twenty twenty-six, or if it is still just a cool party trick for academic researchers. He is looking at this through the lens of the massive AI clusters we are seeing everywhere now.
Herman
It is such a great follow-up to our discussion on trusted execution environments and those secure enclaves. We talked about how companies like Anthropic use things like AWS Bedrock to keep their model weights secret, even from the cloud provider. But Daniel is asking the million-dollar question: what if you do not need to break the digital lock if you can just watch the tumblers move through the keyhole? What if the "secure" room is leaking light under the door?
Corn
Right, and that is what a side-channel attack is at its core, right? It is not attacking the math. You are not trying to find a flaw in the AES two hundred fifty-six encryption or the neural network architecture. You are looking at the implementation of that math in the physical world. It is the difference between trying to solve a puzzle and just looking at the reflection of the pieces in the solver's glasses.
Herman
Exactly. Think of it like a safe. A traditional hacker is trying to guess the combination or find a flaw in the lock mechanism. A side-channel attacker is putting a stethoscope to the door and listening for the clicks. Or maybe they are measuring how much heat the safe radiates when you turn the dial. The math might be perfect, but the hardware is a physical object that obeys the laws of physics, and physics is very, very leaky. In twenty twenty-six, we are dealing with chips that have features measured in angstroms. The smaller and faster we get, the more these physical "leaks" become pronounced.
Corn
Daniel mentioned some of the research coming out of Israel, which we should probably touch on because some of that stuff sounds like it is straight out of a Mission Impossible movie. There is a group at Ben-Gurion University that has spent years finding the weirdest ways to get data out of air-gapped computers.
Herman
Oh, Mordechai Guri’s team. They are legendary in this space. They have demonstrated things like "Fansmitter," where they control the speed of the cooling fans to create specific acoustic frequencies. Basically, they turn the fan into a speaker that broadcasts data in the form of sound waves that a nearby phone can pick up. They even did "BitWhisper," which uses the thermal sensors and the heat generated by the CPU to communicate between two air-gapped computers sitting next to each other. One computer "talks" by heating up, and the other "listens" by measuring its own temperature fluctuations.
Corn
And they did one called "Air-ViBeR" too, right? Where they used the vibrations of the fans to send data through the table the computer was sitting on. You could have a smartphone on the same desk using its accelerometer to "feel" the data being sent. It is like digital Morse code through the furniture.
Herman
It is brilliant and terrifying. They even did "LED-it-GO," where they used the hard drive activity light to flicker out data at high speeds. But the big question Daniel is asking is: does this matter for a massive data center? I mean, if you are AWS or Google, you have thousands of servers in a room. The noise floor must be astronomical. It is not just one fan; it is ten thousand fans and a cooling system that sounds like a jet engine.
Corn
That is what I was thinking. If I am an attacker, how am I going to hear one specific fan in a sea of ten thousand fans screaming at sixty decibels? It feels like trying to hear a specific person whisper in the middle of a sold-out football stadium. Is that really a threat to a model like Claude or GPT-five?
Herman
That is the primary argument for why these specific acoustic or vibrational attacks are often seen as "theoretical" for the cloud. In a data center, you have massive industrial cooling systems, humongous power supplies, and rows upon rows of identical hardware. The signal-to-noise ratio is incredibly low. Plus, you have physical security. You cannot just walk into a Tier Four data center with a parabolic microphone and start recording. The "air-gap" attacks Guri's team does usually assume you have already compromised the machine with malware and just need a way to get the data out.
Corn
So, for the physical side, like sound and vibration, you are saying it is mostly a localized threat? Like, if I have a disgruntled employee with access to the rack, or if I am trying to jump an air-gap in a private lab?
Herman
Mostly, yes. But here is where it gets interesting for twenty twenty-six. We are not just talking about fans anymore. As we have moved toward these massive AI clusters, the power consumption is unprecedented. We are seeing racks that pull a hundred kilowatts of power. When you are switching that much current at the speeds required for a model like Claude or GPT-five, you create massive electromagnetic signatures. And those signatures can travel.
Corn
Okay, so you are talking about "TEMPEST" style attacks? Looking at the electromagnetic radiation coming off the power lines or the processors themselves?
Herman
Precisely. And while the noise in a data center is high, the signals we are looking for are very specific. There is a concept called "Differential Power Analysis." Even if you cannot see the individual bits, you can see the power spikes when a processor performs a specific operation. If an AI model is processing a specific prompt, the sequence of "multiply-accumulate" operations creates a power signature. In twenty twenty-four and twenty twenty-five, researchers showed that you could actually distinguish between different layers of a neural network just by looking at the power draw.
Corn
But wait, how does an attacker get onto the power rail of a secure server in a cloud data center? That still requires physical access, right? You would have to clip a probe onto the motherboard.
Herman
Not necessarily. And this is the "aha" moment for side-channels in the modern era. We have discovered that you can often measure these physical properties through software. This is what we call a "software-based side-channel."
Corn
Wait, how do you measure power consumption through software? I thought that was a hardware thing.
Herman
Most modern CPUs and GPUs have power management features that allow the operating system to monitor energy usage. There was a famous attack called "PLATYPUS" a few years ago. It used the "Running Average Power Limit" interface in Intel processors. This was a feature meant to help developers make their code more energy-efficient. But researchers found that the power readings were so precise—down to the microjoule—that you could use them to recover cryptographic keys from inside a Trusted Execution Environment. You do not need a voltmeter if the CPU is literally telling you how much power it is using every millisecond.
Corn
So the very tool meant to help you optimize your code becomes a window for an attacker to see what the hardware is doing. That is wild. Does that still work in twenty twenty-six?
Herman
The hardware vendors have tried to patch it by "fuzzing" the data—basically adding artificial noise to the power readings so they are not precise enough to be used for an attack. But then attackers found "Hertzbleed." This one was a real mind-blower because it turned a performance feature into a security nightmare.
Corn
I remember the name. That had to do with frequency scaling, right? The way the chip speeds up or slows down?
Herman
Yes! Most modern chips use "Dynamic Voltage and Frequency Scaling" to save power. When the processor is doing heavy work, it clocks up. When it is idle, it clocks down. The researchers realized that for certain operations, the time it takes for the frequency to change depends on the data being processed. So, by measuring how long a calculation takes, you can infer information about the data, even if you are blocked from seeing the data itself. It turns a "power" side-channel into a "timing" side-channel.
Corn
So, even if I am in a "Secure Enclave" where the memory is encrypted and the provider cannot see my code, the fact that the chip is getting slightly warmer or changing its clock speed is visible to other processes on the same machine?
Herman
Exactly. This is the "noisy neighbor" problem on steroids. In a cloud environment, you are often sharing a physical CPU with other users. Even if the hypervisor perfectly isolates your memory, you are still sharing the same silicon. You are sharing the same caches, the same execution units, and the same power delivery system. If I am running a malicious process on the same chip as your secure enclave, I can "listen" to the heartbeat of your computation by seeing how it affects the resources we both use.
Corn
Okay, so let's bring this back to Daniel's question about the credibility of the threat. If I am a data center operator in twenty twenty-six, am I actually worried about someone extracting Claude's model weights using fan noise?
Herman
Fan noise? Probably not. The physical isolation and the ambient noise of the data center make that nearly impossible for a remote attacker. But am I worried about "Micro-architectural Side Channels"? Absolutely. That is the real battleground. This is where the threat is not just credible; it is active.
Corn
Define that for me. What makes it "micro-architectural"?
Herman
It means you are looking at the tiny components inside the chip. The branch predictors, the L-one and L-two caches, the instruction pipelines. We have seen a constant stream of these attacks—Spectre and Meltdown were the big ones that started the craze back in twenty eighteen, but it has not stopped. Every year, we find a new way that one process can "feel" what another process is doing by seeing how it affects the shared hardware. In twenty twenty-three, we had "GPU dot zip," which showed that data compression in modern GPUs could leak visual information from a browser.
Corn
So, if I am running a massive AI model, and I am worried about my intellectual property, the threat is not a guy with a microphone outside the building. The threat is another virtual machine running on the same physical chip that is carefully timing how long it takes to access the cache.
Herman
Precisely. And in twenty twenty-six, the stakes are higher because we are using specialized hardware. We are using massive GPU clusters and "TPUs" or Tensor Processing Units. These chips are designed for one thing: massive matrix multiplication. Because they are so specialized, their power and timing signatures are very distinct. If I know you are running a transformer model, I know exactly what the "heartbeat" of that computation looks like. If I can see even a tiny bit of that signature through a side channel, I might be able to figure out the specific parameters of the model you are running.
Corn
This feels like one of those things where the defense is almost harder than the attack. If the "leak" is a fundamental property of how electricity moves through silicon, how do you even stop that? You cannot just tell the electricity to stop being electrical.
Herman
It is incredibly difficult. You have a few options. One is "Constant Time Programming." You write your code so that every operation takes exactly the same amount of time, regardless of the data. No "if-then" statements that change the execution path. But that is incredibly slow and hard to do for something as complex as a large language model. Imagine trying to run a trillion-parameter model where every single calculation has to wait for the slowest possible outcome just to stay synchronized.
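The constant-time discipline Herman describes shows up in miniature in a branch-free comparison: the loop never exits early, so runtime does not depend on where the data differs. A Python illustration follows; note that an interpreter gives no hard timing guarantees, so production code should rely on a vetted primitive such as `hmac.compare_digest`:

```python
def ct_equal(a: bytes, b: bytes) -> bool:
    """Constant-time-style comparison: no early exit on the data.

    Instead of branching at the first mismatch, differences are
    accumulated with XOR/OR and checked once at the end, so the loop
    always runs over every byte. (The length check still returns early;
    lengths are normally considered public.)
    """
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

The same idea, applied to a trillion-parameter model, is what makes fully constant-time inference so costly: every data-dependent shortcut has to be padded out to its worst case.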
Corn
Right, you would basically be throwing away all the optimizations that make modern AI possible. We would be back to the speeds of twenty twenty-one.
Herman
Exactly. The other option is "Hardware Partitioning." This is what we are seeing more of in twenty twenty-six. Instead of just "Virtual Machines," cloud providers are offering "Bare Metal" instances or "Air-Gapped Racks" where you are the only tenant on that physical hardware. If there are no "neighbors" on the chip, there is no one to listen to the side channel. This is what the big players like Anthropic are demanding now for their most sensitive training runs.
Corn
But that is expensive. The whole point of the cloud is the efficiency of sharing resources. If everyone needs their own physical island of silicon, the costs go through the roof.
Herman
That is the trade-off. Security versus cost. For someone like Anthropic or OpenAI, paying for physical isolation is a no-brainer to protect their core IP. But for a smaller company using a third-party provider, they might be taking a calculated risk. They are betting that the "noise" of the data center is enough to hide their "signal."
Corn
Let's talk about the "Misconception Busting" aspect of this. I think a lot of people hear "side-channel attack" and they think of a hacker in a hoodie with a soldering iron. But what you are describing is much more abstract. It is more like a data scientist with a PhD in statistics.
Herman
You are right. The biggest misconception is that side-channel attacks require physical proximity. In the early two thousands, that was mostly true. You needed an oscilloscope and a probe on the motherboard. But today, the most dangerous side channels are "Remote Side Channels." If I can send a network packet to a server and measure exactly how many microseconds it takes to respond, I am performing a side-channel attack.
Corn
Wait, really? Just a simple "ping" or a web request can be a side-channel?
Herman
Oh, absolutely. There was a classic attack where researchers could figure out the private key of a web server just by measuring the tiny variations in how long it took the server to perform the RSA decryption during the handshake. We are talking about differences of nanoseconds. But if you send enough requests—we are talking millions of requests—the statistical noise clears up, and the key emerges. It is like listening to a leaky faucet in a thunderstorm. If you listen long enough, you can figure out the rhythm of the drips.
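The "listen long enough" point is plain averaging statistics: the standard error of a mean shrinks as one over the square root of the sample count, so a nanosecond-scale signal buried in much larger jitter surfaces after enough requests. A simulated Python illustration, with all numbers invented for the demo:

```python
import random
import statistics

def observe(true_ns: float, noise_ns: float = 500.0) -> float:
    """One timing measurement: a data-dependent mean plus heavy jitter."""
    return random.gauss(true_ns, noise_ns)

def estimate(true_ns: float, samples: int) -> float:
    """Average many observations; standard error shrinks as 1/sqrt(n)."""
    return statistics.fmean(observe(true_ns) for _ in range(samples))

random.seed(1)
# Two code paths differ by only 5 ns, buried under 500 ns of jitter.
# A single sample is useless; half a million samples resolve the gap,
# because the standard error drops to well under a nanosecond.
fast = estimate(1000.0, 500_000)
slow = estimate(1005.0, 500_000)
```

This is why rate limits matter as a side-channel defense: the attack budget is measured in samples, and anything that caps or degrades sampling pushes the required observation time out.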
Corn
That is the part that always blows my mind. The "statistical" part. You do not need to get it right the first time. You just need to observe it ten thousand times and average out the results.
Herman
Exactly. It is all about "Signal Processing." And in twenty twenty-six, attackers have their own AI models to help them analyze the noise. If you have a noisy power signature from a data center, you can train a neural network to filter out the background noise and find the specific patterns of a cryptographic operation. We are using AI to attack the hardware that runs AI. It is a bit of a "snake eating its own tail" situation.
Corn
So, let's look at the downstream implications. If side-channel attacks are a credible threat, does that mean the whole idea of "Confidential Computing" in the cloud is a lie? Are these "Secure Enclaves" actually secure?
Herman
I would not say it is a lie, but it is an "arms race." A Trusted Execution Environment like Intel SGX or AMD SEV provides a very high level of protection against traditional attacks. It encrypts the memory so even if the cloud provider's administrator tries to look at it, they see gibberish. That is a massive win. But it does not hide the "metabolic rate" of the processor. It does not hide the fact that the chip is working hard.
Corn
It is like having a soundproof room with a glass window. You cannot hear what the people inside are saying, but you can see them gesturing and moving around. You can see how many people are in the room and how fast they are moving. You can infer a lot from that.
Herman
That is a perfect analogy. And for some high-value targets, that inference is enough. If I am a nation-state actor and I want to know if a specific company is training a model on a certain type of data, I might not need the exact weights. I just need to see the "shape" of the computation to confirm my suspicions. In twenty twenty-five, there was a paper showing that you could identify which specific "fine-tuning" dataset was being used just by looking at the memory access patterns of the GPU.
Corn
So, for Daniel's question, is this a credible threat for data center operators?
Herman
It is credible enough that the major providers are spending billions on it. If you look at the white papers from AWS, Azure, and Google Cloud, they are obsessed with "side-channel mitigation." They are constantly updating their hypervisors to flush the caches between users. They are disabling features like "Simultaneous Multithreading"—what Intel calls Hyper-Threading—because it is a notorious source of side-channel leaks.
Corn
Wait, they are disabling Hyper-Threading? That is a huge performance hit! That is like taking a four-lane highway and closing two lanes for "security."
Herman
It is. For some high-security workloads, they essentially turn off half the logical cores of the processor to prevent one process from spying on its "twin" on the same physical core. That tells you exactly how seriously they take this. They are willing to sacrifice thirty percent of their compute power just to close a side channel. You do not make that kind of sacrifice for a "theoretical" risk.
Corn
That is a great data point. You do not sacrifice thirty percent of your product's performance for a "theoretical" risk. You do it because you know the risk is real and your customers are demanding protection.
Herman
Exactly. Now, for the average developer building a CRUD app or a simple website, side-channel attacks are probably not on their threat model. The effort required to pull off an attack is too high for the reward. But if you are handling billions of dollars in crypto-assets, or if you are hosting the "crown jewels" of a trillion-dollar AI company, side-channels are at the very top of your list.
Corn
It feels like we are moving toward a world where "hardware diversity" becomes a security feature. If every server in a data center is identical, the side-channel signature is the same everywhere. But if you have a mix of different architectures, it becomes much harder for an attacker to build a reliable model of the "leakage."
Herman
That is an interesting thought. Although, from an operational standpoint, data center managers hate diversity. They want everything to be "homogenous" so it is easier to manage and replace. But you are right—predictability is the enemy of security. If I know exactly what an H-two hundred GPU looks like when it is doing a matrix multiply, I can attack any H-two hundred in the world.
Corn
Let's do a quick thought experiment. Imagine it is five years from now, twenty thirty-one. We have "Quantum Side Channels." Is that a thing?
Herman
Oh, do not even get me started. Quantum computers have their own set of physical leakages. In fact, some of the most successful attacks on early quantum prototypes have been side-channel attacks—measuring the magnetic fields or the temperatures required to keep the qubits stable. But let's stay in twenty twenty-six for now. The "hot" topic right now is "Optical Side Channels."
Corn
Optical? Like, looking at the blinking lights?
Herman
Not just the lights. There was a paper recently where researchers used a high-speed camera to watch the "power LED" on a set of speakers. They found that the minute fluctuations in the brightness of the LED were correlated with the sound being played by the speakers. They could actually reconstruct the audio from a video of the LED.
Corn
No way. That is insane. You could "hear" a room just by looking at a light through a window?
Herman
Exactly. Now, apply that to a data center. Most servers have a "Status" or "Activity" LED. If you have a camera in the room—maybe a security camera that has been compromised—you could potentially use the flickering of those LEDs to exfiltrate data. Even the "liquid cooling" systems we are seeing in twenty twenty-six could be a side channel. The flow rate of the coolant or the vibration of the pumps changes based on the thermal load of the CPU. If you can measure the "pulse" of the cooling system, you can measure the "pulse" of the data.
Corn
It really makes you realize that "Information Theory" is not just about bits and bytes. It is about the flow of energy. Any time energy is transformed, information is leaked. It is a law of the universe.
Herman
That is deep, Corn. And it is fundamentally true. The second law of thermodynamics basically says you cannot do anything without creating "waste" in the form of heat or disorder. And that waste always carries a "memory" of the process that created it. Side-channel attacks are just the art of reading that memory.
Corn
So, what are the practical takeaways for our listeners? If I am a CTO or a lead architect, and I am listening to this, should I be panicking?
Herman
No panic necessary. But you should be "Side-Channel Aware." First, if you are using the cloud, understand what "Isolation Guarantees" your provider is actually giving you. Are you on a shared instance? Are you in a TEE? If so, what is their policy on flushing caches or disabling Hyper-Threading? If you are on a "multi-tenant" GPU, you are much more vulnerable than if you are on a dedicated instance.
Corn
Second, if you are writing cryptographic code or handling extremely sensitive data, do not try to roll your own. Use well-vetted libraries that are specifically designed to be "constant-time." The people who write those libraries are specialists who spend their lives thinking about these nanosecond leaks.
Herman
And third, think about "Physical Security" even for your "digital" assets. If you are running your own hardware, the layout of your racks, the shielding of your cables, and even the "acoustic treatment" of your server room can make a difference. In twenty twenty-six, we are seeing some high-security facilities actually using "white noise" generators inside the server racks to mask the acoustic and electromagnetic signatures.
Corn
I also think there is a takeaway for AI developers. If you are deploying a model into a "Trusted Environment," remember that the "Trust" is not absolute. If your model is a "Black Box" that people can query, they might be able to use "Inference Side Channels"—just looking at the time it takes for the model to respond to different prompts to figure out how it is built.
Herman
Oh, that is a great point. "Timing Attacks" on AI inference are a huge area of research right now. If a certain type of prompt triggers a specific branch in your neural network that takes longer to compute, I can use that to map out the structure of your model. It is like playing twenty questions with the hardware.
Corn
So, rate-limiting and adding a bit of "jitter" to your response times might actually be a security feature, not just a way to manage traffic. You are intentionally making the "window" blurry so the attacker cannot see the tumblers moving.
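The jitter idea Corn describes can be sketched as a wrapper that pads each response with a random delay, so observed latency no longer tracks the data-dependent compute time directly. An illustrative Python sketch; the wrapper name and delay bound are made up for the example:

```python
import random
import time

def with_jitter(handler, max_jitter_s: float = 0.005):
    """Wrap a request handler with a uniform random delay.

    The handler's result is unchanged; only the response timing is
    blurred, raising the number of samples an attacker must average
    to recover the underlying compute-time signal.
    """
    def wrapped(*args, **kwargs):
        result = handler(*args, **kwargs)
        time.sleep(random.uniform(0.0, max_jitter_s))
        return result
    return wrapped
```

Jitter only increases the attacker's required sample count rather than eliminating the channel; quantizing responses into fixed-duration buckets is the stronger variant of the same idea.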
Herman
Exactly. By intentionally making your system a little bit "noisier" and less predictable, you make it much harder for an attacker to find the signal. It is the "security through chaos" approach.
Corn
Well, Herman, I think we have thoroughly explored the "leaky" world of side-channels. It is fascinating to think that even in this ultra-digital age, the physical world still has a way of poking its head in. We try so hard to live in the world of pure logic, but we are still tethered to the world of copper and silicon.
Herman
It always does. We are biological machines living in a physical universe. No matter how much we try to abstract things into ones and zeros, the "hardware" always matters. Physics is the ultimate root of trust, but it is also the ultimate leak.
Corn
Definitely. And hey, if you have been enjoying our deep dives into the weird and wonderful world of tech, we would really appreciate it if you could leave us a review on Spotify or Apple Podcasts. It really helps the show reach new people who might be interested in hearing about capacitor squeals and flickering LEDs. We are trying to grow the community of "weird prompt" enthusiasts.
Herman
Yeah, it genuinely makes a difference. We love seeing those reviews come in. It is the only "side-channel" we have to know if you guys are actually enjoying the show!
Corn
You can find us at myweirdprompts dot com, where we have our full archive of episodes—we are up to six hundred seventy-nine now, which is just wild. There is an RSS feed there if you want to subscribe, and a contact form if you want to send us your own thoughts on the show. We have got transcripts for everything too, if you want to read along.
Herman
And you can always reach us directly at show at myweirdprompts dot com. We would love to hear if any of you have ever actually "heard" a side-channel in the wild, or if you have a prompt that is even weirder than this one.
Corn
Thanks again to Daniel for the prompt. It really opened up a rabbit hole we did not expect to go down today. This has been My Weird Prompts.
Herman
See you next time! Keep your fans quiet and your caches flushed!

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.