Hey everyone, welcome back to My Weird Prompts. We are coming to you from a very sunny Jerusalem today. I am Corn, and I am here with my brother, the man who probably has more audio plugins on his phone than actual contacts.
Herman Poppleberry, at your service. And you are not wrong, Corn. I actually spent about three hours yesterday just testing different compression ratios on a recording of the wind. It is a sickness, I know, but it is a fun one.
Well, your obsession is actually going to be very useful today because our housemate Daniel sent us a prompt that is right in your wheelhouse. He has been recording these prompts while watching his son, Ezra, which I think is just the coolest way to multitask. But he is using a Bluetooth headset and is starting to get curious about the production pipeline.
I loved hearing little Ezra in the background of that prompt. It adds a nice bit of texture, honestly. But Daniel is asking the big questions here. He wants to know about Android-based or cloud-based tools for light audio post-production. Specifically EQ, de-essing, and that gentle silence removal to tighten things up. And then he hit on the million-dollar question: does better audio input actually lead to better AI responses?
It is a fascinating thought. We always talk about prompt engineering in terms of the words we choose, but if the input is audio, does the engineering start at the microphone?
It absolutely does. But before we get into the heavy AI theory, let us tackle the practical stuff first. Daniel mentioned he is using an app called ASR for recording. For anyone not familiar, ASR is a fantastic Android app. It stands for Almighty Sound Recorder, and it is a staple for a reason. It gives you a lot of control over formats and bitrates right at the source. But as Daniel noticed, it is mostly a recorder, not a full-blown post-production suite.
Right, he is looking for that next step. That little bit of polish before it hits the AI model or the podcast feed. So, Herman, if you are stuck on an Android device and you want to do EQ and de-essing, where do you go?
If you want to stay on the device, there are a few heavy hitters. One that has been around forever but is still incredibly solid is WavePad. It looks a bit like a desktop program from the early two thousands, but it is powerful. It has full parametric equalization and noise reduction. But for what Daniel is describing, I actually think he should look at AudioLab.
AudioLab. I have seen that one. It has a very modular feel, right?
Exactly. It is like a Swiss Army knife for audio on Android. It has a dedicated noise remover, a silence remover, and an equalizer. The silence removal in AudioLab is actually quite customizable, which addresses Daniel's concern about sounding manic. You can set the threshold so it only clips out the true dead air without eating into the natural cadence of your speech.
That is the tricky part with automated silence removal. If you set it too aggressively, you lose the breath of the conversation. It starts to feel like a jump-cut YouTube video where the person never inhales.
Precisely. And Daniel mentioned that specifically. He said when he tried some automated truncation, it made him sound like he was speaking at a frantic pace. That usually happens because the software is cutting the milliseconds between words, not just the seconds between sentences. You want a tool that lets you define what constitutes silence. Usually, anything quieter than about minus thirty decibels relative to full scale, lasting more than five hundred milliseconds, is a safe place to start.
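For listeners who want to try those numbers themselves, here is a minimal sketch using the Python library pydub (not one of the Android apps discussed); it assumes ffmpeg is installed, the filenames are placeholders, and the minus thirty dBFS and five hundred millisecond values are just the starting point from the conversation.

```python
# Minimal silence-trimming sketch with pydub; values mirror the episode's
# suggested starting point and the filenames are placeholders.
from pydub import AudioSegment
from pydub.silence import split_on_silence

audio = AudioSegment.from_file("prompt_raw.wav")

chunks = split_on_silence(
    audio,
    min_silence_len=500,   # only pauses longer than 500 ms count as silence
    silence_thresh=-30,    # anything quieter than -30 dBFS is treated as silence
    keep_silence=250,      # keep 250 ms of padding so speech does not feel clipped
)

tightened = sum(chunks[1:], chunks[0]) if chunks else audio
tightened.export("prompt_tightened.wav", format="wav")
```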
What about de-essing? That is a pretty specific request. For those who do not know, de-essing is just reducing those harsh s sounds, or sibilance, that can be really piercing, especially on cheaper microphones or Bluetooth headsets.
De-essing on a mobile app is actually pretty rare to find as a dedicated, high-quality feature. Most mobile equalizers are too broad for it. You really need a dynamic equalizer or a dedicated de-esser that targets the five to eight kilohertz range only when those frequencies spike. This is where I think Daniel might want to look at the cloud instead of a native Android app.
That makes sense. The processing power required to do high-quality, transparent de-essing is actually quite significant if you want it to look at the waveform in real-time.
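To make the dynamic, band-limited idea concrete, here is a rough Python sketch with scipy and soundfile rather than any of the apps mentioned; the five to eight kilohertz band comes from the conversation, while the threshold, reduction amount, and frame size are illustrative assumptions, and a real de-esser would be far smoother than this frame-by-frame version.

```python
# Toy de-esser: attenuate 5-8 kHz only in frames where that band spikes.
# Threshold and reduction values are illustrative, not tuned recommendations.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

def deess(path_in, path_out, low=5000, high=8000,
          threshold_db=-30.0, reduction_db=6.0, frame=1024):
    audio, sr = sf.read(path_in)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)                     # mix to mono for simplicity
    sos = butter(4, [low, high], btype="band", fs=sr, output="sos")
    sibilance = sosfilt(sos, audio)                    # isolate the sibilant band
    out = audio.copy()
    gain = 10 ** (-reduction_db / 20)                  # e.g. 6 dB of reduction
    for start in range(0, len(audio), frame):
        band = sibilance[start:start + frame]
        rms = np.sqrt(np.mean(band ** 2) + 1e-12)
        if 20 * np.log10(rms) > threshold_db:          # the band is spiking here
            out[start:start + frame] = audio[start:start + frame] - band * (1 - gain)
    sf.write(path_out, out, sr)

deess("prompt_tightened.wav", "prompt_deessed.wav")
```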
Exactly. And this leads us to what I consider the gold standard for what Daniel is asking for: Auphonic. They have an Android app called Auphonic Edit, but the real magic happens when you send that file to their web service.
I remember we used Auphonic back in the early days of the show. It is essentially an AI-powered sound engineer in a box.
It really is. You upload your raw file from ASR, and you can tell Auphonic to do everything Daniel mentioned in one go. It has an Adaptive Leveler that balances the volume so you are not constantly reaching for the knob. It has an AutoEQ that identifies problematic frequencies. And most importantly, it has a very sophisticated noise and hum reduction system.
Does it handle de-essing?
It does. It is part of their filtering algorithm. It looks for sibilance and plosives—those p pops—and smooths them out without making you sound like you have a lisp. And their silence removal is some of the best in the business because it uses a speech recognition layer to understand where the words are, so it does not cut you off mid-thought.
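For anyone who wants to script that hand-off instead of using the website, here is a hedged Python sketch of starting an Auphonic production; the endpoint and field names follow Auphonic's simple API as I understand it, so double-check their current documentation, and the credentials, preset UUID, and filenames are placeholders. The leveler, AutoEQ, noise reduction, and silence cutting would all be configured in the preset you create in the web interface.

```python
# Hedged sketch: kick off an Auphonic production via their simple API.
# Field names and response shape should be verified against Auphonic's docs.
import requests

AUPHONIC_USER = "daniel"             # placeholder credentials
AUPHONIC_PASS = "app-password"
PRESET_UUID = "your-preset-uuid"     # preset created in the Auphonic web UI

with open("prompt_tightened.wav", "rb") as f:
    resp = requests.post(
        "https://auphonic.com/api/simple/productions.json",
        auth=(AUPHONIC_USER, AUPHONIC_PASS),
        data={"preset": PRESET_UUID, "title": "Weird prompt", "action": "start"},
        files={"input_file": f},
    )

resp.raise_for_status()
print(resp.json().get("data", {}).get("uuid"))   # production id to poll for status
```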
That sounds perfect for a workflow where you are recording on the go. But I want to pivot to the second part of Daniel's question because I think this is where the real weird prompt energy is. He is curious if providing better quality audio as an input would result in better responses from the AI model. He is specifically thinking about Gemini three, which is what we use to help process these episodes.
This is such a deep rabbit hole, Corn. I have been reading some papers on this lately, and the short answer is: yes, but maybe not for the reasons you think.
Okay, break that down. Most people assume that if the audio is clear, the AI hears the words better. Is it just about transcription accuracy?
That is the first-order effect. If you have a lot of background noise or a muffled Bluetooth mic, the speech-to-text engine is going to have a higher Word Error Rate, or W E R. If the word not gets transcribed as now, or if a technical term gets turned into gibberish, the LLM is starting with a flawed premise. It is the classic garbage in, garbage out rule. If the text it receives is messy, its reasoning will be based on that mess.
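A tiny worked example of how one flipped word shows up in the Word Error Rate, using the jiwer package; the sentences are invented for illustration.

```python
# One substitution out of seven words: a small WER, but a reversed meaning.
from jiwer import wer

reference  = "the fix is not ready to ship"
hypothesis = "the fix is now ready to ship"   # plausible misrecognition from noisy audio

print(wer(reference, hypothesis))   # 1 substitution / 7 words, roughly 0.14
```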
But Gemini three is multimodal. It is not just transcribing and then reading; it can actually understand audio directly, right?
That is the second-order effect, and that is where it gets really interesting. Modern models like Gemini three use what we call native multimodal architectures. They are not just looking at a text transcript. They are processing the audio tokens directly. When the audio is high quality, the model has a much higher signal-to-noise ratio in its latent space.
Wait, so you are saying the noise in the audio actually creates noise in the AI's internal reasoning?
Think of it like this. When an AI processes an audio signal, it is trying to map those sounds to meanings. If the signal is clear, the probability distribution for what you said is very sharp. The model is ninety-nine percent sure you said post-production. But if there is a baby crying or a lot of Bluetooth compression artifacts, that distribution flattens out. The model might only be sixty percent sure you said post-production and forty percent sure you said something else.
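To put a number on that flattening, here is a back-of-the-envelope entropy calculation over the two scenarios Herman describes; the probabilities are his illustrative figures, not measurements.

```python
# Entropy of the model's guess between two candidate phrases, clean vs noisy.
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

clean = [0.99, 0.01]   # 99% sure it heard "post-production"
noisy = [0.60, 0.40]   # compression artifacts flatten the distribution

print(f"clean: {entropy(clean):.2f} bits, noisy: {entropy(noisy):.2f} bits")
# roughly 0.08 bits of uncertainty versus 0.97 bits, more than ten times as much
```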
So the model is essentially spending some of its cognitive effort or its attention mechanism just trying to decipher the input rather than reasoning about the content?
Exactly! This is something a lot of people miss. In a transformer-based architecture, the attention mechanism is finite. If the model has to use its attention layers to disambiguate noisy input, it has less bandwidth left over to make deep connections between the ideas you are presenting. We have seen this in benchmarks where models perform significantly better on complex reasoning tasks when the input is clean versus when it is noisy, even if the meaning is technically still there in both versions.
That is an incredible insight. It is almost like talking to a person in a loud bar. I can hear what you are saying, but I am so focused on filtering out the music and the other voices that I am not really processing the nuance of your argument as well as I would in a quiet room.
That is a perfect analogy. And there is an even deeper level to this. High-quality audio preserves what we call paralinguistic information. This is the tone, the emphasis, the tiny pauses, and the inflection in your voice.
And Gemini three actually picks up on that?
It does. When Daniel speaks with a certain emphasis on a word like manic, the model perceives that emphasis. If the audio is heavily compressed or noisy, those subtle frequency shifts that convey emotion and intent get washed out. By providing high-quality audio, Daniel is giving the AI more context. He is giving it the vibes as well as the words. And because these models are trained on vast amounts of human interaction, they use that paralinguistic data to inform their tone and the depth of their response.
So if he sounds more professional and clear, the AI might actually respond in a more professional and clear manner because it is mirroring the quality of the input?
There is definitely a mirroring effect. We see this in text prompts all the time—if you write a sloppy prompt with typos, the AI tends to give a sloppier response. The same applies to audio. A high-fidelity, well-produced audio prompt signals to the model that this is a high-stakes, high-quality interaction. It sets the context window to a higher standard.
That is fascinating. So by doing a bit of EQ and noise removal, Daniel isn't just making it easier for us to listen to; he is actually prompt engineering the AI's internal state.
Precisely. Now, there is one caveat here. You can actually over-process audio for an AI.
Oh, really? How so?
If you use really aggressive AI noise removal—like some of the early versions of Adobe Podcast Enhance—it can sometimes create these glassy artifacts. It makes the voice sound almost synthetic. To a human, it sounds clean, but to an AI model, those artifacts are weird, non-human frequency patterns that were never in its training data.
So you might actually be introducing a new kind of noise that confuses the model even more.
Exactly. That is why Daniel's instinct for light post-production is exactly right. You want to remove the distractions—the hum, the long silences, the piercing s sounds—but you want to keep the fundamental character of the voice intact. You want the signal to be pure, not reconstructed.
So let us get practical for him. If you were Daniel, and you just finished recording a prompt on your phone while Ezra is finally napping, what is the step-by-step to get this to the ideal state for the AI?
Okay, here is the Poppleberry-approved workflow. Step one: Keep using ASR. It is a great recorder. Record in a lossless format if you can, like W A V, or F L A C if you want smaller files without throwing any data away. Avoid M P three at the recording stage if you have the storage space.
Why W A V specifically?
Because every time you compress to M P three, you are throwing away data. You want the AI to have every single bit of information possible. You can always compress it later for the final podcast, but for the input, you want it raw.
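A quick way to confirm the recorder really produced lossless audio, sketched with the Python soundfile package; the filename is a placeholder, and soundfile reads W A V and F L A C headers but not M P three.

```python
# Sanity-check what the recorder actually wrote: container, encoding, rate, channels.
import soundfile as sf

info = sf.info("prompt_raw.wav")
print(info.format, info.subtype, info.samplerate, info.channels)
# e.g. "WAV PCM_16 48000 1" for uncompressed 16-bit mono at 48 kHz
```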
Got it. Step two?
Step two: Use a cloud-based processor for the heavy lifting. I would honestly point him toward Adobe Podcast Enhance but with a warning. They recently released a version two that gives you a strength slider. If you set it to one hundred percent, it sounds like a robot. But if you set it to about forty or fifty percent, it does this amazing job of removing room echo and background noise while keeping the voice natural.
And what about the silence removal?
That is step three. For that, I would actually use Auphonic. You can set up an Auphonic production where it automatically does the leveling, the AutoEQ, and the silence truncation. The great thing about Auphonic is that you can set a silence threshold. I would tell Daniel to set it to something conservative—maybe cut any silence longer than two seconds down to one second. That keeps the natural flow but removes the dead air where he is thinking or Ezra is being particularly cute in a non-verbal way.
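Auphonic does this server-side, but the same conservative truncation can be previewed locally with pydub; the two-second and one-second figures mirror Herman's suggestion, and the minus forty dBFS threshold is an assumption.

```python
# Shorten any pause longer than 2 s down to 1 s; leave everything else alone.
from pydub import AudioSegment
from pydub.silence import detect_silence

audio = AudioSegment.from_file("prompt_raw.wav")
pauses = detect_silence(audio, min_silence_len=2000, silence_thresh=-40)  # [start_ms, end_ms]

out = AudioSegment.empty()
cursor = 0
for start, end in pauses:
    out += audio[cursor:start] + audio[start:start + 1000]   # keep 1 s of each long pause
    cursor = end
out += audio[cursor:]
out.export("prompt_trimmed.wav", format="wav")
```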
And that should result in a file that is clean, professional, and carries all that paralinguistic data we talked about.
Exactly. It is the perfect balance. It makes the transcription nearly one hundred percent accurate, and it allows the multimodal layers of Gemini three to really feel the intent behind the words.
I love that we are at a point where feeling the intent is a technical term we can use for a computer program. It really shows how far we have come from the days of simple command-line interfaces.
It is a brave new world, Corn. And honestly, I think Daniel is ahead of the curve here. Most people are still just typing into a box. But the future of AI interaction is going to be voice-first, and understanding the acoustics of prompting is going to be a real skill.
The acoustics of prompting. We should trademark that.
I will add it to the list right after Herman's High-Fidelity Hum Removal.
You know, thinking about this from the listener's perspective, there is also a psychological element. When we listen to Daniel's prompts, if they sound clear and well-produced, we are more likely to take the ideas seriously. It creates a sense of authority. I wonder if the AI is essentially trained on that same human bias.
Oh, almost certainly. Think about the data these models are trained on. They are trained on professional podcasts, high-quality audiobooks, and well-produced YouTube videos. In those contexts, high audio quality is correlated with high-quality information. The model has likely learned that clean audio equals credible source. It is a heuristic that humans use, and since the AI is a mirror of human data, it uses it too.
That is a bit meta. The AI isn't just hearing the words better; it is trusting the source more because the source sounds like a professional.
It sounds wild, but that is how these latent associations work. If you sound like a professional broadcaster, the model's internal weights might shift toward its professional broadcaster training data, which tends to be more structured and articulate. You are literally nudging the AI into a more sophisticated persona by the way you sound.
This is why I love this show. We start with how do I remove silence on my phone and we end up at the AI's perception of human authority based on frequency response.
That is the My Weird Prompts guarantee! But to bring it back down to earth for Daniel, the tools are there. Android has the apps, the cloud has the power, and the AI has the ears—or the tensors—to appreciate the effort.
Well, I think Ezra would appreciate a dad who sounds like a radio star, too. So, Daniel, there is your homework. Get that Auphonic account set up, play with the Adobe Enhance slider, and let us see if the responses we get from the next prompt are fifty percent more thoughtful.
I am genuinely curious to see if we can tell the difference in the script output. We should do a blind test sometime. One raw prompt, one produced prompt, and see which one Gemini three likes better.
That would be a great experiment for a future episode. The Battle of the Bitrates.
I am in. But for now, I think we have given Daniel enough to chew on. And hopefully, for all the other creators out there recording on their phones, this gives you a bit of a roadmap. You do not need a ten-thousand-dollar studio to get high-quality AI interactions. You just need a bit of smart processing.
Absolutely. And hey, if you are listening to this and you have found some other weird or wonderful ways to use audio with AI, we want to hear about it. Go to myweirdprompts.com and use the contact form, or find us on social media. We are always looking for the next deep dive.
And if you have a second, leave us a review on Spotify or Apple Podcasts. It really does help the show grow and helps other people find these weird conversations.
Yeah, a quick rating makes a huge difference. We really appreciate all of you who have been with us for the last five hundred plus episodes. It has been a wild ride.
Five hundred eighty-seven episodes, Corn. Not that I am counting.
Of course you are counting, Herman Poppleberry. Of course you are.
Guilty as charged. Alright, I think that is a wrap for today.
Thanks for listening to My Weird Prompts. You can find all our past episodes and the RSS feed at myweirdprompts.com.
Stay curious, keep prompting, and maybe give your audio a little EQ love this week.
See you next time.
Goodbye everyone!