Welcome to My Weird Prompts. I am Corn, and as always, I am here with my brother, Herman Poppleberry. We are coming to you from our home in Jerusalem, and today we have a really interesting topic to dive into. Our housemate Daniel sent us a voice memo earlier today. He has been thinking about how we use artificial intelligence to understand ourselves better. Specifically, how these tools can help us label and explore our own personal philosophies.
It is a fascinating prompt, Corn. And I think it is particularly relevant as we wrap up twenty twenty-five. We have spent the last few years talking about what AI can do for our productivity or our coding, but we are finally seeing a shift toward how it can help us with our internal lives. By the way, for the new listeners, I am the one with the four legs and the penchant for hay. I am a donkey, if the voice did not give it away.
And I am a sloth, so if I sound like I am taking my time, it is because I am. But Herman, let us get into this. Daniel was talking about those moments where you have a specific feeling or a perspective on something, like how big tech companies handle our data, but you do not necessarily have the academic language to describe it. You know you feel a certain way, but you do not know if there is a name for that school of thought.
Exactly. It is the gap between intuition and vocabulary. Traditional search engines are terrible at this because they rely on keywords. If you do not know the word for what you believe, you cannot search for it. But large language models are built on semantic understanding. They do not just look for words; they look for meaning and context.
I do not know, Herman. It feels a little weird to have a machine tell me what I believe. Like, if I describe my view on privacy and the AI says, oh, you are a digital localist, does that actually help? Or am I just letting a computer put me in a box?
Well, hold on, that is not quite right. It is not about the AI deciding who you are. It is about the AI acting as a mirror. Think of it as a high speed librarian who has read every philosophy book ever written. When you describe your perspective, the AI can say, your ideas share a lot of common ground with this specific movement or this historical thinker. It gives you a starting point for further research. It is not a box; it is a map.
I guess. But maps can be wrong. What if the AI is biased? We know these models have their own leanings based on the data they were trained on. If I am trying to explore my philosophy, and the AI keeps nudging me toward a specific ideological corner because of its training, that seems dangerous.
Mmm, I am not so sure about that being a deal breaker. Of course, there is bias, but that is why Daniel's point about curated resources is so important. It is not just about the label. It is about what comes after the label. If the AI can provide a reading list that includes both the people who agree with that view and the people who absolutely tear it apart, then you are getting a balanced education. You are not just being told what you are; you are being invited into a conversation.
Okay, so let us look at what is actually out there right now. Are there tools doing this in twenty twenty-five? I know you are always reading about the latest releases while I am napping.
There are actually a few really cool ones. There is a tool called Edubrain dot a i that has a Philosophical AI Helper. It is designed specifically to help people master big ideas. It does not just answer questions; it explains the underlying assumptions in your own arguments. It helps reveal perspectives you might not have realized you held. Then there is Taskade, which has an AI Philosopher Persona Generator. You can essentially debate a digital version of Socrates or Nietzsche to see how your ideas hold up.
See, that sounds exhausting. Debating Socrates? I just want to know why I feel annoyed when I have to sign a fifty page terms of service agreement. I do not need a lecture on the social contract.
But that is exactly the point! Your annoyance comes from a philosophical place. You might be a proponent of radical transparency or a defender of individual digital sovereignty. Knowing those terms allows you to find other people who feel the same way. It allows you to find books and articles that articulate your frustration better than you can.
I suppose that is fair. It is like finding your tribe, even if your tribe is a bunch of dead philosophers. But what about the other part of Daniel's prompt? The curated lists. That seems like the most useful part to me. If I can get a list of videos or books that represent both sides, that feels like a real tool for growth.
It is. There are apps like the AI Reading List Generator and the AI Recommended Reading Generator that are getting quite sophisticated. They use algorithms to analyze your interests and then intentionally inject diversity into the results. They are programmed to include authors from different time periods and opposing viewpoints. It is like a built in antidote to the echo chamber.
But wait, how do they decide what is an opposing viewpoint? If I say I believe in X, and the AI gives me a book that says Y, is Y really the best counterargument? Or is it just a random different opinion? I feel like choosing the right counterpoint requires a lot of nuance that an AI might miss.
You are skipping over something important there, Corn. The AI is actually better at identifying the structural opposite of an argument than most humans are. Humans tend to pick weak counterarguments, what we call straw men, to make our own side look better. An LLM can look at the logical pillars of your belief and find the thinkers who specifically challenge those exact pillars. It is much more rigorous.
I do not know, Herman. That seems like a stretch. I have seen AI get confused by simple logic puzzles. You are telling me it can navigate the complexities of Hegelian dialectics and find me the perfect intellectual rival?
In twenty twenty-five? Yes, absolutely. The retrieval augmented generation, or RAG, that these systems use now allows them to pull from verified academic databases. They are not just guessing anymore. They are citing real sources.
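For readers of the transcript who want to picture the retrieval step Herman is gesturing at, here is a toy sketch. It uses scikit-learn's TF-IDF similarity over a tiny hand-made passage list; a real system would use embedding models and an actual academic database, and every name and passage below is illustrative only.

```python
# Toy sketch of the retrieval step in retrieval augmented generation (RAG):
# find the passages most similar to a question, then hand only those
# passages (with their citations) to the model to ground its answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for a verified academic database (entries are illustrative).
corpus = [
    ("Mill, On Liberty, ch. 1", "The only freedom which deserves the name is that of pursuing our own good..."),
    ("Kant, Groundwork", "Act only according to that maxim whereby you can will it become a universal law..."),
    ("Rawls, A Theory of Justice", "Justice is the first virtue of social institutions..."),
]

def retrieve(question, corpus, k=2):
    """Return the k passages most similar to the question, with citations."""
    texts = [text for _, text in corpus]
    matrix = TfidfVectorizer().fit_transform(texts + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()
    ranked = sorted(zip(scores, corpus), key=lambda pair: -pair[0])
    return [(citation, text) for _, (citation, text) in ranked[:k]]

for citation, text in retrieve("What justifies limiting individual liberty?", corpus):
    print(citation, "->", text[:60])
```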
Hmm. Well, before we get too deep into the technical side of RAG and databases, we should probably take a quick break. I think I hear our friend Larry waiting in the wings.
Oh boy. What is he selling this time?
Let us take a quick break for our sponsors.
Larry: Are you feeling untethered in an infinite universe? Do you wake up wondering if you are a butterfly dreaming you are a human, or perhaps a human dreaming you are a slightly more expensive human? Introducing the Existential Umbrella! Most umbrellas only protect you from rain, but the Existential Umbrella features a patented inner lining of pure, concentrated certainty. Stand beneath its sturdy canopy and feel the weight of cosmic indifference simply slide off the waterproof fabric. Available in three colors: Nihilist Navy, Stoic Silver, and Absurdist Apricot. Do not let the crushing void dampen your commute! The Existential Umbrella does not come with a handle because, much like life, there is nothing to hold onto. BUY NOW!
Alright, thanks Larry. I am not sure how a handleless umbrella works, but I am sure someone will buy it. Anyway, back to the topic. Herman, you were talking about how these AI tools can actually help us find opposing viewpoints.
Right. And I think we need to talk about how someone would actually implement this if they wanted to build their own version. Daniel asked how one might go about this. If you are not satisfied with the current tools, how do you make an AI that helps you explore your philosophy?
I imagine it starts with a really good prompt. Like, you do not just ask, what do I believe? You have to give it something to work with.
Exactly. The first step is what I call the brain dump. You tell the AI everything you feel about a specific topic, no matter how messy or contradictory it sounds. Then, you give it a specific persona. You tell the AI, act as a neutral philosophical taxonomist. Your job is to analyze my statements, identify the core principles, and name the schools of thought they align with.
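For anyone following along at home, a minimal sketch of that first step might look like the following, assuming the OpenAI Python client. The model name, the brain-dump text, and the persona wording are all illustrative choices, not anything prescribed on the show.

```python
# Step one: the brain dump plus a "taxonomist" persona.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

brain_dump = (
    "I feel uneasy when companies collect my data, but I also like "
    "personalized services. I think privacy matters more than convenience, "
    "except maybe for medical research. I know this sounds contradictory."
)

taxonomist = (
    "Act as a neutral philosophical taxonomist. Analyze my statements, "
    "identify the core principles, and name the schools of thought they "
    "align with. Do not tell me what to believe; just map the terrain."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable chat model works
    messages=[
        {"role": "system", "content": taxonomist},
        {"role": "user", "content": brain_dump},
    ],
)
print(response.choices[0].message.content)
```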
Okay, but then how do you get it to give you the reading list without it just being a list of the most popular books on Amazon?
That is where the multi step process comes in. You have to tell the AI to perform a categorized search. You ask for three categories. One, the founding texts of this perspective. Two, contemporary expansions of this perspective. And three, the most influential critiques of this perspective. By forcing the AI to categorize the results, you ensure that it does not just give you a one sided list.
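Continuing the sketch above, the categorized request can be a single follow-up message appended to the same conversation. The wording here is just one way to phrase it.

```python
# Step two: force a categorized reading list so the result cannot
# collapse into a one-sided list of bestsellers.
follow_up = (
    "For the perspective you just identified, give me a reading list in "
    "exactly three categories: (1) founding texts of this perspective, "
    "(2) contemporary expansions of it, and (3) the most influential "
    "critiques of it. For each entry give the author, the title, and one "
    "sentence on why it belongs in that category. If you are not sure a "
    "work is real, say so instead of inventing it."
)
# Send this as the next user message in the same conversation.
```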
That sounds like a lot of work for the user. Is there a way to automate that?
There is. You could build a custom GPT or use an API to create a workflow. You could even set up a system where the AI takes on two different personas. One persona is your advocate, who finds everything that supports you, and the other is your devil's advocate, who finds everything that contradicts you. They could even have a little debate in the chat window while you watch.
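Here is a rough sketch of the two-persona workflow Herman describes, again assuming the OpenAI Python client. The persona wording, turn count, and model name are arbitrary illustrative choices.

```python
# Two personas, one position: an advocate and a devil's advocate
# take alternating turns while you watch.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

position = "Individual privacy matters more than national security."

personas = {
    "Advocate": "You defend the user's position with the strongest "
                "arguments and sources you can find. Two sentences max.",
    "Devil's Advocate": "You challenge the user's position with its "
                        "strongest critiques. Two sentences max.",
}

transcript = f"The position under debate: {position}"
for turn in range(4):  # arbitrary: two exchanges per persona
    name = "Advocate" if turn % 2 == 0 else "Devil's Advocate"
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system", "content": personas[name]},
            {"role": "user", "content": transcript},
        ],
    ).choices[0].message.content
    transcript += f"\n{name}: {reply}"  # each turn sees the debate so far
    print(f"{name}: {reply}\n")
```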
Now that I would actually enjoy. Watching two AIs argue while I sit back with a snack. That sounds much more my speed. But Herman, let us be real for a second. Most people are not going to build their own API workflows. For the average person, is this really better than just going to a library and talking to a librarian?
Well, hold on, that is not quite right either. Librarians are amazing, but they are human. They have their own biases, their own limited time, and they might not have read every obscure paper from twenty twenty-four or twenty twenty-five. The AI is available twenty four seven, it has no ego, and it can process thousands of pages of text in seconds to find that one specific connection you are looking for. It is a supplement to human expertise, not a replacement.
I guess I worry that we are losing the struggle. There is something important about the struggle of trying to figure out what you believe. If an AI just hands you a label and a reading list, do you actually learn anything? Or are you just consuming a pre packaged identity?
That is a deep question, Corn. And I think I actually see it differently. I think the AI removes the boring part of the struggle, the part where you are just lost in a sea of jargon, and lets you get to the real struggle faster. The real struggle is not finding the book; it is reading the book and deciding if you agree with it. The AI gets you to the starting line. It does not run the race for you.
I do not know, Herman. I think for normal people, the starting line is often the hardest part. If the AI makes it too easy, maybe we do not value the destination as much. But I see your point. It is a powerful tool if used correctly.
It is all about the intention. If you use it to shut down thought, it is a problem. If you use it to open up new avenues of thought, it is a miracle. Think about someone living in a small town with no access to a university library. For them, an AI that can explain the nuances of deontology versus consequentialism is a game changer.
That is a good point. Accessibility is huge. And I suppose in Jerusalem, we are lucky to have so many resources, but not everyone does. So, if someone wants to start doing this today, what are the practical takeaways? What should they actually do when they open up their AI of choice?
First, be specific. Do not just say, tell me about philosophy. Say, I feel that individual privacy is more important than national security, but I also think companies should be able to use data to improve medicine. What are the tensions in my perspective? Second, ask for names. Ask the AI, what are the technical terms for these ideas? Third, demand sources. Do not let it just summarize; ask for the titles of books, the names of authors, and even specific chapters if possible.
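Put together, a first prompt following those three steps might read something like this; the wording is only an example.

```
I feel that individual privacy is more important than national security,
but I also think companies should be able to use data to improve medicine.

1. What are the tensions in my perspective?
2. What are the technical terms and schools of thought for these ideas?
3. For each, name specific books, authors, and if possible chapters I
   could read, including the strongest critiques. Cite real works only.
```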
And I would add, do not take the first answer as gospel. If the AI gives you a label, ask it, why did you choose that label? What parts of what I said do not fit that label? Push back on it. Make it justify itself.
Exactly! Treat it like a very smart, very well read, but occasionally overconfident intern. You are the boss. You are the one who decides what fits and what does not.
I like that. I can handle being the boss. Even a very slow, very sleepy boss.
You are the best at it, Corn. But really, this idea of using AI for ideological exploration is just beginning. By this time next year, in December twenty twenty-six, I bet we will see even more specialized tools. Imagine an AI that can analyze your entire social media history or your personal journals and tell you how your philosophy has evolved over the last decade.
Okay, now that sounds terrifying. I do not want to know what my philosophy was ten years ago. I was probably a radical proponent of napping twice a day instead of three times. I have matured since then.
See? Evolution! But in all seriousness, the potential for self reflection is massive. We are moving from AI as a tool for the world to AI as a tool for the self.
It is a lot to think about. I want to thank Daniel for sending in this prompt. It really pushed us to look at AI from a different angle. It is not just about the code or the math; it is about the meaning.
Absolutely. And if you are listening and you have your own weird prompts, we want to hear them. This is a collaboration after all. We provide the brotherly bickering, and you provide the ideas.
You can find us on Spotify, or you can go to our website at my weird prompts dot com. We have an RSS feed there for subscribers, and there is a contact form if you want to get in touch with us. We are also on all the major podcast platforms.
And remember, if you use an AI to find your philosophy, make sure it is a philosophy that allows for occasional hay breaks. It is important for the soul.
And for the naps. Do not forget the naps.
Never.
Thanks for listening to My Weird Prompts. I am Corn.
And I am Herman Poppleberry.
We will talk to you next time. Goodbye!
Goodbye!