Daniel sent us this one — and it's a follow-up, a kind of gentle pushback on something we kicked around in an earlier conversation. He's pointing out that different providers have really rigid loyalties to the modalities they practice, which means if you ask your average therapist what might work for you, you're getting an answer filtered through a pretty strong bias. His idea is that detaching the matching process from the treatment provider could be genuinely worthwhile. And he's asking what tools exist, or could exist, for making that process less arduous — whether it's a human therapy Sherpa, some kind of AI-driven pointer mechanism, or even just a faster trial-and-error cycle with short-burst sessions across different modalities.
This gets at something I've been chewing on since I left practice. The matching problem really is three-dimensional — you've got the patient, the therapy, and the therapist. And that third dimension, the therapist, is the wildcard. The same modality delivered by two different people can feel completely different. But Daniel's right that the starting point is often arbitrary. Someone calls a clinic, gets assigned to whoever has availability, and that person does cognitive behavioral therapy because that's what they do. The patient doesn't know there are other options.
Right, and that's the thing — the patient doesn't know what they don't know. They might spend six months in psychoanalysis when something more structured and time-limited would have served them better. Or they do eight weeks of CBT and feel like it didn't address the deeper stuff, and they walk away thinking talk therapy doesn't work. Meanwhile, there's probably a modality that would have clicked.
By the way, DeepSeek V four Pro is writing our script today. So if anything sounds unusually sharp, that's why.
I'll adjust my expectations accordingly.
Let's get concrete. Daniel's asking whether anything like this exists — a therapy navigation tool that's useful, not just a symptom checker that spits out a generic recommendation. And I've been tracking this space. There are a few things worth naming. One is a platform called Lyssn, which came out of research at the University of Washington. It uses natural language processing to analyze therapy session transcripts and provide fidelity ratings — basically, it checks whether the therapist is actually delivering the modality they claim to be delivering. It's not a matching tool for patients directly, but it's the kind of quality assurance layer that any serious matching system would need underneath it.
It's more of an auditing tool than a navigation tool. It tells you whether the therapist is doing what they say they're doing, but it doesn't help you figure out what you need in the first place.
And that audit function turns out to be really important, because one of the dirty secrets of the field is that a lot of therapists say they do CBT or ACT or whatever, but when you actually look at session recordings, they're doing something much more eclectic. Sometimes that eclecticism is skillful. Often it's just drift. So any matching tool has to contend with the fact that the label on the tin doesn't always match what's inside.
Which makes Daniel's trial-and-error idea more interesting. If you can't trust the labels anyway, maybe the smarter approach is to let people sample different approaches quickly and cheaply, and use their own experience as the filter.
That's where the online-delivered therapy piece comes in. There's a platform in the UK called Ieso, which delivers text-based CBT through typed conversations with licensed therapists. Because everything is typed, every session produces a complete transcript that can be analyzed. They've published research showing they can predict treatment outcomes from session data with fairly impressive accuracy. The relevant bit for Daniel's question is that they've also started looking at whether linguistic markers in early sessions can indicate whether a patient would respond better to a different approach. It's not a full matching algorithm yet, but the pieces are being assembled.
I want to dig into the practical question, though. Daniel's describing something pretty specific — he wants a system where you have a short interview, maybe five minutes, and it says: based on your thought patterns, your presenting problem, and your history, you're best suited to ACT. Here are providers in your area. Is anyone actually building that, or is this still in the research phase?
There are a few attempts. The one that's gotten the most attention is a team led by Adam Chekroud, a researcher then at Yale who co-founded a company called Spring Health. They use a machine learning model trained on a large dataset of patient outcomes to match people to specific therapists and modalities. The model takes in a bunch of intake data — symptom measures, demographics, treatment history — and generates a recommendation. They've published results showing that patients who followed the model's recommendation had significantly better outcomes than those who didn't. We're talking about a thirty to forty percent improvement in recovery rates.
That's substantial. What's the catch?
The catch is that it's an enterprise product. Spring Health sells to employers and health plans, not directly to consumers. So unless your employer has signed up, you can't just go to a website and use it. And the matching is partly to a therapist and partly to a care pathway — it might recommend in-person therapy, or coaching, or digital self-help. The modality-specific matching is there, but it's embedded in a broader system.
It exists, but it's behind a corporate gate. That's frustrating. What about something more directly accessible?
There's a site called What Therapy, based in the UK, which is probably the closest thing to what Daniel's describing as a therapy Sherpa. It's a consumer-facing tool where you answer a series of questions about what you're struggling with, what your goals are, and what kind of approach appeals to you — practical and structured versus exploratory and open-ended, that kind of thing. Then it gives you a personalized report recommending specific modalities, with explanations of why each one might fit. It doesn't match you to a specific provider, but it does the educational piece. It tells you what's out there and why you might consider one approach over another.
That's useful, but it's still self-report. The patient is describing themselves, and we know self-report has limits — people don't always have accurate insight into their own cognitive patterns. Daniel's more ambitious vision involves the system actually assessing thought patterns through some kind of interview or interaction.
That's the frontier. There's a research group at the University of Pennsylvania that's been working on something called the Personalized Psychotherapy Selection study. They use a battery of assessments — not just symptom questionnaires, but cognitive tasks, personality measures, and even neuroimaging in some cases — to build a profile and then match to treatment. The early results suggest you can predict differential response to CBT versus interpersonal therapy with about seventy percent accuracy, which is way better than chance. But it's research-grade, not ready for prime time. The assessment takes hours, and the neuroimaging part is obviously not scalable.
The beautiful system Daniel's imagining — the five-minute interview that nails your modality — is probably a ways off. But I'm not sure that means we can't build something useful right now. Let's talk about the less elegant implementation he mentioned. The trial-and-error speed-dating model.
I love this idea, and I think it's actually more achievable than the algorithmic matching approach, at least in the near term. The concept is: instead of committing to eight or twelve or twenty sessions with one therapist doing one modality, you do a series of short engagements — maybe two or three sessions each — with different providers who specialize in different approaches. You get a taste of what the work actually feels like. Then you make an informed choice.
The open day analogy works. When you're picking a university, you visit campuses, you sit in on lectures, you talk to current students. Nobody expects you to commit four years of your life based on a brochure. But with therapy, we expect people to commit thousands of dollars and months of emotional labor based on basically no direct experience of what they're signing up for.
The financial piece matters here. Daniel mentioned it, and he's right. Even in systems with decent coverage, there are gaps. In the US, the average out-of-pocket cost for a therapy session is between a hundred and two hundred dollars. If you're doing weekly sessions, that's four hundred to eight hundred a month. The idea of spending two months and three thousand dollars just to discover that psychoanalysis isn't for you is not trivial.
The speed-dating approach has a clear economic argument. But how would it work in practice? Who's offering this?
Nobody is offering it as a packaged service right now, which is kind of remarkable. But the components exist. There are platforms like Alma and Headway that aggregate therapists and handle the insurance billing. There are directories like Psychology Today where you can filter by modality. What's missing is the structured program — the curated sequence of short engagements with a clear endpoint of making a decision.
The obstacle I see is that therapists might hate it. If you're a CBT practitioner and you know the patient is going to do two sessions and then try someone else, are you going to invest the same energy? Are you going to structure those two sessions differently than you would the opening of a longer engagement?
That's a real concern. The therapeutic alliance takes time to build, and the first couple of sessions often involve a lot of assessment and history-taking. You might not get a representative taste of what the modality actually feels like. So the speed-dating model would require therapists to adapt — to design a compressed, experiential introduction to their approach that gives the patient a meaningful sample in a short time.
Which is actually an interesting design challenge. If you had to give someone a two-session introduction to ACT, what would you include? You'd probably skip the comprehensive history and jump straight into a core exercise — maybe values clarification or a defusion technique. Let them experience the method, not just hear about it.
And I think some therapists would be game for this. There's a growing interest in single-session therapy and brief interventions, partly driven by the research showing that a surprising amount of therapeutic change happens in the first few sessions. The dose-response curve in psychotherapy is not linear — you get a lot of benefit early, and then diminishing returns. So a well-designed two-session experience could actually be therapeutic in its own right, not just diagnostic.
Let's pull on that thread. If the first few sessions produce disproportionate benefit, then the speed-dating model isn't just a matching tool — it's also delivering real value regardless of whether the patient continues. Even if they don't pick that modality, they got something from the exposure.
There's a study by Michael Lambert and colleagues that found about thirty to forty percent of patients show clinically significant improvement within the first three sessions. Now, some of that is probably regression to the mean or natural recovery, but not all of it. So you're right — the sampling process itself is therapeutic. It's not wasted time or money.
Which changes the framing. Instead of "try a bunch of things and hope one sticks," it's "get a series of beneficial micro-interventions while you figure out what fits." That's a much easier sell.
It aligns with something Daniel said that I think is really important — the idea that therapy is work. It's not a passive experience where the therapist does something to you. You have to show up, be honest, do the exercises, sit with discomfort. Different modalities demand different kinds of work. CBT asks you to track thoughts and challenge distortions. ACT asks you to practice acceptance and clarify values. Psychodynamic therapy asks you to free-associate and examine the therapeutic relationship itself. Those are different muscles. Part of matching is figuring out which kind of work you're actually willing to do.
The matching problem isn't just "what will be effective for my condition." It's "what kind of effort am I capable of sustaining, and what kind of effort feels meaningful rather than aversive?" Some people hate filling out thought records. Other people find them clarifying and grounding. The same intervention lands differently depending on the person.
This is where I think AI could actually add something distinctive. Not by replacing the therapist or the human judgment, but by helping patients articulate their own preferences and patterns before they ever walk into a session. Daniel mentioned the idea of a short interview that assesses thought patterns. Even if we can't do that with clinical precision yet, we can do something useful. There are tools now — and I want to be careful about overclaiming here — that use structured conversation to help people reflect on their own mental habits. Things like, do you tend to ruminate? Do you avoid difficult emotions or dive into them? Do you respond well to structure or does it feel constricting?
It's not diagnosing, it's eliciting self-knowledge that the person might not have articulated before. It's a structured reflection tool that produces a clearer picture of what you're bringing to the table.
And that picture is useful whether or not there's an algorithm on the back end. You could take that self-knowledge to any therapist directory and make a smarter choice. The tool doesn't have to say "you need ACT." It can say "here's what you've told us about your patterns and preferences — people with similar profiles have tended to respond well to these approaches, and here's why."
I like that because it sidesteps the black-box problem. You're not saying "the algorithm decided." You're saying "based on what you told us, here's some information that might help you decide." The agency stays with the patient.
That matters ethically. Mental health decisions are high-stakes, and people are rightly suspicious of opaque AI recommendations in this space. Transparency isn't just a nice-to-have — it's essential for trust. If the system says "we're recommending interpersonal therapy because you scored high on attachment anxiety and IPT has a strong evidence base for relationship-focused distress," the patient can evaluate that reasoning. They can say, actually, I don't think attachment is my main issue. And the system can adapt.
Let's sketch what the beautiful system could look like, even if it's not fully buildable today. Daniel asked us to brainstorm, so let's actually do it.
I'd start with a three-part intake. Part one is standardized measures — the PHQ-9 for depression, the GAD-7 for anxiety, maybe the WHODAS for functional impairment. These give you a baseline and allow you to track outcomes later. Part two is a structured interview, probably text-based or voice-based, that explores the person's history, their goals, their preferences for how they like to work. Part three is a brief experiential sampling — maybe short audio or video vignettes that demonstrate what different modalities actually look like in practice, and the person rates their reactions. Do they find the CBT exercise appealing or off-putting? Does the ACT metaphor resonate or feel abstract?
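To make the "part one" piece concrete, here's a minimal sketch of scoring the standardized measures. The severity bands follow the published scoring rules for the PHQ-9 (total 0 to 27) and GAD-7 (total 0 to 21); the function names and the shape of the output are illustrative, not any particular platform's API.

```python
# Part-one intake sketch: score the standardized measures.
# Band cutoffs follow the published PHQ-9 and GAD-7 scoring guides.

PHQ9_BANDS = [(5, "minimal"), (10, "mild"), (15, "moderate"),
              (20, "moderately severe"), (28, "severe")]
GAD7_BANDS = [(5, "minimal"), (10, "mild"), (15, "moderate"), (22, "severe")]

def band(total: int, bands) -> str:
    """Map a total score to its severity band (first cutoff it falls under)."""
    for cutoff, label in bands:
        if total < cutoff:
            return label
    raise ValueError("score out of range")

def score_intake(phq9_items: list[int], gad7_items: list[int]) -> dict:
    """Turn raw item responses (each scored 0-3) into baseline measures."""
    phq9 = sum(phq9_items)
    gad7 = sum(gad7_items)
    return {
        "phq9_total": phq9, "phq9_band": band(phq9, PHQ9_BANDS),
        "gad7_total": gad7, "gad7_band": band(gad7, GAD7_BANDS),
    }

baseline = score_intake([2, 1, 2, 1, 1, 0, 1, 1, 1],  # nine PHQ-9 responses
                        [1, 1, 2, 1, 0, 1, 1])        # seven GAD-7 responses
# PHQ-9 total 10 -> "moderate"; GAD-7 total 7 -> "mild"
```

The point of keeping this layer boring and standardized is exactly what's said above: it gives you a baseline you can re-administer later to track whether the match is actually working.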
The vignette piece is smart. Most people have never seen therapy from the inside, so they're choosing based on descriptions that don't convey what the experience is actually like. Showing instead of telling.
And we have the technology to do this well. You could record actual therapy sessions — with consent, anonymized — and use short clips to illustrate the feel of each approach. A two-minute clip of a cognitive restructuring exercise versus a two-minute clip of an empty-chair Gestalt technique. The difference is visceral in a way that a written description can't capture.
After the intake and the vignettes, the system produces a report. Not a single recommendation, but a ranked list with explanations. "Based on your profile, we think ACT is worth exploring first because of X, Y, and Z. Interpersonal therapy is a strong second option. Here's what each would involve, here's what the evidence says for your presenting concerns, and here are providers in your area who offer these modalities and have availability."
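The "ranked list with visible reasoning" idea can be sketched as a data shape, to show what transparency means structurally: the rationale is a first-class field that gets shown to the patient, not buried in model weights. All names, scores, and rationale text here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    modality: str
    fit_score: float   # 0-1, higher = better fit (illustrative scale)
    rationale: str     # always shown to the patient, never hidden
    would_involve: str # what the work actually demands

def rank(recs: list[Recommendation]) -> list[Recommendation]:
    """Return the full ranked list; the report shows every option, not just the top pick."""
    return sorted(recs, key=lambda r: r.fit_score, reverse=True)

report = rank([
    Recommendation("Interpersonal therapy", 0.71,
                   "Your goals centered on relationships and role transitions.",
                   "Weekly sessions focused on current relationships."),
    Recommendation("ACT", 0.82,
                   "You rated the values-clarification vignette highly and "
                   "described avoiding difficult emotions.",
                   "Experiential exercises; acceptance and values work."),
])
for r in report:
    print(f"{r.modality} ({r.fit_score:.2f}): {r.rationale}")
```

Because the rationale travels with the score, the patient can push back on it — which is the adaptability point made a few lines down.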
Then — this is the part I think is crucial — you don't just get a referral and a handoff. The system follows up. After your first few sessions with the recommended provider, you do a brief check-in. How's it going? Is the work feeling meaningful? Are you experiencing early benefits? If the answer is no, the system can help you pivot — maybe to the second option on your list, or back to the drawing board.
It's not just a matching tool, it's a navigation service that stays with you through the process. That addresses the problem Daniel identified, which is that people often bounce from one provider to another with no continuity and no learning. Each false start is a fresh beginning with no memory of what came before.
This is where the financial argument gets really sharp. If we could reduce the number of false starts, even by a little, the cost savings would be significant. Not just for the patient, but for the system as a whole. Therapy dropout rates are high — some studies put them at twenty to fifty percent, depending on the setting. Many of those dropouts are people who weren't well-matched in the first place. If better matching reduces dropout by even ten percentage points, that's a huge number of people who stay engaged and get better.
Let's talk about the human Sherpa version for a second. Daniel mentioned the idea of someone whose entire job is to provide recommended pathways. Does this role exist anywhere?
Some large group practices have intake coordinators who do a version of this — they talk to new patients, assess their needs, and assign them to a therapist. But those coordinators are usually working within a limited roster of available clinicians, and they're often not deeply knowledgeable about the full range of modalities. They're matching based on availability and broad categories — "you want someone who does anxiety" — not based on a nuanced understanding of what different approaches entail.
It's a logistics role, not a clinical navigation role.
The closest thing to a true therapy Sherpa might be a good primary care physician who knows the mental health landscape in their community. But that's rare, and it's a lot to ask of a PCP who has fifteen minutes per appointment and a hundred other things to manage.
The Sherpa role doesn't really exist in a systematic way. Which means most people are navigating this alone, with Google and word of mouth and whatever their insurance portal shows them. It's not surprising that the process is arduous.
There's a stigma piece here too. People are often reluctant to shop around for therapists the way they'd shop around for a dentist or a mechanic. There's a sense that you should just commit and do the work, and if it's not working, maybe you're not trying hard enough. That's a terrible frame, but it's pervasive.
Daniel touched on this when he said patients are expected to have magically done the research already. The system assumes an informed consumer, but it doesn't provide the information or the structure to become informed. It's a market failure.
What do we actually tell someone who's listening right now and wants to make this process less arduous today, with the tools that exist?
I think the first thing is to treat the initial search as a research project, not a commitment. Before you even book a session, spend some time learning about the modalities. The What Therapy site we mentioned is a good starting point. Read about CBT, ACT, DBT, interpersonal therapy, psychodynamic therapy. See which descriptions resonate. Pay attention to what kind of work each one asks of you — is it structured homework, or open-ended exploration, or skills practice, or relational focus?
Then, when you do book initial consultations — and most therapists offer a free fifteen-minute call — use that time well. Don't just ask about logistics. Ask them to describe what a typical session looks like. Ask what the work will feel like. Ask how they'll know if it's working. Their answers will tell you a lot about whether their approach fits your expectations.
The speed-dating idea is something you can actually implement yourself, even without a formal program. Book two or three sessions with different therapists who practice different modalities. Tell them upfront that you're exploring options and want to get a feel for their approach. Some will be open to this, some won't — and their reaction is itself useful information.
If the cost is a barrier — which it is for many people — look at lower-cost options for the exploration phase. Some training clinics offer reduced-fee sessions with therapists in training. Online platforms sometimes have introductory offers. The goal in this phase isn't deep therapeutic work, it's sampling. You can do that more cheaply than committing to a full course of treatment.
There's also a case for starting with a structured, evidence-based approach like CBT as a default, not because it's the best fit for everyone, but because it's the most widely available and has the strongest evidence base across a range of conditions. If it works, great. If it doesn't, you've learned something — you now know that a more structured, present-focused approach didn't click, and you can look at alternatives with that knowledge in hand.
I'd add a nuance to that. CBT is a big tent. There's traditional Beckian CBT with thought records and behavioral activation. There's more cognitive-focused work. There's more behavioral work. Even within the CBT label, there's variation. So if one CBT experience didn't land, it might be worth trying a different CBT therapist before concluding the modality isn't for you. The therapist matters as much as the therapy.
Which brings us back to the three-dimensional matching problem. Patient, therapy, therapist. Any system that ignores one of those dimensions is going to be incomplete.
That's why I'm cautiously optimistic about AI in this space, but deeply skeptical of anything that promises to replace human judgment entirely. The sweet spot is augmentation — giving patients and clinicians better information, surfacing options that might not have been considered, tracking outcomes so the system learns over time. But the final decision should be human.
Daniel made this point in his prompt — AI has to be used thoughtfully, always as an adjunct to human-led treatment rather than a replacement. I think that's exactly right, and it's also where a lot of the tech hype goes wrong. The pitch isn't "AI will find your perfect therapist." It's "AI can help you ask better questions and see options you didn't know existed."
There's a parallel here to what's happening in other areas of medicine. Genetic testing can now tell you which antidepressants you're more likely to metabolize well — pharmacogenomic testing. It doesn't tell you which medication will work, but it narrows the field and reduces the trial-and-error burden. That's the model. Not a crystal ball, but a filter that makes the search more efficient.
The filter doesn't have to be perfect to be valuable. If it improves the odds of a good match from, say, fifty-fifty to seventy-thirty, that's a meaningful improvement that translates into real human benefit — less suffering, less money wasted, less time spent in the wrong room with the wrong approach.
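A back-of-envelope version of that arithmetic: if each attempt at a modality is modeled as an independent trial with success probability p, the expected number of false starts before a good match is 1/p minus 1. The session count and price are taken from the cost range mentioned earlier; the independence assumption is, of course, a simplification.

```python
def expected_false_starts(p_match: float) -> float:
    """Expected failed attempts before a good match, modeling each
    attempt as an independent trial with success probability p_match."""
    return 1.0 / p_match - 1.0

SESSIONS_PER_TRIAL = 8      # a short course before concluding "not for me"
COST_PER_SESSION = 150.0    # midpoint of the $100-$200 range above

for p in (0.5, 0.7):
    wasted = expected_false_starts(p) * SESSIONS_PER_TRIAL * COST_PER_SESSION
    print(f"p(match) = {p:.0%}: expected wasted cost ~ ${wasted:,.0f}")
# 50% -> $1,200 expected in false starts; 70% -> roughly $514
```

Going from fifty-fifty to seventy-thirty cuts the expected wasted spend by more than half in this toy model — which is the "imperfect filter is still valuable" point in numbers.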
I want to circle back to something Daniel said about the financial impact. He mentioned that even in the best healthcare systems, there are glaring gaps in coverage. And he's right. In the UK, the NHS offers CBT through the Improving Access to Psychological Therapies program — since renamed NHS Talking Therapies — which is great, but the waiting lists can be months long, and the range of modalities is limited. In the US, even with good insurance, mental health coverage often has higher copays, stricter session limits, and narrower networks than physical health coverage. Parity laws exist on paper, but enforcement is spotty.
The matching problem is compounded by the access problem. Even if you figure out what you need, you might not be able to get it. That makes the trial-and-error approach even more punishing — every wrong turn costs money and time you might not have.
Which is why I think Daniel's instinct about online-delivered therapy having a role here is sound. Text-based therapy, video therapy, even guided self-help programs — these can be lower-cost and more accessible than traditional in-person sessions. If you're sampling modalities, doing it online reduces the friction. You can try ACT-informed guided self-help for a fraction of the cost of in-person ACT, and if it resonates, then you can seek out a full-fidelity in-person provider.
The stepped-care model. Start with the least intensive, least expensive option that might work, and escalate only if needed. It makes sense for physical health, and it makes sense for mental health too. But it requires a system that's designed to support those transitions, and right now, we mostly don't have that.
Alright, let's try to synthesize. Daniel asked what tools are available today, and what we'd build if we could. On the available side: What Therapy for modality education, Lyssn for quality assurance behind the scenes, Spring Health if you're lucky enough to have it through your employer, and the old-fashioned method of doing your own research and booking consultations with clear intentions. It's not elegant, but it's better than going in blind.
On the build side: a three-part intake with measures, structured interview, and experiential vignettes. A transparent recommendation engine that explains its reasoning. Integration with provider directories and availability data. Follow-up check-ins to track fit and facilitate pivots. And ideally, a network of therapists who are trained to offer compressed introductory experiences — two or three sessions designed to give a genuine taste of the modality.
I'd add one more piece. Any system like this should contribute to a learning dataset — anonymized, consented, carefully governed — so that over time, the matching gets better. We need to know not just what the evidence says about modalities in general, but what actually works for which kinds of people in real-world settings. That data barely exists right now. Every mismatched patient who drops out is a data point that's lost.
That's the long game. The beautiful system Daniel's imagining isn't just a tool — it's infrastructure. It requires coordination across providers, payers, platforms, and researchers. It requires standards for data sharing and outcome measurement that don't currently exist. It's a heavy lift.
The pieces are there. The NLP tools exist. The outcome prediction models are getting better. The consumer demand is clearly there — people are desperate for guidance and tired of fumbling in the dark. And the economic case is strong, because better matching means better outcomes at lower total cost. Somebody's going to build this. The question is whether it gets built thoughtfully, with the patient's agency at the center, or whether it gets built as a black-box optimization engine that treats mental health like ad targeting.
That's the tension. The tech can serve empowerment or it can serve extraction. Daniel's framing — AI as adjunct, not replacement — points toward the empowerment version. But the incentives in healthcare tech don't always align with that.
No, they don't. And that's why conversations like this matter. The people building these systems need to hear from patients and clinicians about what actually helps and what doesn't. Daniel's vision of a therapy Sherpa — someone whose entire job is wayfinding, not treatment — is a human-centered design principle, whether the Sherpa ends up being a person, a tool, or a combination of both.
I think the Sherpa metaphor is worth sitting with. A Sherpa doesn't climb the mountain for you. They know the terrain, they've guided others up similar paths, they can tell you which routes are too dangerous and which ones might suit your abilities. But you're still doing the climbing. The work is still yours.
That's therapy, isn't it? The therapist can guide, challenge, reflect, teach skills — but the patient is the one doing the work. The Sherpa just makes sure you're on a mountain worth climbing, with a route that's appropriate for where you are.
Daniel, if you're listening — I think the short answer is that the beautiful system doesn't fully exist yet, but the components are being assembled, and there are things you can do today that are better than picking a name off a list. The longer answer is that building this well is going to require holding the tension between algorithmic efficiency and human agency, and that's not a tension that resolves easily.
If anyone listening is working on this — building a matching tool, designing a navigation service, training therapists to offer introductory sampling sessions — we'd love to hear about it. The field needs more people thinking about this problem from the patient's perspective.
Now: Hilbert's daily fun fact.
Hilbert: The mantis shrimp has twelve types of photoreceptor cells in its eyes. Humans have three. It can see ultraviolet light, infrared light, and polarized light in ways we cannot even begin to imagine. Also, it punches its prey with the acceleration of a twenty-two caliber bullet, creating a cavitation bubble that briefly reaches the temperature of the surface of the sun.
That's... a lot of shrimp.
I'm going to be thinking about that for the rest of the day.
Here's what I'm left with. The therapy matching problem is really a knowledge problem — patients don't know what's available, and the system doesn't know enough about patients to guide them well. Closing that gap is doable. It doesn't require magic. It requires better intake tools, better education, and a willingness to treat the search process as part of the care, not an obstacle to it. That feels like a solvable problem, and that's encouraging.
And Daniel's instinct to detach matching from treatment provision is, I think, the key insight. When the person recommending the approach is the same person who'll be paid to deliver it, there's an inherent conflict. Separating those functions — whether through a human Sherpa or a well-designed tool — is the structural change that makes everything else possible.
Thanks to our producer Hilbert Flumingtop for keeping this show running, and to Daniel for the prompt that got us here. This has been My Weird Prompts. You can find every episode at myweirdprompts dot com. We'll be back soon.
Take care, everyone.