Daniel sent us this prompt about something that doesn't get nearly enough attention — what happens after gallbladder surgery when things don't go back to normal. He's been through it himself, and he's asking about something bigger than just his own experience. He wants to talk about how people crowd-source their way toward answers when the medical system leaves them hanging. Platforms, communities, the whole phenomenon of patients connecting across continents to figure out what their own doctors couldn't.
This is one of those topics where the numbers tell a story most people never hear. Gallbladder removal, cholecystectomy, is one of the most common surgeries in the developed world: something like seven hundred thousand procedures a year in the United States alone. The standard line patients get is, you'll be fine, you don't need your gallbladder, your body will adapt. And for most people, that's true. But for post-cholecystectomy syndrome — that's the clinical term — prevalence estimates in the literature range from about five to forty percent, depending on how you define it and how long you follow patients. Even at the low end, five percent of seven hundred thousand is thirty-five thousand people a year in the US alone walking around with persistent symptoms.
Five to forty percent is quite a range.
It's enormous, and that range itself is telling. The lower estimates tend to come from surgical follow-up studies where they ask about pain at the incision site and whether you can eat a normal meal. The higher estimates come from gastroenterology clinics where people show up months or years later with bloating, diarrhea, urgency, fat intolerance. Different populations, different questions, wildly different answers. But even if you split the difference, you're talking about a lot of people.
The first thing most of them hear is some version of give it time.
Which isn't wrong exactly — the body does adapt, the bile duct dilates to take over some of the gallbladder's storage function, the gut microbiome shifts. But the timeline doctors quote is usually weeks to months, and for the people Daniel's talking about, it's been years. At that point, give it time stops being medical advice and starts being a way to end the conversation.
The medical equivalent of the check engine light coming on and the mechanic saying, just drive it for a while, see what happens.
Here's where the crowd-sourcing part gets interesting. Because what Daniel described — going to Facebook groups, finding communities of people with the exact same cluster of symptoms — that's not just emotional support. It's a distributed diagnostic process. People compare notes on what they've tried, what made things worse, what helped a little. Patterns emerge that no single gastroenterologist would ever see because no single gastroenterologist has a thousand patients with post-cholecystectomy syndrome.
Let's talk about those platforms. Daniel asked specifically about what's out there, what works, what doesn't. You've looked into this.
The landscape breaks down into a few categories. You've got the general-purpose health forums — PatientsLikeMe, Inspire, HealthUnlocked. PatientsLikeMe is particularly interesting because it was founded in two thousand four by the brothers of a man with ALS, and from the beginning it was built around the idea that patients sharing structured data could accelerate research in ways traditional medicine couldn't. They've published something like a hundred peer-reviewed papers using patient-reported data. They're not just a support group with a comment section — they're essentially running ongoing observational studies.
Structured data — you mean people aren't just posting I feel terrible today, what do I do.
PatientsLikeMe asks you to log specific symptoms, treatments, dosages, outcomes, over time. They visualize it. You can see how your disease trajectory compares to others with the same condition. It's quantified self meets support group. The downside is that for something like post-cholecystectomy syndrome, the community is relatively small compared to, say, multiple sclerosis or ALS, which were the early focus.
The Facebook groups Daniel mentioned — those are the opposite end of the spectrum, I imagine. Unstructured, high volume, lots of noise.
With one massive advantage, which is scale and discoverability. If you type gallbladder problems after surgery into Google, the Facebook groups show up. They're the front door for a lot of people. The one Daniel referenced — and I know the one he means, it's got a name that's hard to forget — has tens of thousands of members. That's tens of thousands of people who are sufficiently motivated by their post-surgical experience to seek out a community. The content is a mix of horror stories, practical tips, supplement recommendations, and a fair amount of what I'd call folk medicine.
Which is where your ears probably perk up, as a former pediatrician.
It's where I get cautious. Because for every person in those groups saying, I started taking ox bile supplements and it changed my life, there's someone else saying, I tried the exact same thing and it made everything worse. The problem is that post-cholecystectomy syndrome isn't one condition. It's an umbrella term for a bunch of different things that can go wrong after the gallbladder comes out. Bile acid malabsorption, sphincter of Oddi dysfunction, small intestinal bacterial overgrowth, accelerated gastric emptying, fat malabsorption — these are different mechanisms requiring different interventions. The Facebook group doesn't triage you by mechanism. It gives you a firehose of remedies and you're the one who has to figure out which ones apply.
It's less crowd-sourced medicine and more crowd-sourced trial and error.
Trial and error with your digestive system is a high-stakes game. But here's the part I don't want to dismiss — and I think Daniel's right about this — even the trial and error has value when the alternative is nothing. If your gastroenterologist has run the standard tests, found nothing structurally wrong, and told you to manage your symptoms, then the Facebook group is at least offering hypotheses. Some of those hypotheses are going to be wrong, some are going to be actively harmful, but some are going to point toward things like bile acid sequestrants or digestive enzymes that a motivated patient can then discuss with an actual doctor.
The doctor becomes the validator rather than the discoverer.
And that flips the traditional model entirely. In the traditional model, the doctor generates the hypotheses and the patient complies. In this model, the patient community generates the hypotheses and the doctor serves as a safety filter and prescribing gatekeeper. It's uncomfortable for a lot of physicians, and I understand why — there's a reason medical training takes a decade. But for conditions where the standard diagnostic pathway dead-ends, it's what patients are doing anyway.
Daniel also mentioned YouTube as a platform for this. He put up a couple of videos about his own post-surgery experience, and he gets emails from strangers around the world who found those videos and recognized themselves in them.
YouTube is fascinating as a health platform because it's search-driven in a way that Facebook isn't. On Facebook, you find the group and then you scroll through whatever the algorithm serves you. On YouTube, people are actively searching for phrases like life after gallbladder removal or why am I still bloated after gallbladder surgery. They're in problem-solving mode. And when they find someone who describes their exact symptom cluster — the bloating that starts an hour after eating, the unpredictable urgency, the intolerance of foods that were fine before surgery — that recognition is powerful. It's not just informational, it's validating. Someone else has this. I'm not crazy.
The I'm not crazy part is probably underrated as a therapeutic outcome.
It's enormous. One of the things that comes up again and again in the literature on medically unexplained symptoms — and post-cholecystectomy syndrome sometimes falls into that category when the standard workup is negative — is that patients experience a kind of double burden. They have the physical symptoms, and then they have the social and psychological burden of having symptoms that the medical system can't explain or won't take seriously. Finding a community that says, no, this is real, we all have it, here's what we know — that addresses the second burden even if it doesn't fix the first.
The platforms serve at least three functions. Validation, which you just described. Hypothesis generation, which we talked about with the trial-and-error piece. And then there's something else — what Daniel called potential value in crowd-sourcing toward diagnosis and cure. Is there actual evidence that patient communities have solved medical mysteries that the system couldn't?
There are some well-documented cases. The most famous is probably the ALS ice bucket challenge, which wasn't a diagnostic effort but was a patient-community-driven research funding phenomenon that led to the discovery of the NEK1 gene's role in ALS. More directly relevant, there's a platform called RareConnect, which is specifically for rare disease patients. It's run by EURORDIS, the European rare disease organization. They've documented multiple cases where patients in different countries connected through the platform, realized they had the same undiagnosed symptom cluster, and collectively pushed for whole-exome sequencing that identified novel genetic conditions.
Rare diseases are almost the ideal case for this, because any single doctor might see one case in a career.
The rarer the condition, the more valuable the global network becomes. But what's interesting about post-cholecystectomy syndrome is that it's not rare — the surgery is incredibly common — but the persistent-symptom minority is distributed across so many different gastroenterology practices that no single practice builds up enough cases to do meaningful pattern recognition. So it's functionally rare even though it's numerically common. The patient community aggregates those dispersed cases.
Like a distributed telescope array. Each mirror is small, but point them all at the same patch of sky and you get resolution you couldn't achieve otherwise.
That's exactly the right analogy. And the resolution improves when the data is structured. This is where I think the next generation of platforms gets interesting. There's a company called StuffThatWorks that's built specifically around crowd-sourced treatment effectiveness data for chronic conditions. They ask structured questions — what have you tried, how well did it work, what side effects did you have — and they aggregate the responses statistically. For post-cholecystectomy syndrome specifically, their data shows that the most commonly reported effective treatments are bile acid sequestrants, digestive enzymes, and low-fat diets, in that order. But the effectiveness rates vary wildly — cholestyramine, a bile acid binder, works for about sixty percent of the people who try it, meaning forty percent get no benefit. That kind of granular, probabilistic data is something you'd never get from a Facebook comment thread.
Sixty-forty is interesting — it's effective enough to be clearly doing something real, but the failure rate is high enough to tell you that there are multiple underlying mechanisms. If it were one condition, you'd expect a tighter response rate.
That's the kind of insight that emerges from aggregated patient data. And it's the kind of thing that could, in theory, guide clinical trials. If you know that bile acid sequestrants work for sixty percent of patients, you can start asking what distinguishes the responders from the non-responders. Is it the presence of bile acid diarrhea specifically? Is it something about the gut microbiome? Those are answerable research questions, but they only get asked if someone notices the pattern.
We've got platforms ranging from the highly unstructured — Facebook groups, YouTube comments — to the semi-structured like PatientsLikeMe and StuffThatWorks, to the research-oriented like RareConnect. Are there downsides to any of this that we're not talking about?
The first and most obvious is quality control. On any open platform, the most confident voice isn't necessarily the most accurate. You get people recommending things like liver flushes or detox protocols that range from useless to dangerous. In a Facebook group, the person who posts most frequently can become a de facto authority regardless of whether they know what they're talking about.
The influencer-ification of medical advice.
It's particularly problematic with chronic digestive conditions because the symptoms fluctuate naturally. Someone tries a supplement, has a good week, and attributes the improvement to the supplement. That's how you get folk remedies that seem to work for some people — regression to the mean dressed up as a treatment effect.
The other downside I'd flag is the emotional contagion piece. Daniel mentioned the group called Gallbladder Surgery Ruined My Life and said it was a bit grim. If you're three months post-surgery, still having symptoms, and you join a group where the dominant narrative is that life is permanently ruined, that's not great for your mental health or probably for your physical recovery.
There's research on this — not specific to gallbladder groups, but on chronic illness communities generally. The emotional tone of the community affects outcomes. Communities that are high in validation and practical problem-solving tend to be associated with better coping and even better symptom management. Communities that are high in catastrophizing and medical mistrust tend to be associated with worse outcomes. The tricky part is that the same condition can produce both types of communities, and a new patient has no way of knowing which one they're walking into.
Daniel seemed aware of that tension. He said he had mixed feelings and recommended those groups more for mental health than for medical advice.
Which is a nuanced take. The mental health value is real — knowing you're not alone, knowing your experience is shared, that's genuinely therapeutic. The medical advice value is much more variable and depends entirely on the specific community and the specific advice.
Let's talk about the doctor side of this for a moment. You're a retired pediatrician. If a patient came to you and said, I've been researching my symptoms in an online community and I think I might have bile acid malabsorption and I want to try cholestyramine — how would you have responded?
It depends enormously on how they presented it. If they came in with printouts from a Facebook group and a confrontational attitude, that's a hard conversation. If they came in and said, I've been experiencing these specific symptoms, I connected with other people who had the same surgery, several of them said this medication helped them, what do you think — that's a completely reasonable conversation. I'd want to know the specifics of their symptoms, I'd want to rule out other causes, and if the clinical picture fit, I'd be open to a trial of the medication.
The difference being the patient's posture toward the doctor's expertise.
The responsibility goes both ways. Doctors need to be open to the possibility that a motivated patient who's been living with a condition for years and has compared notes with hundreds of other patients might know something they don't. The humility to say, I haven't seen this presentation before, but let's work through it together — that's what patients are looking for and too often not getting.
There's a structural problem here too, though. The fifteen-minute primary care appointment is not built for that kind of collaborative investigation.
It absolutely isn't. And gastroenterology referrals can take months. So the patient has a chronic, daily problem, the system moves slowly, and the online community is available right now. It's not surprising that people turn to the community first and the doctor second.
What would the ideal platform look like, if you were designing it from scratch?
I'd want something that combines the discoverability and emotional support of the Facebook groups with the structured data collection of PatientsLikeMe and the statistical rigor of StuffThatWorks. You'd have disease-specific communities where patients log their symptoms longitudinally, track interventions and outcomes, and the platform surfaces patterns automatically. You'd have some light medical moderation — not to censor, but to flag things like, this intervention has a high reported adverse event rate, or, the evidence for this claim is weak, here's what we actually know.
Ideally, integration with the clinical system so that the data patients are generating actually feeds back into research and practice.
That's the holy grail, and it's starting to happen in some areas. The FDA's patient-focused drug development initiative is explicitly incorporating patient experience data into the regulatory process. The National Institutes of Health's All of Us research program is collecting patient-reported outcomes at massive scale. The infrastructure is being built. The challenge is connecting it to the grassroots communities where patients are actually having these conversations.
Daniel mentioned that he gets emails from people who found his YouTube videos, and he hates to disappoint them because he never really achieved much progress himself. That seems like an important part of this story too — the person who becomes a node in the network without necessarily having solved their own problem.
Most of the people who become informal patient advocates or community moderators or YouTube explainers are still struggling themselves. They're not cured people dispensing wisdom from the other side. They're in the thick of it, sharing what they've learned so far. And that's actually part of what makes them credible to other patients — they're not selling a success story, they're documenting a journey.
The honesty of we don't know yet is powerful.
It's something the medical system struggles to say. Doctors are trained to project confidence. Patients want answers. There's a mutual pressure toward certainty that often produces premature closure — here's your diagnosis, here's your treatment, next patient. The online communities are messier, but they're more honest about the uncertainty.
Where does this leave someone who's listening right now and thinking, I'm three months post-surgery, I'm still having symptoms, my doctor says give it time, what should I actually do?
I'd say a few things. First, give it time is reasonable up to a point — six months is a more realistic window for the body to adapt than the six weeks many surgeons quote. If you're past six months and still having significant symptoms, it's reasonable to push for a gastroenterology referral. Second, if you're going to explore online communities, go in with your eyes open. The Facebook groups can be valuable for validation and for generating hypotheses, but treat the medical advice as hypotheses to test with your doctor, not as prescriptions to follow. Third, if you want something more structured, PatientsLikeMe and StuffThatWorks are better starting points because the data is organized and you can see aggregate treatment outcomes rather than individual anecdotes.
Fourth, document your own symptoms. Keep a food and symptom journal. The more structured data you bring to your doctor, the harder it is to dismiss.
That's excellent advice. A patient who walks in and says, I've tracked my symptoms for eight weeks, here's the pattern, here's what correlates with bloating, here's what correlates with urgency — that patient is going to get a very different conversation than someone who says, I just don't feel right.
It also helps you become your own first-pass analyst. You might notice patterns that no doctor would have time to extract from a fifteen-minute conversation.
That's the broader point about crowd-sourced healthcare. It's not that the crowd replaces the doctor. It's that the crowd does the pattern-recognition work that the current system isn't structured to do, and then the doctor validates, refines, and prescribes. It's a division of labor. The patients have the data volume and the lived experience. The doctors have the training to interpret the data and the authority to act on it. The platforms are the connective tissue.
It's almost like the medical system needs an API.
That's exactly what it needs. A way for patient-generated data to plug into clinical workflows without requiring the doctor to read through a hundred Facebook posts. And that's starting to happen — Apple Health Records, patient portals that accept uploaded data, platforms that generate clinician-facing summaries. It's early, and the adoption is uneven, but the direction is clear.
Daniel's prompt also touched on something about language and geography — the idea that somebody on the other end of the world might speak your language and be able to help. That global dimension seems under-explored.
It's hugely important, especially for conditions that are rare or have rare subtypes. If you're in Ireland, where Daniel's originally from, and you're one of a few hundred people in the country with a particular post-surgical complication, the local support group might be three people. But if you can connect with someone in Australia or Canada or the United States who has the exact same thing, suddenly you're not alone. The internet made that possible in a way that was unimaginable a generation ago.
There's also a cultural dimension. Different countries have different medical traditions, different supplement regulations, different approaches to digestive health. German doctors, for example, tend to be more open to herbal and enzyme-based treatments than American doctors. A global patient community exposes you to treatment approaches that might not be on the radar in your own country's medical system.
That's a double-edged sword — some of those approaches are legitimate and evidence-based, some are not. But the exposure itself is valuable, as long as you're filtering it through a skeptical lens and discussing it with a doctor who's willing to engage.
Before we wrap the substantive part of this, I want to come back to something Daniel mentioned that I think deserves more attention. He said he put up YouTube videos hoping to share a journey toward progress, though he never really achieved much progress. And yet people still find those videos and reach out. There's something about the act of sharing the unresolved journey that creates value for others, even when the sharer hasn't found their own resolution.
It's a form of what the writer and podcaster John Green calls community of the unwell — people who are in the same boat, rowing in roughly the same direction, none of them having reached the shore yet. The value isn't in having arrived, it's in having an honest account of what the journey looks like. For someone who's newly post-surgery and scared and confused, finding someone who's two years in and says, here's what I've tried, here's what helped, here's what didn't, here's what I still don't know — that's more useful than a medical pamphlet that says most patients recover fully within six weeks.
Because the pamphlet doesn't acknowledge the existence of the minority who don't.
And when you're in that minority, being told that you don't exist is almost worse than the symptoms.
The gaslighting of the statistical norm.
That's a strong word, but I think it's fair in some cases. Not intentional gaslighting — most doctors aren't trying to dismiss patients — but structural gaslighting, where the system is built around the typical case and the atypical case gets processed as if it doesn't exist. The online communities are where the atypical cases find each other and say, no, we're real, this is happening.
To pull this together — and this is really the answer to the core of what Daniel was asking — the value of crowd-sourced healthcare for chronic post-surgical conditions seems to operate on three levels. Level one is validation and community, which addresses the psychological burden and the isolation. Level two is hypothesis generation, where the community surfaces potential mechanisms and treatments that an individual patient can then explore with their doctor. Level three is aggregate data and pattern recognition, where structured platforms can identify treatment response rates and subpopulations in ways that inform both individual care and future research.
That's a good framework. And I'd add that the three levels aren't mutually exclusive — a single platform can serve all three to varying degrees. The challenge for someone navigating this space is knowing which level they're engaging with at any given moment. Am I here for emotional support right now, or am I here to generate treatment hypotheses? Because the standards of evidence are different for those two things.
The risks are different too. Emotional support is relatively low-risk. Treatment hypothesis generation carries real medical risk if you act on it without professional oversight.
Which is why the ideal patient posture in these communities is curious but cautious. Take the hypotheses seriously enough to investigate them, but not so seriously that you bypass medical supervision. And if your doctor won't engage with the hypotheses at all — if the response to I found this community and they suggested this mechanism is simply that's the internet, don't believe everything you read — that's a signal that you might need a different doctor.
The doctor who can say, that's interesting, let me look into that, or, here's why that probably doesn't apply in your case — that's the doctor you want.
They do exist. I know plenty of gastroenterologists who are perfectly happy to engage with well-informed patients who bring specific, reasonable questions. The key is being well-informed and specific, not just anxious and armed with a hundred Facebook screenshots.
Which brings us back to the structured data point. The better you document your own experience, the more credible your hypotheses become.
It's the difference between saying, people online said this might work, and saying, I've tracked my symptoms for three months, I noticed that fatty meals consistently trigger bloating within ninety minutes, I connected with others who have the same pattern, and several of them reported improvement with digestive enzymes — would that be worth discussing? Same underlying idea, completely different quality of conversation.
And now: Hilbert's daily fun fact.
Hilbert: In the seventeen twenties, naturalists in the Gobi Desert documented that the dragline silk of a particular orb-weaving spider, when stretched and released, produces an audible high-frequency click — a mechanical acoustic signature caused by the rapid recoil of the silk's protein crystalline structures snapping back into alignment after deformation.
...right.
Spider silk is basically a tiny sound effect.
One more thing to worry about in the desert, I suppose. Spiders that click at you.
This has been My Weird Prompts. Thank you to our producer Hilbert Flumingtop. If you want more episodes, you can find us at myweirdprompts dot com or on Spotify. Leave us a review if you're enjoying the show — it helps other people find us.
Until next time.