Daniel sent us this one, and it's a big one — he says the job market is fundamentally broken across traditional employment, freelancing, contracting, the whole thing. Remote work opened doors but half the listings tagged remote actually aren't. Both sides are using AI poorly — candidates spam applications, companies filter with AI agents, and the whole system frustrates everyone. His core question is, what if a platform let both sides express desires — what they'd love to do, who they'd love to hire — and used AI as an intermediate layer to surface matches instead of spamming and filtering? And he's asking, if we built this thing ourselves, how would it actually work technically?
Oh, this is the exact conversation I've been wanting to have. And before we dive in — fun fact, today's script is being written by DeepSeek V four Pro. So we've got a fresh set of circuits thinking through this with us.
Appreciate the assist, DeepSeek. Alright, let's get into it. The thing that jumps out at me right away is Daniel's distinction between needs-based and desire-based. He's basically saying the entire labor market is built on "I need money" meeting "I need a warm body in a chair" — and that's a terrible foundation for anything resembling good work.
It really is. And the numbers back up just how broken this has gotten. Greenhouse — the hiring platform — put out a report late last year showing the average job opening received two hundred and forty-two applications. That's nearly triple what it was back in twenty-seventeen. Each applicant has a zero point four percent chance of landing the job. You're basically buying a lottery ticket every time you hit submit.
Zero point four percent. That's not a job market, that's a scratch-off game with worse odds.
It gets worse. LinkedIn saw a forty-five percent spike in applications — they were hitting eleven thousand per minute in June of twenty twenty-five. Eleven thousand applications every minute. The applications-to-recruiter ratio hit five hundred to one. That's four times what it was four years prior. So recruiters are drowning, candidates are screaming into the void, and what does everyone do? They reach for more AI.
Which is the doom loop. Both sides automate, both sides get worse results, both sides double down on automation.
Greenhouse CEO Daniel Chait — great name by the way — he called it the "AI doom loop." Seventy-four percent of candidates admit to using AI in their job search. Ninety percent of hiring managers report seeing more low-effort, spammy AI applications. And here's the trust collapse — only eight percent of job seekers believe AI algorithms make hiring fairer.
You've got candidates who don't trust the system using AI to game it, and companies who do trust their own AI using it to filter out the gaming, and neither side thinks the other is operating in good faith. That's not a market, that's an arms race.
The ghost jobs make it worse. In June twenty twenty-five, about thirty percent of US job postings — that's two point two million roles — never resulted in a hire. Government was the worst offender at around sixty percent ghost jobs. Four in ten companies admitted to posting jobs they had no intention of filling. So candidates are sending applications into a black hole, getting no response, and thinking "maybe I need to send more applications."
Which makes the congestion worse. I was reading that Business Insider piece from last November — they interviewed Alvin Roth, the Nobel economist who designed matching systems for organ donors and school admissions. He said the forces that make it cheap to send more applications are working faster than the forces that allow you to quickly process many applications. That's the congestion problem. When sending an application costs nothing, you get infinite applications.
This is where the dating app parallel gets really interesting. Roth's work on market design shows that when a market gets too congested, you need signaling mechanisms. The rose on Hinge, the super like on Tinder — something that lets you say "no, I'm actually serious about this one."
Greenhouse built exactly that. They launched something called Dream Job in June twenty twenty-five. You get one designation per month. One application you can flag as "this is the one I actually want." That's it. You can still apply to other jobs, but you only get one signal of genuine desire.
The results are kind of stunning. Dream Job applicants advance four to five times faster through the hiring process. They get hired in about twenty and a half days, versus thirty-five to fifty days for regular applicants. Over fifteen hundred people have landed jobs through it. TIME named it one of the best inventions of twenty twenty-five.
The mechanism works. One signal of genuine intent outperforms a hundred generic applications. That tells you something fundamental about what's broken.
It tells you that desire is a better filter than keywords. But here's the thing — Dream Job is still bolted onto a traditional resume-and-job-description system. You're still uploading a PDF of your work history and matching against a list of required skills. Daniel's question goes further. What if the entire platform was built on desire from the ground up?
So instead of "I need a senior Python developer with five years of experience in fintech," you'd have a company saying "we have this gnarly data pipeline problem and we need someone who loves untangling messy infrastructure." And instead of a candidate saying "proficient in Python, SQL, and AWS," they'd say "I love taking chaotic systems and making them elegant. I get excited about edge cases."
The AI layer in between would have to do something much more interesting than keyword matching. It would need to understand latent desire — what people actually mean, not just what they say.
This is where it gets technically fascinating. Because people are terrible at articulating what they actually want. A candidate might say "I want to work in climate tech" but what they actually love is hard engineering problems with tight constraints, and climate happens to be where those problems live. A company might say "we need a marketing manager" but what they actually need is someone who loves building community from scratch.
You'd need a system that can infer underlying preferences from sparse signals. I think you'd want multiple layers. First layer — both sides create desire profiles instead of traditional resumes or job descriptions. The candidate writes "here's what I'd love to work on, here's the kind of problems that make me lose track of time, here's the kind of team I thrive in." The company writes "here's the problem we're trying to solve, here's the kind of person who would love working here, here's what makes this role meaningful."
You'd probably want to structure that with some prompts or frameworks so it's not just free-text rambling. Maybe something like — what's a project you've done that gave you the most energy? What's something you built that you're proud of, even if it wasn't for work? What kind of work makes you forget to eat lunch?
That second one is huge. The projects people build on weekends or between jobs — that's where you see actual desire. Not what they were paid to do, but what they chose to do.
The first layer is structured desire statements. The second layer would be the AI that processes those. And I think you'd want something closer to collaborative filtering than simple vector embeddings.
Explain that distinction.
Vector embeddings would take your desire statement and a job's desire statement and say "these are semantically similar, therefore match." But that's still just fancy keyword matching. Collaborative filtering is what Netflix uses — it looks at patterns across many users. People who loved this also loved that. In a job context, you'd train on outcomes. Who actually stayed in which role and reported high satisfaction six months or a year later? What patterns emerge from the people who genuinely thrived versus those who quit or got fired?
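That distinction can be sketched in a few lines. This is a toy user-based collaborative filter over hiring outcomes rather than semantic similarity; the candidates, role types, and satisfaction scores are all invented for illustration, not data from any real platform.

```python
# Toy collaborative filter: predict a candidate's satisfaction in a role
# from the outcomes of candidates with similar outcome histories.
from math import sqrt

# satisfaction[candidate][role_type] = six-month satisfaction score (0-1)
satisfaction = {
    "ana":  {"infra": 0.9, "frontend": 0.2, "data": 0.8},
    "ben":  {"infra": 0.8, "frontend": 0.3},
    "cara": {"infra": 0.1, "frontend": 0.9, "data": 0.3},
}

def similarity(a, b):
    """Cosine similarity over the roles two candidates have in common."""
    shared = set(satisfaction[a]) & set(satisfaction[b])
    if not shared:
        return 0.0
    dot = sum(satisfaction[a][r] * satisfaction[b][r] for r in shared)
    na = sqrt(sum(satisfaction[a][r] ** 2 for r in shared))
    nb = sqrt(sum(satisfaction[b][r] ** 2 for r in shared))
    return dot / (na * nb) if na and nb else 0.0

def predict(candidate, role):
    """Average other candidates' satisfaction in this role, weighted by
    how similar their outcome history is to `candidate`'s."""
    num = den = 0.0
    for other, ratings in satisfaction.items():
        if other == candidate or role not in ratings:
            continue
        w = similarity(candidate, other)
        num += w * ratings[role]
        den += w
    return num / den if den else None

score = predict("ben", "data")
assert score is not None and 0.0 <= score <= 1.0
```

Because ben's history tracks ana's more closely than cara's, the prediction for ben in a data role leans toward ana's outcome — "people who thrived in what you thrived in also thrived here," which is exactly the Netflix-style signal the embedding approach misses.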
You'd need a feedback loop. The platform doesn't just match — it follows up. Six months after a hire, both sides rate how it's going. Not "would you recommend this candidate" but "are you doing work that energizes you" and "is this person bringing the energy you hoped for."
That feedback trains the model. Over time, the AI learns that people who say "I love fast-paced environments" and then thrive are actually saying something different from people who say "I love fast-paced environments" and then burn out in three months. It learns to detect the difference between aspirational self-description and actual preference.
This raises one of the hardest problems though. Can you fake desire as easily as you can fake a resume?
That's the adversarial question. And I think the answer is — at first, yes. People would absolutely try to game it. They'd write flowery desire statements for jobs they don't actually want, just to get attention. Companies would write inspiring mission statements for roles that are actually tedious data entry.
The feedback loop might catch that. If you express desire for fifty different kinds of roles, the system should flag that as low signal. Genuine desire tends to be specific and somewhat narrow. Nobody truly loves everything.
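One plausible way to operationalize "desire that's too broad is low signal" is an entropy check over a candidate's stated interest distribution — a sketch under the assumption that desire is expressed as weights over role categories, with the threshold purely illustrative.

```python
# Flag "I want everything" profiles as low signal via Shannon entropy.
from math import log2

def desire_entropy(weights):
    """Entropy of a desire distribution over role categories.
    Near-uniform interest across many categories scores high (low signal);
    concentrated interest scores low (high signal)."""
    total = sum(weights.values())
    probs = [w / total for w in weights.values() if w > 0]
    return -sum(p * log2(p) for p in probs)

focused = {"infra": 0.8, "data": 0.2}
scattershot = {r: 1.0 for r in ["infra", "data", "frontend", "sales",
                                "marketing", "design", "ops", "legal"]}

assert desire_entropy(focused) < desire_entropy(scattershot)
# A platform might downweight anything above some threshold, say
# 2.5 bits, as "expresses desire for fifty kinds of roles" noise.
```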
And companies that consistently get low satisfaction ratings from hires would see their desire statements get downweighted. If you say "we want someone who loves creative problem-solving" but every hire reports that they spend their days filling out compliance forms, the system learns that your desire statements don't match reality.
You'd need some kind of reputation mechanism. Both sides have skin in the game — not necessarily money, but reputational capital within the platform. Your desire statements have weight proportional to how well they've predicted actual satisfaction in the past.
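That "weight proportional to past predictiveness" idea could look something like this — a hypothetical scoring rule (mean absolute error between stated and observed satisfaction, shrunk toward a neutral prior), not a claim about how any real platform scores reputation.

```python
def preference_weight(stated, observed, prior=0.5, floor=0.1):
    """Weight a user's future stated preferences by how well past
    statements predicted actual satisfaction. `stated` and `observed`
    are parallel lists of 0-1 scores from completed matches."""
    if not stated:
        return prior  # no track record yet: neutral weight
    mae = sum(abs(s - o) for s, o in zip(stated, observed)) / len(stated)
    return max(floor, 1.0 - mae)

# Stated enthusiasm that matched outcomes keeps near-full weight...
honest = preference_weight([0.9, 0.8, 0.7], [0.85, 0.8, 0.75])
# ...while chronic inflation gets heavily discounted.
inflated = preference_weight([0.9, 0.9, 0.9], [0.2, 0.3, 0.1])
assert honest > inflated
```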
Which is interesting because it inverts the current dynamic. Right now, candidates have no reputation — every application is a fresh start with zero context. Companies have reputations on Glassdoor, but they're totally disconnected from the hiring process. In a desire-based system, honesty becomes strategically valuable. The more accurately you express what you actually want, the better your matches.
Let me play out a potential failure mode though. What if this creates a two-tier system where people who are naturally good at articulating desire — who tend to be more educated, more privileged — get better matches, while people who struggle to express themselves get left behind?
That's a real concern. And I think you'd need to design around it. Maybe the desire statements aren't free-text at all — maybe they're structured choices. Instead of writing an essay about what you love, you're making a series of trade-off decisions. Would you rather work on a small team or a large team? Would you rather have clear direction or open-ended exploration? Would you rather solve known problems or discover new ones?
Forced-choice preference elicitation. That's actually more reliable than free text anyway, because it removes the performance aspect. You can't write a beautiful paragraph about your passion for synergy when the question is "do you prefer debugging or designing?"
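A minimal sketch of that elicitation step — the trade-off axes and questions are invented for illustration; a real instrument would be validated, adaptive, and much longer.

```python
# Map forced-choice answers onto a categorical preference profile.
# Each question names an axis and the value each option implies.
QUESTIONS = [
    ("Small team or large team?",                  "team_size", ("small", "large")),
    ("Clear direction or open-ended exploration?", "structure", ("high", "low")),
    ("Debugging or designing?",                    "work_mode", ("fix", "create")),
]

def build_profile(answers):
    """answers: list of 0/1 picks, one per question (0 = first option)."""
    return {axis: options[pick]
            for (_, axis, options), pick in zip(QUESTIONS, answers)}

profile = build_profile([0, 1, 0])
assert profile == {"team_size": "small", "structure": "low", "work_mode": "fix"}
```

The point of the structure is that every answer moves a concrete axis the matcher can use, instead of producing prose the matcher has to interpret.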
And the AI learns from your choices, not your self-description. Over time, it builds a preference model that might even tell you things about yourself you didn't know. "Based on your choices, you seem to thrive in roles with high autonomy and low structure, even though you've been applying to large corporate positions."
That's almost therapeutic. The platform as a mirror.
Which brings us to the company side. What would a company desire profile look like?
I think it has to be radically honest in a way most job descriptions aren't. Current job descriptions are marketing documents. They're designed to attract as many applicants as possible, which is exactly the wrong incentive in a congested market. A desire-based company profile should almost be designed to deter the wrong people.
"This role involves a lot of tedious regulatory paperwork. If you love meticulous detail work, you'll thrive. If you're looking for creative freedom, you'll hate it."
And that kind of honesty is rare because companies are terrified of narrowing their pipeline. But in a congested market, a narrow pipeline of high-fit candidates is better than a wide pipeline of randoms.
The math supports that. If you get two hundred and forty-two applications and hire one person, you're wasting enormous time filtering. If an honest desire profile gets you twenty applications and you hire one of them, your time-to-hire drops dramatically, your quality of hire probably goes up, and everyone suffers less.
There's a company called Jobright that launched an AI agent last year that tries to evaluate "mutual fit" — their language, not mine — and they claim it roughly doubles interview rates. But they're still operating on a skills-and-resume foundation. Nobody has built the full desire-based marketplace yet.
Let's actually design it. Daniel asked how we'd build this technically. I think you start with a structured preference elicitation engine for both sides. Candidates go through something like a twenty-minute interactive experience — trade-offs, scenario questions, project preference rankings. Companies do something similar — they describe the actual work, the actual team dynamics, the actual frustrations of the role.
The AI layer then has a few jobs. One, it builds preference vectors for both sides — not just "this person likes Python" but "this person thrives in environments with these properties." Two, it does the matching — but crucially, it doesn't just surface the highest similarity scores. It surfaces matches where the AI has high confidence that both sides would report high satisfaction six months later.
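One way to express "surface predicted satisfaction, not raw similarity" is to blend the two by model confidence. The formula here is an illustrative assumption, not a known production scoring rule.

```python
# Rank matches by expected mutual satisfaction rather than raw similarity.
def match_score(similarity, pred_candidate_sat, pred_company_sat, confidence):
    """Low-confidence satisfaction predictions fall back toward the
    similarity prior; the ranking only diverges from "looks alike" once
    the model has seen enough outcomes to trust its estimate.
    All inputs are 0-1."""
    mutual = min(pred_candidate_sat, pred_company_sat)  # both sides must be happy
    return confidence * mutual + (1 - confidence) * similarity

# High similarity but predicted candidate burnout ranks below a
# modest-similarity pairing the model expects both sides to enjoy:
flashy = match_score(0.95, pred_candidate_sat=0.3, pred_company_sat=0.9, confidence=0.8)
solid = match_score(0.6, pred_candidate_sat=0.85, pred_company_sat=0.8, confidence=0.8)
assert solid > flashy
```

The `min` is doing real work: a match where one side is thrilled and the other is miserable should score like a bad match, which is the "mutual" part of mutual fit.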
Which is a different optimization target entirely. Most job platforms optimize for "did the candidate get hired." That's a terrible metric. You want to optimize for "is everyone still happy a year later?"
That's a much harder metric to gather, which is why nobody does it. You need a relationship with both sides that extends past the hire. Most platforms disappear the moment the offer letter is signed.
You'd build in check-ins. Thirty days, ninety days, six months, one year. Short pulse surveys. "Are you doing work that energizes you?" "Would you hire this person again?" Not performance reviews — satisfaction signals.
You'd need to make those check-ins feel lightweight, not like corporate surveillance. Maybe it's just a thumbs-up thumbs-down once a quarter. "Is this working out the way you hoped?"
The third thing the AI layer does is learn from mismatches. When a hire goes badly, the system needs to understand why. Did the candidate misrepresent their preferences? Did the company misrepresent the role? Did something change after the hire? That diagnostic is where the model actually gets smarter.
Let me push on something though. You mentioned that companies might post ghost jobs even in a desire-based system. "We'd love to hire someone who..." but they have no budget, no headcount approval, it's just a fishing expedition. How do you prevent that?
Skin in the game. I don't think it has to be money — although a small listing fee would filter out a lot of nonsense. But it could be time. Require the company to complete a substantive desire profile that takes real effort. Require them to respond to matches within a certain window. If they consistently let matches expire without engaging, their visibility drops.
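A responsiveness score like that might weight recent behavior most, so a company can recover from a bad stretch. The half-life and the events are invented parameters for illustration.

```python
# Companies lose visibility when they let matches expire; recent
# behavior counts most via exponential decay.
def responsiveness(events, half_life=10):
    """events: list of (age_in_days, responded: bool).
    Returns a 0-1 score; each event's weight halves every `half_life`
    days, so old ghosting fades and recent ghosting stings."""
    num = den = 0.0
    for age, responded in events:
        w = 0.5 ** (age / half_life)
        num += w * (1.0 if responded else 0.0)
        den += w
    return num / den if den else 1.0  # new accounts start with full score

engaged = responsiveness([(1, True), (5, True), (30, False)])
ghosting = responsiveness([(1, False), (5, False), (30, True)])
assert engaged > ghosting
```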
A responsiveness score. Basically an Uber rating for companies.
And candidates get one too. Not based on their qualifications, but on their honesty. Did their stated preferences match their actual satisfaction? If someone consistently says "I love collaborative teams" and then reports low satisfaction on collaborative teams, the system stops weighting that preference signal.
This is starting to sound like a reputation economy built on honesty rather than credentials.
Which is kind of radical when you think about it. The entire current system is built on credential signaling — degrees, previous employers, titles. A desire-based system would be built on preference honesty. Your value to the platform isn't your resume, it's your track record of knowing yourself.
That flips the incentive structure entirely. Right now, the incentive is to inflate. Inflate your resume, inflate the job description, inflate your interest. In a desire-based system with a feedback loop, the incentive is to be accurate. Inflating hurts you because it leads to bad matches and bad satisfaction scores.
Let me raise the obvious objection before someone else does. This sounds great for knowledge workers in tech. Does it work for retail? For the millions of jobs where "desire" feels like a luxury?
That's fair. But I think desire exists at every level — it just looks different. Someone in retail might not be looking for "meaning" in the tech-industry sense, but they might have strong preferences about schedule predictability, team culture, customer interaction style. Do you love working the morning rush or do you prefer the quiet closing shift? Do you thrive on customer problem-solving or would you rather be stocking shelves? Those are real preferences that affect satisfaction.
A platform that surfaces those preferences would actually serve hourly workers better than the current system, which treats them as interchangeable units. The "needs-based" framing is most dehumanizing at the bottom of the labor market.
The gig economy is the extreme version of this. Uber doesn't care what kind of driving you enjoy. Amazon Flex doesn't care what kind of deliveries give you satisfaction. You're a capacity unit. A desire-based platform for gig work would be revolutionary — matching drivers who love early mornings with early morning routes, matching shoppers who love the puzzle of efficient packing with complex orders.
The principle scales. The implementation might look different — maybe it's a simpler preference interface, maybe the check-ins are SMS-based instead of app-based — but the core idea holds.
Let's talk about the adversarial problem more deeply though, because I think it's the hardest nut to crack. Even with reputation systems and feedback loops, people are creative. What's to stop a company from creating fake hire records with fake satisfaction scores to boost their visibility?
That's the Sybil attack problem. And I think the answer is — you need some form of verified employment data. Not a full background check, but something that ties a hire to a real person with a real tax ID. The platform doesn't need to know the details, but it needs cryptographic proof that a real employment relationship existed.
Some kind of zero-knowledge proof where the employer and employee both attest to the relationship without revealing sensitive details.
That's the direction. There are privacy-preserving verification protocols that could handle this. The platform knows a hire happened and both sides are providing satisfaction signals, but it doesn't know the salary or the performance reviews or any of the HR-internal data.
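To be clear, real zero-knowledge proofs are heavier machinery than this, but the core idea — the platform verifies that two parties attest to the same relationship without ever storing the sensitive fields — can be sketched with a plain hash commitment. Every identifier below is an invented placeholder.

```python
# Hash-commitment sketch of mutual hire attestation (NOT a real ZK proof).
import hashlib

def commitment(employer_id, employee_id, start_date, salt):
    """Both sides compute this independently; the platform stores and
    compares only the digests, never salary, reviews, or HR data."""
    payload = f"{employer_id}|{employee_id}|{start_date}|{salt}".encode()
    return hashlib.sha256(payload).hexdigest()

# Employer and employee agree on a salt out of band and submit separately:
from_employer = commitment("acme-llc", "taxid-123", "2025-06-01", "s3cret")
from_employee = commitment("acme-llc", "taxid-123", "2025-06-01", "s3cret")
assert from_employer == from_employee  # platform accepts the hire as real
# A fabricated hire can't produce a matching pair without the employee's
# cooperation — which is the Sybil resistance being described.
```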
This is getting into some pretty sophisticated infrastructure for what started as "what if a job board, but desire-based?"
Daniel asked how we'd build it technically. I'm taking him seriously.
Let me sketch out what the user experience might actually look like. You join the platform — not by uploading a resume, but by going through a preference interview. The AI asks you a series of questions, adapts based on your answers, and builds a profile. At the end, you see a summary — "Based on your responses, you seem to thrive in roles with high autonomy, creative problem-solving, and small teams. You seem to dislike rigid structure and repetitive tasks. Here are three roles that match."
The candidate can correct it. "Actually, I do like some structure, I just don't like micromanagement." The AI updates. The profile is a living thing, not a static document.
Then the matching happens invisibly. You don't browse listings — the platform surfaces matches to you. Maybe three a day, maybe five a week. Limited quantity, high quality. The opposite of the infinite scroll of Indeed.
When a match appears, it's not a job description. It's a problem statement. "A thirty-person company building logistics software needs someone who loves untangling legacy code and making it testable. The team is quiet, focused, and deeply nerdy. The office has a lot of plants."
I love the plants detail. Those texture signals matter. They help people self-select. If you hate plants, maybe this isn't your team.
The company side sees something similar. Instead of a resume, they see a preference summary. "This person thrives on refactoring chaos into order. They prefer written communication over meetings. They've reported high satisfaction in roles with deep focus time and low interruption."
Neither side sees the other's identity until there's a mutual expression of interest. The platform is the intermediary until both sides say "yes, I want to talk to this person."
Which solves another problem — bias. If you're matching on preferences rather than demographics, you can strip out name, age, gender, ethnicity, educational pedigree. None of that appears until both sides opt in.
The double-blind match. That's powerful.
Now: Hilbert's daily fun fact.
The collective noun for a group of sloths is a "bed" of sloths, which is either adorable or deeply misleading depending on whether you've ever tried to wake one up.
If someone's listening and thinking "this sounds great but it doesn't exist" — what can they actually do with these ideas right now?
First, you can use the Dream Job feature on Greenhouse if you're job hunting. It's the closest thing to a desire signal in the current market, and the data says it works. One application a month where you say "this is the one I actually want." Use it strategically.
Second, even without a platform, you can start thinking in desire terms when you're job hunting. Before you apply anywhere, write down what you'd actually love to work on. Not what you're qualified for — what you'd love. That self-knowledge changes how you evaluate opportunities. You start filtering for fit instead of filtering for "can I get this job.
Third, if you're hiring, write your job description like a desire profile. Be honest about what's hard about the role, what kind of person would love it, what kind of person would hate it. You'll get fewer applicants and better ones.
Fourth, push back on the AI doom loop. If you're a candidate using AI to generate applications, ask yourself whether you're actually expressing desire or just adding noise. If you're a company using AI to filter, ask whether you're actually finding fit or just pattern-matching keywords.
The broader point is that the system is broken because it's optimized for volume, not fit. Any move you can make toward fit — toward genuine preference, toward honest signaling — is a move away from the doom loop.
The open question I'm left with is whether a desire-based platform would actually be commercially viable. The current system makes money on volume — job boards charge per post, LinkedIn sells premium subscriptions that encourage more applications, ATS companies charge based on the number of seats. A platform that deliberately reduces volume in favor of fit would be betting that companies will pay more for better hires. That's not obvious.
The Greenhouse data suggests they might. If Dream Job applicants get hired four to five times faster, that's real cost savings. Time-to-hire is expensive. Bad hires are even more expensive. There's a business case for quality over quantity — it's just that nobody has built the full-stack version yet.
The congestion problem isn't going away. The AI doom loop is accelerating. At some point, the market tips and quality-based matching stops being a nice idea and starts being a competitive necessity.
When that happens, I hope whoever builds it remembers the plants detail.
Thanks to our producer Hilbert Flumingtop for the fun fact and for keeping this show running. This has been My Weird Prompts. You can find every episode at myweirdprompts dot com or wherever you get your podcasts. If you enjoyed this, leave us a review — it helps.
I'm Herman Poppleberry.
I'm Corn. See you next time.