You know, Herman, I was looking at my browser tabs this morning, and I realized something depressing. Every single one of them—from my email to my project tracker to the app I use to order coffee—has a little sparkling "AI" icon in the corner now. It is like the tech industry collectively decided that if you do not have a glowing purple gradient somewhere on your landing page, you do not exist.
It is the Great Sparkle Migration of twenty twenty-four and twenty twenty-five, Corn. We are living through the peak of the "AI-washed" era. Honestly, today's prompt from Daniel is about exactly that—separating the wheat from the chaff. He is asking us to look at this new wave of "AI-first" SaaS apps and figure out who is actually building something new and who is just putting lipstick on a chatbot pig.
Lipstick on a chatbot pig. That is a vivid image I did not need before my second coffee. But he is right to ask. It is getting harder to tell. You open an app, it asks if you want it to "summarize" or "rewrite in a professional tone," and that is it. That is the "AI revolution" for most legacy companies. By the way, speaking of revolutions, today's episode is powered by Google Gemini three Flash. It is the one pulling the strings behind the curtain today.
Herman Poppleberry here, ready to pull some curtains back myself. And look, the distinction Daniel is poking at is fundamental to how software is going to work for the next decade. There is a massive architectural gulf between "AI-native" and "AI-retrofit." If you are a legacy player like Salesforce or HubSpot, you have decades of technical debt and a rigid data model built for humans to manually enter data into boxes. You cannot just flip a switch and become AI-native. You end up bolting on a sidebar.
Right, the "sidebar syndrome." It is like adding a GPS to a horse and buggy. Sure, the horse knows where it is going now, but you are still shoveling manure and moving at five miles per hour. So, for the sake of the listeners who are drowning in subscription fees, how do we actually define "AI-first" without just repeating marketing fluff? What is the technical "litmus test" here?
For me, it comes down to the data model and the workflow. In a traditional SaaS app, the software is a passive bucket. It sits there and waits for you to type. You are the integration engine. You take data from an email, you summarize it, you click "New Lead," you paste it in. In an AI-native app, the AI is the primary engine for data structuring. It does not wait for you to fill out a form; it watches the "digital exhaust"—your emails, your calls, your Slack messages—and it builds the database for you.
So, instead of a "System of Record," where I am the data entry clerk, it is a "System of Intelligence" where the app is actually doing the observational work?
That is the core of it. And you can see this clearly when you look at the new wave of CRMs. Take a company like Attio, for example. If you look at how they handled the transition in twenty twenty-five, they moved away from the rigid objects of the past. Instead of saying "here is a contact record with twenty fixed fields," they use what they call "Magic Fields." These are powered by large language models that scan your real-time communication history. It can categorize a lead or calculate deal risk based on the actual tone of your last three emails, not just because a salesperson checked a box saying "feeling good about this one."
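To make Herman's "Magic Fields" idea concrete, here is a toy sketch of the pattern: derived CRM fields recomputed from communication history instead of manual entry. This is not Attio's actual implementation, and the keyword heuristic is just a stand-in for a real LLM call; all names here are hypothetical.

```python
def score_deal_risk(emails: list[str]) -> str:
    """Stand-in for an LLM that reads recent emails and rates deal risk."""
    red_flags = ("budget cut", "on hold", "no reply", "pushed back")
    hits = sum(any(flag in e.lower() for flag in red_flags) for e in emails)
    if hits >= 2:
        return "high"
    return "medium" if hits == 1 else "low"

def refresh_magic_fields(record: dict) -> dict:
    """Recompute derived fields from the record's 'digital exhaust'."""
    record["deal_risk"] = score_deal_risk(record.get("recent_emails", []))
    return record

lead = {
    "name": "Acme Corp",
    "recent_emails": [
        "Thanks, we're excited to move forward!",
        "Heads up: there's been a budget cut on our side.",
        "The project is on hold until next quarter.",
    ],
}
print(refresh_magic_fields(lead)["deal_risk"])  # high
```

The point of the shape, not the heuristic: the field is a function of the exhaust, so it updates whenever the exhaust does, with no checkbox involved.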
I love that, because salespeople are notoriously optimistic—or pessimistic—depending on how close they are to the end of the quarter. Having an objective LLM look at the "exhaust," as you put it, and say, "Actually, the client hasn't replied to the last two follow-ups and mentioned a budget cut in the first call," that is a genuine value-add. It is moving the AI from the sidebar into the actual data pipeline.
And that is the key! If the AI is in the pipeline, it can maintain context. One of the biggest problems with the "bolted-on" chatbots in legacy tools is the context window problem. You open the AI sidebar in a document editor, you ask it a question, and it only knows about the text currently on the screen. It doesn't know about the five other project docs in your folder. It doesn't know what you discussed in the meeting yesterday. It is a goldfish.
A very polite, very fast goldfish that can write poetry. But still a goldfish.
Well, not exactly—I promised I wouldn't say that word. You are right. It is a goldfish. Compare that to something like Folk. They are calling themselves a "Relationship Manager" rather than a CRM. Their AI, which they call "Magic Columns," doesn't just summarize; it executes. It scans LinkedIn updates and recent news about your contacts and then drafts hyper-personalized "icebreakers." It is not just saying "write a cold email." It is saying "write an email mentioning that this person just got promoted and their company just opened a new office in Austin."
See, that feels like a workflow shift. It is not just making the writing part faster; it is removing the "research and synthesis" part of the job that usually takes up eighty percent of the time. But let's talk about the "action execution" gap. This is where I see a lot of these apps falling over. It is one thing to suggest a follow-up; it is another thing to actually manage the project.
That is where the project management space is getting really interesting. We have seen a huge divide between the "writing assistants" and the "agentic" tools. Notion is the classic example of a legacy tool that moved fast—they added AI blocks very early. And look, Notion AI is great for drafting a blog post or summarizing a meeting note. But is it managing your project? Not really. It is still a document editor with a very smart intern sitting inside it.
Right, it is "AI as a feature" versus "AI as an employee."
That is a great way to put it. Now, look at Taskade. They have taken a completely different architectural path. Instead of just a writing block, they built a system of autonomous AI Agents. In Taskade, you can actually deploy multiple agents to work on different parts of a project simultaneously. You can have one agent whose entire job is to research market trends, another that audits your current tasks for bottlenecks, and a third that drafts the project plan. They work in the background while you are doing other things.
Wait, so it is not just me prompting a box and waiting for an answer? They are actually working on the "canvas" of the project?
Yes. They introduced something called "Taskade Genesis" in late twenty twenty-five. It basically lets you build custom mini-apps or automations using natural language. You tell it, "I want a workflow that monitors our support inbox and automatically creates a high-priority task if a customer mentions a billing error," and the AI builds that logic into the workspace. It is not just "summarizing" the support ticket; it is building the infrastructure to handle it.
That feels like the "Vibe Coding" shift we have talked about before, but applied to business processes. If I can describe an automation and the PM tool just... becomes that automation... that is a massive leap over Jira or Trello, where I have to learn a complex query language or set up twenty different "if-this-then-that" rules manually.
It is the death of the "configuration specialist" role. In the old world, you had to hire a "Salesforce Admin" or a "Jira Expert" to make the tool work for your team. In the AI-native world, the tool adapts to the team's natural language. Linear is another great example of this in the developer space. They have stayed relatively quiet about "AI" in their marketing compared to others, but they integrated what they call "Product Intelligence."
I have used Linear. It feels incredibly snappy, but I didn't realize there was a deep AI layer there. What are they doing differently?
They are attacking the "triage" problem. If you are running a software team, the most painful part of your week is looking at a hundred bug reports and feature requests and deciding what is important, who should do it, and which milestone it belongs to. Linear's AI layer looks at the historical behavior of your team—who usually handles what, how long certain types of tasks take, what the code context of the bug report is—and it automatically triages the incoming work. It is not a chatbot you talk to; it is a background process that keeps the deck clear.
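As a toy version of that background triage idea: route each new issue to whoever has historically closed the most issues with the same label. This simple frequency heuristic is a stand-in for illustration, not Linear's actual Product Intelligence model.

```python
from collections import Counter, defaultdict

def build_routing_table(history: list[tuple[str, str]]) -> dict[str, str]:
    """history: (label, assignee) pairs from previously closed issues."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for label, assignee in history:
        counts[label][assignee] += 1
    # For each label, pick the teammate who has handled it most often.
    return {label: c.most_common(1)[0][0] for label, c in counts.items()}

def triage(label: str, routing: dict[str, str], fallback: str = "unassigned") -> str:
    return routing.get(label, fallback)

history = [
    ("payments", "dana"), ("payments", "dana"), ("payments", "lee"),
    ("frontend", "lee"), ("frontend", "lee"),
]
routing = build_routing_table(history)
print(triage("payments", routing))  # dana
print(triage("infra", routing))     # unassigned
```

Notice there is no chat interface anywhere in this: it runs on arrival, which is exactly the "background process" quality Herman is pointing at.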
That is the "invisible AI" approach. I think I prefer that over the "chat with your data" trend. I don't want to have a conversation with my issue tracker. I want my issue tracker to stop bothering me with manual labor.
I think that is a sentiment a lot of people share. The "chat" interface was a great way for us to discover what LLMs could do in twenty twenty-three, but by twenty twenty-six, it is starting to feel like a burden. If I have to type a long prompt to get a simple result, the "efficiency" gains start to evaporate. The real winners in this "AI-first" wave are the ones that use the AI to reduce the number of clicks and the amount of typing I have to do.
So, let's look at the second-order effects of this. If we are moving toward these agentic, autonomous tools, what does that do to team collaboration? If the AI is triaging the bugs and drafting the outreach, what are the humans actually doing in these apps?
They are becoming "Editors-in-Chief" or "Directors." The work shifts from "execution" to "intent." You are setting the strategy and then reviewing the outputs. But this creates a new set of problems, Corn. Specifically, the "context collapse" problem. If you have five different AI agents working on a project, how do you ensure they are all aligned with the latest strategy change you made in a different meeting?
Right, if I tell my "Research Agent" one thing and my "Sales Agent" another, and they don't talk to each other, I am just managing a digital version of a dysfunctional office. Which brings up an interesting technical point: how are these AI-native apps handling the "shared memory" across these agents?
This is where the underlying architecture really matters. The legacy tools that retrofitted AI usually have a "stateless" implementation. Every time you open the AI sidebar, it is like a fresh start. The AI-native apps are building "Vector Databases" or "Knowledge Graphs" directly into the core of the app. So, when you change a project goal in the "Strategy" tab, that change is immediately propagated as "context" to every agent working in every other tab. It is a unified brain for the company.
That sounds expensive. Both in terms of compute and in terms of the "AI premium" these companies are charging. I have noticed that a lot of these "AI-powered" legacy tools are tacking on fifteen, twenty, or even thirty dollars a month per user just for the "AI features." Is that sustainable?
It is the "subscription graveyard" all over again. Gartner put out some interesting data recently showing that while eighty-five percent of software vendors added AI features in twenty twenty-four, only twelve percent actually integrated it into the core workflow. Yet, almost all of them are trying to charge a premium for it. Users are starting to realize they are paying thirty dollars a month for a "summarize" button they use twice a week.
That is a terrible ROI. I mean, if I am paying for an AI-native tool like Taskade or Attio, I am essentially paying for a digital employee. If that employee is actually doing the work of a junior analyst or a virtual assistant, thirty dollars is a steal. But thirty dollars for a "professional tone" rewriter? That is a scam.
And that is why we are seeing a shift in pricing models. Taskade, for example, has experimented with moving away from "per-seat" pricing toward "per-outcome" or "agent-based" pricing. They offer unlimited AI agents for a flat team fee. They are challenging the legacy model because they know that in an AI-driven world, adding a "human seat" shouldn't necessarily cost more if the AI is doing the bulk of the heavy lifting.
That is a fascinating pivot. It acknowledges that the "unit of value" in software has changed. It used to be "access to the tool." Now it is "work performed by the tool."
Precisely. Well, there I go again. You are absolutely—wait, no. You are right on the money. The "work performed" is the new metric. Look at SleekFlow in the consumer and retail space. They are doing "agentic commerce." Instead of just a chatbot that says "here is our shipping policy," their AI agents can actually handle a full sale on WhatsApp or Instagram. They can check inventory, process a payment, and schedule a delivery without a human ever touching the keyboard. If you are a small business owner, you don't care about "per-seat" pricing; you care about how many sales that agent closed while you were sleeping.
Okay, so we have established that AI-native tools are winning on architecture, context, and potentially pricing. But what about the risks? If I move my entire business process into an "AI-first" CRM like Attio, am I just creating a new kind of vendor lock-in? If the AI has "learned" my business through its observations, how do I move that "intelligence" if I want to switch tools later?
That is the "Data Sovereignty" nightmare of twenty twenty-six. In the old world, you could export a CSV of your leads. In the AI-native world, the value isn't just the list of leads; it is the "inference" the AI has made about them. How do you export a "probability of churn" model that was built on two years of email history? Right now, you can't. We don't have a standard for "AI Model Portability."
So we are trading "manual labor" for "algorithmic capture." It is a bit of a devil's bargain. I get my weekend back, but the software now knows more about my business than I do, and I can't leave.
It is a risk, for sure. But the counter-argument is that these AI-native tools are showing forty to sixty percent higher user retention rates in recent benchmarks compared to the legacy tools. People aren't staying because they are locked in; they are staying because the tool is actually solving the "mental load" problem. If I go back to Salesforce after using Attio, I feel like I am being asked to build my own car before I can drive to work.
I get that. Once you have seen the "Magic Fields" fill themselves in, going back to manual data entry feels like a punishment. It is like trying to use a paper map after getting used to Google Maps. You can do it, but why would you?
And this brings us to the "Actionable Takeaways" part of the show. If you are a small business owner or a team lead drowning in these "AI-washed" options, how do you actually audit your stack? I have a "three-question test" I have been developing for this.
I love a good framework. Lay it on me.
Question one: Does the AI learn from your data passively, or do you have to feed it manually? If you have to highlight text and click "Summarize," it is a retrofit. If it sees your emails and updates a field automatically, it is native.
Okay, that is a clean line. Passive observation versus active prompting. What is number two?
Question two: Can the AI execute an action in another part of the app, or is it restricted to a sidebar? If the "AI Assistant" can create a task, assign a budget, and notify a teammate based on a conversation, it is integrated. If it just gives you a checklist and says "go do this," it is a glorified search engine.
"Glorified search engine." Ouch. And the third one?
Question three: Does it maintain state across sessions? If you ask it a question today, does it remember what you told it yesterday? This is the context window test. If every interaction is a "first date," it is not an AI-first tool. It is just a wrapper for a generic model.
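Herman's question three reduces to a one-screen contrast: a stateless assistant forgets between calls, while a stateful one carries prior turns forward. The "model" here is a trivial echo standing in for a real LLM call; this is an illustration of the test, not any product's code.

```python
def stateless_ask(question: str) -> str:
    # No memory: every call is a "first date".
    return f"answer to: {question}"

class StatefulAssistant:
    def __init__(self):
        self.history: list[str] = []

    def ask(self, question: str) -> str:
        self.history.append(question)
        # A real tool would feed self.history back to the model as context.
        return f"answer to: {question} (remembering {len(self.history) - 1} earlier turns)"

bot = StatefulAssistant()
bot.ask("What is our Q3 goal?")
print(bot.ask("And who owns it?"))  # remembers 1 earlier turn
```

The follow-up question "And who owns it?" is only answerable at all if yesterday's context survives, which is why this makes a clean litmus test.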
That is a solid framework. I would add a fourth one, more on the "red flag" side: Look at the "coming soon" list. If a company's "AI Roadmap" is full of basic things like "PDF summarization" or "Email drafting," they are playing catch-up. They are trying to tick boxes for investors. The real innovators are talking about "agentic workflows" and "automated triage" and "outcome-based pricing."
That is a great point. The "feature checklist" is the enemy of genuine innovation. You see these legacy companies announcing "fifty new AI features" in a single press release. No human can meaningfully use fifty new AI features in a single app. It is just noise. The AI-native tools usually have fewer "features" but a much deeper "capability."
It is the difference between a Swiss Army knife where every blade is dull and a really high-end chef's knife. I only need one tool if it is the right tool and it stays sharp. So, looking forward, do you think the legacy tools can ever actually close this gap? Can HubSpot or Notion eventually "re-architect" themselves to be truly AI-native, or are they destined to be the "IBM Mainframes" of the AI era?
History suggests it is incredibly hard. It is the "Innovator's Dilemma." If Salesforce truly moved to an AI-native, automated data model, they would destroy the business model of the thousands of "Salesforce Consultants" who make their living on the complexity of the old system. They have an ecosystem that relies on the software being difficult to use.
Right, the "Complexity Industrial Complex." If the software works perfectly out of the box, you don't need a ten-thousand-dollar-a-month agency to manage it. That is a lot of people to make angry.
Oops, I did it again. I am so sorry. You are right. It is a massive hurdle. Meanwhile, the new players like Taskade and Attio and Linear don't have that baggage. They can build for a world where the AI is the primary user of the software, and the human is just the supervisor. That is a fundamentally different design philosophy.
It is "Design for AI" versus "Design for Humans with AI as a sidekick." I think we are going to see a massive consolidation in the next eighteen months. The "subscription graveyard" is going to get a lot of new residents as people realize they can replace four or five "AI-washed" tools with one genuinely AI-native platform.
I think that is already happening. The "SaaSpocalypse" of twenty twenty-five was the first wave of that. Companies realized they were paying for twenty different "AI assistants" that didn't talk to each other. The move toward "Agentic" tools is the logical next step. Why pay for a PM tool, a CRM, and a separate automation tool like Zapier when an AI-native platform can handle the data pipeline, the project tracking, and the execution all in one?
It is a return to the "All-in-One" suite, but this time it actually works because the AI is the glue that holds the different modules together. Well, this has been a trip. I feel like I need to go audit my own browser tabs now and see how many "goldfish" I am currently paying for.
Just look for the sparkles, Corn. If it sparkles but doesn't do your chores, it is time to let it go.
Sound advice. Before we wrap up, I want to mention that if you are interested in the "build it yourself" side of this—moving away from the "SaaS graveyard" entirely—you should check out our older conversation in episode sixteen hundred and three, "Fire Your Software Subscriptions and Just Code the Vibe." It is a good companion to this if you are feeling particularly rebellious against the subscription model.
A classic. And a big thanks to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes.
And of course, thanks to Modal for providing the GPU credits that keep our digital brains humming. This has been My Weird Prompts.
If you found this useful, we are on Spotify and would love a follow there to help more people find the show.
Or find us at myweirdprompts dot com for the full archive and RSS feed. Catch you in the next one.
See ya.