Daniel sent us this one, and it's something he's been sitting with for a while. He wants to dig into the history of the low-code and no-code movements, and more pointedly, whether those movements are on their way out. His core grievance is that these tools create walled gardens and are never quite honest about their ceilings. And his argument is that AI code generation is better not just because it lowers the barrier to entry, but because it actually shows you the path to doing things properly. He's rooting for agentic code generation to be the death knell for low-code. But he's also asking whether the data supports that, and whether low-code still has a legitimate role. By the way, today's episode is powered by Claude Sonnet four point six.
Which is fitting, given the topic. Alright, so the market figures here are genuinely striking. The low-code and no-code market was sitting at around forty-five and a half billion dollars in twenty twenty-five, and projections have it reaching over a hundred billion by twenty thirty. Seventy-five percent of new enterprise applications are projected to use low-code or no-code tooling by the end of this year. Those are not numbers that suggest a dying movement. And yet Daniel's instinct that something is structurally wrong with these platforms is, I think, basically correct. The market size and the underlying quality of what's being built are two very different conversations.
Right, and that gap is exactly what makes this interesting. The money is flowing in, the adoption numbers look healthy on paper, but the complaint from developers and even from a lot of enterprise teams is that they keep hitting walls they weren't warned about. So let's actually start with where this came from, because I think the history matters here. This didn't appear out of nowhere—its roots go way back.
The lineage goes back further than most people realize. You can trace a direct line to the Rapid Application Development work in the eighties and early nineties, the Visual Basic era, the 4GL database tools. The promise was always the same: get closer to the problem domain, reduce the gap between what a business person wants and what gets built. Forrester actually gave "low-code" its formal name in 2014, but the underlying impulse is decades older.
Which means we've had roughly forty years to figure out why it keeps running into the same wall.
That's the thread worth pulling. Every generation of these tools arrives with the same pitch and runs into the same structural problem. The abstractions that make things easy at the start are the same abstractions that trap you later. The platform has to make choices about what's possible, and those choices are invisible to you until you're already committed.
At its core, this is a question about where the complexity goes. Every tool has to put the complexity somewhere. Low-code says: we'll absorb it. AI code generation says: we'll show it to you, but in a form you can work with.
That's a good way to frame it. And what AI generation does differently, especially the agentic stuff, is that it doesn't hide the seams. When GitHub Copilot writes you a function, you can read that function, modify it, understand it, build on it. The artifact is transparent. When a no-code platform generates some internal representation of your workflow, you often can't even see what it's doing underneath.
The path to learning exists in one case and is actively sealed off in the other.
Which is exactly Daniel's point, and I think it's the right lens for everything we're about to get into.
Let's actually stress-test that. Because I hear Daniel's argument, and I find it compelling, but Herman, you've been skeptical of no-code platforms before without necessarily writing them off entirely. Where does the walled garden problem actually bite hardest?
The customization ceiling is the most obvious one, but I think the vendor lock-in issue is actually more damaging in practice and gets less attention. When you build something substantial on a platform like OutSystems or Appian or even the Microsoft Power Platform, your business logic, your data models, your automation flows, they're encoded in proprietary representations that don't export cleanly to anything else. You haven't built software, you've built a configuration of someone else's software. And the moment that vendor changes their pricing, or deprecates a feature, or just makes a product decision you don't like, you have very limited options.
You're essentially a tenant who's wallpapered the whole apartment.
Load-bearing wallpaper at that. The transparency problem is related but distinct. It's not just that you can't leave, it's that while you're there, you can't see what's actually happening. A workflow that looks simple in a drag-and-drop interface might be generating deeply inefficient database queries underneath, and you have no visibility into that until your application starts performing badly under real load. By which point you've already built a team around this tool, you've trained people on it, it's in production.
Why does this keep happening, though? Because these vendors aren't hiding it maliciously. They're making design choices that optimize for the onboarding experience.
And it's almost a structural inevitability. The value proposition of these platforms is that non-technical people can build things. To deliver that, you have to abstract away complexity. But abstraction means decisions get made for you. The platform assumes you want the most common case, and it bakes that assumption in at a level you can't easily reach. The further you get from the most common case, the more friction you encounter. And the dirty secret is that most real business problems, once they get past a certain scale or specificity, are not the most common case.
There's a CMARIX breakdown I saw recently projecting that eighty percent of low-code users will be outside IT departments entirely by the end of this year. So you've got citizen developers, business analysts, operations people building things that are going into production, and they have even less visibility into these failure modes than a trained developer would.
Which compounds everything. A developer hitting a customization ceiling at least understands what they're hitting. A business analyst doesn't know what they don't know, and the platform isn't going to tell them. The governance and security implications of that are alarming in enterprise contexts. You end up with shadow IT, but shadow IT that has a vendor's logo on it, so it feels legitimate.
Now contrast that with what happens when someone uses an AI code generation tool to build the same thing. Walk me through the actual difference.
The artifact is real code. That's the fundamental thing. If I use Copilot or Cursor or any of the current agentic coding tools to build, say, a customer intake workflow, what I get back is Python or TypeScript or whatever I asked for. I can read it. A senior developer can review it. I can put it in version control, I can test it, I can modify any part of it, I can swap out the database layer without touching anything else. The transparency isn't just a nice-to-have, it's what makes the whole thing maintainable over time.
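To make that "swap out the database layer without touching anything else" claim concrete, here's a minimal Python sketch of what a transparent, reviewable artifact can look like. All names here (`CustomerStore`, `InMemoryStore`, `intake`) are hypothetical, not output from any specific tool; the point is that the storage seam is explicit, so any backend satisfying the interface can replace it.

```python
# Hypothetical sketch of a customer intake workflow with the storage
# layer behind an explicit interface. Illustrative names throughout.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class Customer:
    name: str
    email: str


class CustomerStore(Protocol):
    """Any storage backend only needs to provide save()."""
    def save(self, customer: Customer) -> None: ...


@dataclass
class InMemoryStore:
    """Trivial backend; a SQL- or API-backed store could replace it."""
    records: list = field(default_factory=list)

    def save(self, customer: Customer) -> None:
        self.records.append(customer)


def intake(store: CustomerStore, name: str, email: str) -> Customer:
    """Validate input, persist the customer, and return the record."""
    if "@" not in email:
        raise ValueError(f"invalid email: {email}")
    customer = Customer(name=name, email=email)
    store.save(customer)
    return customer


store = InMemoryStore()
intake(store, "Ada", "ada@example.com")
print(len(store.records))  # 1
```

Because the artifact is ordinary code, every piece of this is inspectable, testable, and replaceable, which is exactly the property a proprietary workflow representation denies you.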
Daniel's point about the learning path is real in that context too. Because when you're reading that generated code and you don't understand a piece of it, you can ask. The tool explains it. You're actually moving along a competence curve.
Whereas with a no-code platform, even after years of using it, you haven't learned anything transferable. You've learned that platform. The knowledge doesn't compound in the same way, and that's where the real problem lies.
It's a compounding problem that I think gets undersold. It's not just that the knowledge is siloed, it's that the ceiling arrives before you even realize you've been learning the wrong thing. Which brings us to what happens at scale, because the enterprise picture here is where it gets complicated.
Microsoft is the case study I keep coming back to. The Power Platform has been one of the most aggressively marketed low-code ecosystems of the last several years. Huge adoption numbers, deeply embedded in enterprise workflows. And what happened when they started integrating Copilot into it? The feature that got the most traction wasn't the drag-and-drop builder. It was the natural language to automation pipeline, where you describe what you want and it generates the underlying logic for you. Users were essentially routing around the visual interface the moment they had an alternative.
Which tells you something about what people actually wanted from these tools in the first place. They didn't want the drag-and-drop experience. They wanted the outcome, and the drag-and-drop was just the least bad option available at the time.
That's a really important distinction. The demand was always for accessibility, not for the specific mechanism. Low-code captured that demand because it was the first reasonable answer. AI generation is a better answer to the same question, and the Power Platform Copilot data suggests users recognized that pretty fast.
For a developer or a business making a decision right now, what does that actually mean in practice? Because the enterprise procurement cycle is not fast. People have contracts, they have trained teams, they have workflows in production.
The practical reality is that most enterprises are not going to rip out their existing low-code deployments in the next twelve months. The switching costs are real. But the more interesting question is what they build next. And there, the pattern is pretty clear. New greenfield projects are increasingly starting with AI-assisted code generation rather than no-code platforms. The Gartner and Forrester analyst notes from the last year have been pretty consistent on this. The growth in low-code adoption is largely in the long tail: smaller organizations, citizen developer use cases, internal tooling that doesn't need to scale. The enterprise edge is shifting.
Which brings us back to Daniel's actual question about the data. Because the market size figures look like growth, but if you peel back what's actually growing, it might not be the part that competes with AI generation.
That's where I think the headline numbers are misleading. The low-code market reaching forty-five billion and projecting past a hundred billion by twenty thirty, that's real. But a significant chunk of that growth is in AI-enhanced low-code, platforms that are essentially wrapping AI generation inside a low-code interface. So the market is growing partly because it's absorbing the thing that was supposed to replace it.
That is either very clever or a sign that the category is quietly becoming something different and keeping the old name.
And it raises a real question about what "low-code" even means in three years. If the underlying mechanism is AI generation but the interface is still drag-and-drop, are you using a low-code tool or an AI tool with a low-code skin?
The distinction matters more for the walled garden problem than for anything else. Because if the AI-enhanced low-code platform is still exporting proprietary representations you can't inspect or migrate, you've just put a nicer face on the same trap.
The test isn't the interface, it's the artifact. Does what gets built belong to you in a form you can actually work with? A lot of the AI-enhanced low-code platforms are still failing that test. The Copilot inside Power Platform generates Power Automate flows, not portable code. You're still locked in, just with a more pleasant onboarding experience.
For a developer who's watching this shift happen, what's the practical posture? Because there's a version of this where the right answer is just "learn to use the AI generation tools and stop worrying about low-code entirely," and there's another version where the coexistence is real and you need to know when each applies.
The honest answer is that both are partially right. If you're a developer, the case for going deep on AI generation tooling is strong and getting stronger. The productivity numbers are not subtle. GitHub's own data from Copilot adoption shows meaningful reductions in time-to-completion on standard coding tasks, and the agentic tools are compressing that further. The learning path that Daniel talks about is real. You can watch the code get generated, ask questions about it, modify it, understand why decisions were made. That feedback loop is valuable in a way that a no-code builder can't replicate.
The coexistence case?
It's narrower than the market figures suggest, but it's not nothing. There are low-stakes internal tooling scenarios where a business analyst building a form or a simple approval workflow in a no-code platform is the right call. Not everything needs to be portable code. Not everything is going to scale. If the ceiling is never going to matter, the simplicity is a real advantage. The mistake is treating that as the general case rather than the exception.
The failure mode is when you start with the exception and then discover three years later that it became load-bearing.
Which is how most of the vendor lock-in horror stories actually happen. Nobody plans to build something critical on a no-code platform. They build something small, it works, it gets adopted, it gets extended, and suddenly it's in the critical path and you're negotiating with a vendor from a very weak position.
Right, and that's exactly why it's such a tricky decision. So what do you actually tell someone who's trying to make a choice today? The pattern is clear, but the path forward isn't always obvious.
Start with the artifact question. Before you commit to any tool, ask yourself: what does the output look like, and do I own it in a form I can work with independently? If the answer is "a proprietary representation inside this vendor's ecosystem," you need to be very deliberate about whether the ceiling that comes with that is one you'll ever hit. If there's any realistic chance the thing grows, or becomes important, or needs to integrate with something the vendor doesn't support, the answer is almost certainly AI generation tooling.
For developers specifically, the productivity case is strong enough now that there's not much of an argument for avoiding it. GitHub Copilot adoption data shows meaningful compression on standard task completion times, and the agentic tools are pushing that further. The learning path compounds in a way that low-code never did.
The practical entry point I'd suggest is to pick one real project, something with actual stakes but not mission-critical, and use an agentic coding tool end to end. Not to generate a snippet here and there, but to let it drive the architecture, read what it produces, push back on decisions you don't understand, ask it to explain the tradeoffs it made. That feedback loop is where the learning actually happens. You come out the other side with code you understand and a mental model of how it was built.
The low-code case, narrow as it is?
Low-stakes, bounded, non-scalable. A form, an internal approval flow, something a business analyst needs to modify without developer involvement on a regular basis. If all three of those conditions are true, the simplicity is a real advantage. The mistake is applying that logic to anything that might grow.
The tell is usually whether the word "just" is doing too much work. "It's just a quick internal tool." "It's just a simple workflow." When you hear "just" that many times, somebody is probably underestimating the ceiling problem.
That is a very good heuristic. And the staying-ahead piece is honestly simpler than people make it. The developers who are going to be most valuable in the next few years are not the ones who avoided AI tools to preserve some sense of craft. They're the ones who used the tools aggressively, understood what the tools were doing and why, and built the judgment to know when the generated output needed to be challenged. The skill is evaluation, not just generation.
Which is something Daniel has touched on before, actually. The ability to criticize the output, to diagnose what's wrong with it, is more durable than knowing how to prompt well.
Much more durable. Prompting strategies shift with every model generation. The ability to read code, understand its tradeoffs, and catch what the model got subtly wrong—that compounds. That's the investment worth making.
And that's probably the most honest framing of where this whole thing lands. The tools are getting better faster than most people's mental models of them are updating.
Which is the open question I keep sitting with. The low-code market projections out to twenty thirty are real, the money is real, the enterprise contracts are real. But the category is quietly hollowing out from the inside. The interesting part isn't whether low-code survives as a label. It's whether the thing that survives under that label is actually what the original movement was about.
The walled garden problem doesn't go away just because the interface gets a language model bolted onto it. That's the thread I think Daniel is right to pull on. The question of who owns the output is going to define which of these platforms are actually building toward something durable and which are just extending the lock-in with a friendlier face.
The agentic side is moving fast enough that the comparison point will look very different in two years. The gap between what an agentic coding tool can do for a non-developer today and what it could do eighteen months ago is not incremental. It's structural. So I'd be cautious about any forecast that treats the current equilibrium as stable.
Big thanks to Hilbert Flumingtop for producing the episode. And Modal keeps our pipeline running, which we are grateful for every time we ship one of these.
This has been My Weird Prompts. If you've been listening for a while and haven't left a review, now is a good time. It makes a real difference. Find us at myweirdprompts.
We'll see you next time.