#1220: APIs for Agents: Navigating REST, GraphQL, and MCP

Why can't we just give AI the database password? Explore the shift from REST to GraphQL and how the Model Context Protocol changes the game.

Episode Details

Duration: 21:55
Pipeline: V5
TTS Engine: chatterbox-regular
AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The Evolution of the Digital Contract

The history of software development is, in many ways, a history of middlemen. While developers often dream of a "database-as-an-API" model—where clients query data directly without intermediary layers—the reality of production environments requires guardrails. APIs exist to provide three essential pillars: security, stability, and semantics.

An API acts as a promise. It creates a stable interface that allows the backend and frontend to evolve independently. This abstraction prevents "leaky abstractions," where the internal mess of a storage engine spills out into the client-side code. If a database schema changes, the API remains the same, ensuring that the application doesn't break every time an engineer optimizes a table or renames a column.

From RESTful Nouns to GraphQL Relationships

For over a decade, REST (Representational State Transfer) dominated the landscape by leaning into the existing architecture of the internet. By using standard HTTP verbs to treat data as resources, REST turned the web into a queryable library of nouns. However, as applications grew in complexity, developers hit the twin walls of over-fetching and under-fetching.

GraphQL emerged as a solution to these inefficiencies by moving from resources to graphs. Instead of making multiple sequential requests to different endpoints, GraphQL allows the client to define the exact shape of the data it needs in a single call. This shift turned relationships between data points into first-class citizens, though it introduced new risks, such as the "N+1 query problem," where a single nested query can inadvertently trigger a cascade of unbatched database lookups.
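The single-call pattern described above can be sketched as follows. The endpoint, field names, and ID are all illustrative, not from a real API: the point is that the client declares the shape of the response up front instead of stitching together several REST round trips.

```python
import json

# One GraphQL request replaces several sequential REST calls
# (e.g. GET /users/42, then GET /users/42/posts, then GET /users/42/followers).
# Field names here are hypothetical.
query = """
query UserOverview($id: ID!) {
  user(id: $id) {
    name
    posts(limit: 5) { title }
    followers(limit: 10) { name }
  }
}
"""

# The client defines the exact shape of the response it wants, so
# nothing extra is fetched and no follow-up round trips are needed.
request_body = json.dumps({"query": query, "variables": {"id": "42"}})
print(request_body)
```

In practice this body would be POSTed to a single `/graphql` endpoint; the server resolves the whole nested shape in one response.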

APIs in the Age of AI Agents

The rise of Large Language Model (LLM) agents has introduced a new primary consumer for these interfaces. Unlike human developers, agents must "see" and understand an API through documentation or schemas. This is where the technical trade-offs between REST and GraphQL become critical.

REST APIs typically rely on external documentation like OpenAPI specifications. These can be massive, outdated, and difficult for an AI to parse without "hallucinating" parameters. In contrast, GraphQL’s built-in introspection allows an agent to ask the server for its own schema. This self-documenting nature provides a high-definition map for the agent, significantly lowering the "token cost" of discovery.
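The introspection mechanism mentioned above is part of the GraphQL specification itself: any compliant server answers queries against the `__schema` meta-type. A minimal sketch of the request an agent would send:

```python
import json

# GraphQL introspection is built into the spec: every compliant server
# answers queries against the __schema meta-type. This asks for every
# type and field the API exposes -- the "map" an agent can read.
introspection_query = """
{
  __schema {
    types {
      name
      fields { name type { name kind } }
    }
  }
}
"""

# POSTing this body to any GraphQL endpoint returns the full schema;
# no external documentation file is needed.
body = json.dumps({"query": introspection_query})
print(body)
```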

The Model Context Protocol (MCP) and the Sanity Layer

The Model Context Protocol (MCP) is emerging as the translation layer between these two worlds. It doesn't favor one architecture over the other but instead focuses on how "tools" are exposed to an agent. While REST offers a narrow, predictable corridor of behavior that is easier to rate-limit and secure, GraphQL offers the flexibility for an agent to act as a creative data scientist.

For many organizations, the real value of MCP lies in its ability to act as a "sanity layer." Most enterprise systems are not clean; they are a patchwork of legacy REST APIs and ancient databases. By building an MCP server, developers can wrap this technical debt in a modern, structured interface. This allows the AI agent to operate in a clean, simulated environment while the underlying server handles the messy translation to legacy systems. Ultimately, the goal is to reduce the "agentic friction" of the modern web, creating a world where automation can navigate complex infrastructure safely and efficiently.


Episode #1220: APIs for Agents: Navigating REST, GraphQL, and MCP

Daniel's Prompt
Daniel
Custom topic: Let's talk about the history of APIs and the difference between REST and GraphQL APIs. When it comes to wrapping these with MCP servers to expose tools ... does either have an inherent advantage in te
Corn
I was thinking about something today that I know is going to get you fired up. I was looking at a set of database credentials and I thought, why am I writing all this code to fetch a user profile? Why do we have this massive, complex industry built around middleman layers? Today's prompt from Daniel is about the history of APIs, the divide between REST and GraphQL, and how all of this fits into the new world of the Model Context Protocol. But before we get into the technical weeds, I want to start with the fundamental question. Why can't I just give an artificial intelligence agent the password to my Postgres database and say, hey, go nuts?
Herman
Herman Poppleberry here, and man, you are speaking my language. That idea of the database-as-an-API is the ultimate developer siren song. It sounds so efficient, right? You cut out the middleman, you remove the latency of an application server, and you let the client just query the data they need. But the reason we don't do that, and the reason APIs exist in the first place, comes down to three things: security, stability, and semantics. If you give a client, or especially an autonomous agent, direct access to your database, you have effectively handed them the keys to the kingdom with no guardrails. You are essentially saying, I trust this model to not only understand my data but to respect the implicit business rules that aren't actually written down in the schema.
Corn
But we have row level security now. We have complex access control lists inside the database itself. Projects like PostgREST or Supabase have made it much easier to expose a database directly. Is it really just a security issue, or is there something about the way we structure information that requires that middleman?
Herman
It is much deeper than just security. Think about the contract. An API is a promise. When I build a REST API and I tell you that hitting the slash users endpoint will return a JSON object with a name and an email, I am creating a stable interface. I can change my database schema behind the scenes. I can rename the columns, I can move the data to a completely different type of database, or I can split one table into five. As long as my API still returns that same JSON object, your code doesn't break. If you were querying the database directly, every single time I optimized my storage layer, your application would go up in flames. This is what we call the "leaky abstraction" problem. Without the API, the internal mess of your storage engine leaks out into the world.
Corn
So it is an abstraction layer that allows the backend and the frontend to evolve at different speeds. That makes sense. It is like the steering wheel in a car. I don't care if the car has a rack and pinion system or electronic power steering; as long as I turn the wheel left and the car goes left, the interface is doing its job. But let's look at how that interface has changed. We went from these heavy, XML-based protocols like SOAP and Remote Procedure Calls to REST, which dominated for over a decade. Why did REST win so convincingly?
Herman
REST, which stands for Representational State Transfer, was defined by Roy Fielding in his doctoral dissertation back in the year two thousand. It won because it leaned into the existing architecture of the internet. It used HTTP verbs like GET, POST, PUT, and DELETE to treat everything as a resource. It was simple, it was stateless, and it was human-readable. You didn't need a special library to understand what was happening; you could just look at a URL and know exactly what you were requesting. It was a very American way of solving a problem: keep it decentralized, keep it standardized, and make it as easy as possible to build on top of. REST turned the web into a giant, queryable library of nouns.
Corn
But then we hit a wall. As our apps got more complex, we started running into the over-fetching and under-fetching problems. I remember when Facebook open-sourced GraphQL in twenty-fifteen. The promise there was that we were moving from resources to graphs. Instead of hitting five different REST endpoints to get a user, their posts, and their followers, you could do it in one single query.
Herman
That was the massive shift. In a REST world, the server defines the shape of the data. You take what the server gives you, even if you only need the user's first name and the server sends you their entire biography and twenty-five different metadata fields. That is over-fetching. It wastes bandwidth and increases latency. Or, conversely, the server doesn't give you enough, so you have to make three more requests to get the related data. That is under-fetching, and it leads to the "waterfall" problem where your app is stuck waiting for sequential network calls. GraphQL flipped the script. It allowed the client to say, here is exactly the data I want, and here is how I want it shaped. It turned the API into a queryable graph where the relationships between data points are first-class citizens.
Corn
It feels like GraphQL is almost moving back toward that database-as-an-API dream, but with a safety layer in the middle. But here is where it gets interesting for our current era. We are now in a world where the primary consumer of these APIs is often not a human developer or a mobile app, but an LLM-based agent. When we talk about wrapping these interfaces in something like the Model Context Protocol, does one of these architectures have a leg up?
Herman
This is where the technical trade-offs get really spicy. If you are building an MCP server to expose tools to an agent, you have to think about how that agent "sees" the API. REST APIs usually rely on something like an OpenAPI specification, which used to be called Swagger. It is basically a giant document that describes every endpoint, every parameter, and every possible response. The problem is that these documents are often massive, they are frequently out of date, and they can be very ambiguous for an AI to parse. An agent has to read through hundreds of lines of YAML or JSON just to figure out how to get a user's email address.
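Herman's point about OpenAPI digestion can be made concrete with a toy example. The spec fragment below is inlined and heavily abridged (real specs routinely run to thousands of lines); flattening it into a short list of operations is roughly the digest an agent needs before it can choose an endpoint safely.

```python
import json

# A fragment of an OpenAPI 3.x document, inlined for illustration.
# Paths and summaries are hypothetical.
spec = json.loads("""
{
  "openapi": "3.0.0",
  "paths": {
    "/users/{id}": {
      "get": {"summary": "Fetch a user profile", "operationId": "getUser"},
      "delete": {"summary": "Delete a user", "operationId": "deleteUser"}
    }
  }
}
""")

# Flatten the spec into (operationId, method, path, summary) tuples --
# the compact view an agent needs before choosing an endpoint.
operations = [
    (op.get("operationId"), method.upper(), path, op.get("summary"))
    for path, methods in spec["paths"].items()
    for method, op in methods.items()
]
for op in operations:
    print(op)
```

The danger Herman describes is what happens when this document is stale or ambiguous: the flattened list no longer matches the server, and the agent starts guessing.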
Corn
I have seen that firsthand. You give an agent a poorly documented REST API and it just starts hallucinating parameters that don't exist because it is trying to guess how the developer's brain worked. It is like trying to navigate a city with a map that was drawn from memory by someone who lived there ten years ago. It is inefficient and, frankly, dangerous if the agent starts trying to "guess" its way through a DELETE request.
Herman
Now, compare that to GraphQL. GraphQL has a built-in advantage here called introspection. Every GraphQL server is self-documenting by design. You can ask a GraphQL API, tell me everything you can do, and it will return a perfectly structured schema of every type, every query, and every mutation available. For an agent, that is like having a high-definition, real-time GPS map. It doesn't have to guess. It can look at the schema and say, okay, I see a user type, it has a field called email, and I can filter it by this specific ID. The "token cost" of discovery is much lower because the schema is so dense and structured.
Corn
So you are saying GraphQL is inherently more agent-readable because it is more strictly typed and self-describing?
Herman
In theory, yes. But there is a huge catch. GraphQL is much harder to implement correctly on the server side. You have to deal with the N plus one query problem. This happens when a single complex GraphQL query accidentally triggers hundreds of individual database lookups. For example, if an agent asks for ten users and their last five comments, a naive GraphQL implementation might do one query for the users and then ten separate queries for the comments. That can crash your system. REST is much easier to cache and much easier to rate-limit. If an agent goes rogue and starts spamming a REST endpoint, I can shut that down in a second. If an agent writes a recursive, nested GraphQL query that asks for the followers of the followers of the followers of a million users, it can melt your database before your monitoring tools even wake up.
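The N+1 trap Herman describes, and the batching fix, can be shown with a toy in-memory store that counts "database" queries. All names are illustrative; the batched version is the idea behind the DataLoader pattern commonly used with GraphQL servers.

```python
# A toy in-memory store that counts queries, showing how a nested
# resolver turns 10 users into 11 queries -- and how batching
# collapses that back to 2.
query_count = 0
COMMENTS = {uid: [f"comment-{uid}-{i}" for i in range(5)] for uid in range(10)}

def fetch_users():
    global query_count
    query_count += 1           # one query for the user list
    return list(COMMENTS)

def fetch_comments(uid):
    global query_count
    query_count += 1           # one query PER user: the N+1 trap
    return COMMENTS[uid]

def fetch_comments_batch(uids):
    global query_count
    query_count += 1           # one query for ALL users (WHERE uid IN ...)
    return {uid: COMMENTS[uid] for uid in uids}

# Naive resolver: 1 + N queries.
users = fetch_users()
naive = {uid: fetch_comments(uid) for uid in users}
naive_queries = query_count

# Batched resolver (the DataLoader pattern): 2 queries total.
query_count = 0
users = fetch_users()
batched = fetch_comments_batch(users)
print(naive_queries, query_count)  # 11 vs 2
```

Scale the same pattern to the recursive "followers of followers" query Herman mentions and the naive version fans out exponentially, which is why unbatched GraphQL resolvers can melt a database.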
Corn
That is a terrifying thought for anyone running a production system. It sounds like the "agentic friction" we talked about in episode ten seventy-six, where the plumbing of these systems starts to fail under the weight of the automation. If we are building these MCP servers, we are essentially creating a translator. We are taking the language of the model and translating it into the language of the API. If the API is REST, the translator has to do a lot of work to explain the context. If the API is GraphQL, the translator just has to pass through the schema.
Herman
And that brings us to the interoperability question. The Model Context Protocol uses JSON-RPC two point zero as its transport layer. It doesn't actually care if the underlying source of truth is a REST API, a GraphQL server, or a flat text file. But the quality of the "tools" you can expose to the agent depends on how well you can map those actions. In a REST world, a tool is usually a single endpoint. "Get User" is a tool. "Update Email" is a tool. It is very transactional and predictable. In a GraphQL world, you might expose a single "Query Data" tool that allows the agent to be much more creative in how it explores the information.
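The JSON-RPC 2.0 envelope Herman mentions can be sketched directly. The `tools/call` method name follows the MCP specification; the tool name and arguments here are hypothetical.

```python
import json

# Sketch of the JSON-RPC 2.0 envelope MCP uses on the wire.
# The tool name and its arguments are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_user",
        "arguments": {"user_id": "42"},
    },
}

# Whether get_user is backed by a REST endpoint, a GraphQL resolver,
# or a flat file is invisible to the agent: it only sees this envelope.
wire_message = json.dumps(request)
print(wire_message)
```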
Corn
But is creativity actually what we want from an agent when it is interacting with our infrastructure? I feel like I would much rather have an agent that follows a predictable, RESTful path than one that is trying to be a clever data scientist with my GraphQL schema.
Herman
That is the big debate right now. If you want reliability and safety, REST is your friend. It forces the agent into a narrow corridor of behavior. You define the exact boundaries of what is possible. But if you want the agent to be a true partner in discovery—say, searching through a complex research database or a massive corporate knowledge graph—then GraphQL provides a level of flexibility that REST simply cannot match without creating hundreds of specialized endpoints. Think about a research agent trying to find correlations between weather patterns and crop yields. In REST, you would have to build a specific endpoint for every possible correlation. In GraphQL, the agent can just traverse the graph.
Corn
We touched on this in episode twelve zero nine when we talked about the "Agent-First Shift" and the "Dual-Track API Tax." Right now, most companies are building their features twice. They build a UI for the human, and then they build an API for the agent. If you use GraphQL, you can potentially merge those tracks. Your web frontend uses the same graph that your MCP server exposes to the agent. You are reducing that tax because you only have one source of truth for your data shape.
Herman
The problem is that "one source of truth" is often a mess. Most companies don't have a clean, perfectly architected GraphQL schema. They have a legacy REST API that has been cobbled together over fifteen years, with inconsistent naming conventions and weird edge cases. For those people, building an MCP server actually becomes a way to "clean up" their technical debt for the AI. You can write a wrapper that takes that messy legacy REST API and presents it to the agent as a clean, modern set of tools. You are essentially building a "sanity layer" between the agent and your legacy systems.
Corn
It is like putting a fresh coat of paint on a house that is falling apart. The agent thinks it is living in a mansion, but underneath, the MCP server is frantically translating its requests into some ancient COBOL system.
Herman
You joke, but that is the reality for most of the financial sector and big government systems. And this is where the American approach to technology really shines. We are great at building these abstraction layers that allow us to move forward without having to rebuild the entire foundation every five years. The MCP spec is a great example of that. It is a bridge. It acknowledges that the world is full of messy, disparate data sources and says, let's create a common language so we can finally use these incredible new AI models to actually do work in the real world. It doesn't matter if your data is in a twenty-year-old Oracle database or a brand new vector store; MCP gives you a way to talk to it.
Corn
Let's talk about the practical side of this for a second. If I am a developer today and I am tasked with exposing my company's data to an internal AI agent, where do I start? Do I go through the pain of setting up a GraphQL server, or do I just throw some OpenAPI docs at an MCP server and call it a day?
Herman
If your data is relatively simple and your use cases are transactional—like "reset a password" or "check an order status"—stick with REST. It is battle-tested, every developer knows how to use it, and the security model is straightforward. You can use tools that automatically turn your OpenAPI spec into an MCP server in about five minutes. But if you are building something where the agent needs to understand the relationships between different entities—like "find me all the customers who bought this product but haven't opened a support ticket in six months"—then you should seriously consider GraphQL. The ability for the agent to navigate that relationship graph without you having to pre-define every possible query is a game changer.
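The transactional REST-backed tools Herman recommends can be sketched as a small registry: each legacy endpoint becomes a named tool with a description and input schema an agent can discover. Endpoint paths, parameter names, and tool names are all hypothetical; the call is resolved to data here rather than actually sent over the network.

```python
# A minimal sketch of the "sanity layer": legacy REST endpoints wrapped
# as named tools an agent can discover. All names are illustrative.
TOOLS = {
    "check_order_status": {
        "description": "Look up the status of an order by its ID.",
        "input_schema": {"order_id": "string"},
        "endpoint": ("GET", "/legacy/v1/orders/{order_id}/status"),
    },
    "reset_password": {
        "description": "Trigger a password reset email for a user.",
        "input_schema": {"email": "string"},
        "endpoint": ("POST", "/legacy/v1/users/reset"),
    },
}

def plan_call(tool_name, **args):
    """Resolve a tool invocation to the legacy HTTP call it would make
    (returned as data here instead of being sent over the network)."""
    method, path = TOOLS[tool_name]["endpoint"]
    return method, path.format(**args)

print(plan_call("check_order_status", order_id="A-1001"))
```

The agent only ever sees the clean tool names and schemas; the messy legacy paths stay behind the wrapper, which is the "fresh coat of paint" Corn jokes about a few turns later.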
Corn
One thing that fascinates me is the idea that we might be seeing the end of the "human-readable" API. If the MCP server is the primary consumer, does it even matter if the underlying API is understandable to a person? Could we move to a world where APIs are just binary streams of optimized data that only agents can parse?
Herman
We are already seeing hints of that with things like Protocol Buffers and gRPC. They are much faster and more efficient than JSON, but they are impossible for a human to read without a decoder. As we move into an "agent-first" world, the pressure to optimize for machine-to-machine communication is going to outweigh the need for "pretty" JSON. But I think the "why" of the API stays the same. We will always need that contract. We will always need a way to say, this is what the system can do, and this is the limit of your authority. Even if the data is binary, the schema—the definition of the contract—must remain clear.
Corn
It is about maintaining that boundary. If we lose the API layer, we lose the ability to govern our systems. I think about this in the context of national security and critical infrastructure. We are pro-innovation, obviously, but you don't want the agent managing the power grid to be "exploring the graph" in a way that leads to an unintended state. You want that agent constrained by very strict, very specific API calls. You want the "middleman" to be a very stern bureaucrat who says "no" a lot.
Herman
That is exactly where the conservative approach to systems architecture comes in. You don't just tear down the fence because you can't see why it was put there. Those middleman layers, those "annoying" APIs, are the fences that keep our digital civilization running. They provide the structure that prevents a single bug or a single hallucination from cascading into a total system failure. The Model Context Protocol isn't about removing those fences; it is about building a better gate so the right people and the right agents can get through more easily. It is about making the gate smarter, not making the fence disappear.
Corn
So, looking ahead, do you think we are going to see a "universal agentic protocol" that eventually makes REST and GraphQL look like the telegraph?
Herman
I think we are seeing the beginning of it with MCP. The fact that it is being adopted so quickly by companies like Anthropic and the wider open-source community tells me there is a huge hunger for a standard. Whether it remains JSON-RPC or moves to something even more efficient, the core idea is here to stay. We are moving from "Application Programming Interfaces" to "Agentic Programming Interfaces." The focus is shifting from "how does a programmer use this" to "how does an autonomous system understand this." We are designing for a user that can read a million lines of documentation in a second but might still forget how to add two plus two if the prompt is wrong.
Corn
It is a shift in who the "user" is. For forty years, the user was a human at a keyboard. Now, the user is a model running in a data center. That changes everything about how we design schemas, how we handle errors, and how we think about latency. We used to optimize for human readability; now we optimize for token efficiency and semantic clarity.
Herman
And it brings us back to Daniel's question about the advantage of one over the other. The real advantage isn't in the tech stack itself, it is in the metadata. The more a protocol can tell an agent about the intent and the structure of the data, the better. GraphQL does that natively. REST requires an extra layer of documentation. But at the end of the day, an agent is only as good as the tools we give it. If we give it a direct connection to a database, we are giving it a pile of bricks. If we give it a well-designed API, we are giving it a finished building with a map and a set of keys.
Corn
I like that analogy. The API is the architecture. Without it, you just have a mess of information. And as much as I complain about looking at documentation, I would much rather spend my time defining a good interface than cleaning up the disaster that would happen if we let agents run wild in our production databases. Imagine an agent trying to "optimize" your database by deleting what it thinks are "redundant" tables.
Herman
That is the stuff of nightmares. But it is why we emphasize the "schema-first" approach. Whether you use REST or GraphQL, the schema is your defense. It is your way of saying to the AI: "This is the world you are allowed to play in. Do not step outside these lines." As we build more MCP servers, we are essentially writing the rulebooks for the next generation of digital workers.
Corn
I think the takeaway for our listeners is pretty clear. If you are building for agents, stop thinking about endpoints and start thinking about schemas. If your data is a complex web of relationships, GraphQL is going to give your agents a much smoother experience. If you need rock-solid, transactional reliability, REST is still the king. But either way, you need that middleman. You need that API layer to translate the messy reality of your data into something a machine can actually reason about.
Herman
Don't be tempted by the "no-code" or "direct-access" shortcuts. They are technical debt traps waiting to spring. The middleman isn't a bottleneck; the middleman is the translator, the security guard, and the architect all rolled into one.
Corn
Well, I think we have thoroughly dismantled the "database-as-an-API" fantasy for today. It is one of those things that sounds great on a whiteboard but is a nightmare in the data center. The history of APIs is really the history of us learning how to talk to our own machines without breaking them. From the early days of RPC to the modern era of MCP, it is all about finding the right level of abstraction.
Herman
And that is a journey that is only getting more complex and more interesting. From the first RPC calls to the massive, globally distributed graphs of today, we are constantly refining the language of the machine. It is a testament to human ingenuity that we can build these layers of abstraction that are so deep and so complex, yet they allow us to do things that would have seemed like magic only a few years ago. We are teaching the machines to understand our systems so we don't have to spend all day explaining them.
Corn
It is magic built on a foundation of very disciplined engineering. I think that is a good place to wrap this one up. We have covered the why, the how, and the what's next for the interfaces that power our world. It turns out the middleman isn't just a bottleneck; the middleman is the reason the whole thing works.
Herman
The middleman is the hero of the story, even if he is a bit of a bureaucrat.
Corn
Thanks as always to our producer Hilbert Flumingtop for keeping the gears turning behind the scenes. And a big thanks to Modal for providing the GPU credits that power this show and allow us to explore these deep technical topics. This has been My Weird Prompts. If you are enjoying these deep dives into the plumbing of the agentic age, a quick review on your podcast app really helps us reach new listeners who are trying to make sense of this rapidly changing landscape.
Herman
Find us at myweirdprompts dot com for the full archive and all the ways to subscribe. We will be back next time with more questions from Daniel and more deep dives into the weird and wonderful world of human-AI collaboration.
Corn
See you then.
Herman
Take care.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.