Anthropic dropped Claude Design on April seventeenth and the thing that jumped out at me immediately was how it handles the creative brief problem. Not just generating images from a prompt — actually reading your production codebase and applying your design system. That's a fundamentally different category.
Daniel sent us this one, and it's one of those topics where the timing lines up almost too neatly. He's been thinking about how the creative brief — that old agency workhorse — maps onto working with AI agents. The argument is that the best practices agencies have spent decades refining for briefing human creatives are essentially the same ones we need for getting reliable output from agents, especially on design work. And with Anthropic launching Claude Design, which turns natural language into polished visuals that actually respect your design tokens and components, the question is: what translates directly from the agency playbook, and what breaks?
Fun fact — DeepSeek V four Pro is writing our script today.
Appreciate the assist. So where do we even start with this? Because the creative brief is one of those things everybody in agencies has an opinion about, but almost nobody outside that world thinks about as a format.
It's a format that works. I pulled up Adobe's guide on this from last year — they define it as a structured document covering scope, goals, target audience, key messages, deliverables, timeline, and constraints. The phrase they use is it "serves as a roadmap for creative teams, providing clear instructions and guidelines to ensure everyone is on the same page." What's interesting is that the best creative briefs are concise — often one page — and they focus on the why, not the features.
Which is exactly the opposite of what most people do when they prompt an AI. They dump feature lists. "Make it blue, add a hero image, put the CTA top right, use rounded corners." That's not a brief, that's a spec sheet.
Right, and agencies have learned the hard way that overly prescriptive briefs kill creative output. The Adobe guide explicitly says the brief should provide "a framework and guidelines for the creative team, allowing them to explore ideas and develop concepts within those parameters." You're giving direction, not dictating specific creative concepts.
Here's where I think there's a genuine tension worth poking at. The conventional wisdom in prompt engineering — and I've seen this everywhere — is to be extremely specific. "You are a senior brand designer with fifteen years of experience. Your design philosophy emphasizes whitespace and typographic hierarchy." That level of specificity is treated as best practice. But the agency world warns against exactly that kind of over-constraint.
I don't think it's actually a contradiction, but I see why it looks like one. The specificity in a good prompt isn't about dictating the creative output — it's about defining the context and constraints. "You are a senior brand designer" is context. "The brand guidelines require Inter as the typeface and a specific blue hex code" is a constraint. Neither of those is telling the agent what the layout should look like. That's the distinction agencies make too — constraints versus prescriptions.
The parallel holds if you're careful about what you're being specific about. Don't prescribe the solution, prescribe the problem and the boundaries. That's actually harder to do well than just listing requirements.
There's another layer here that agencies figured out years ago and almost nobody applies to AI agents. They use a tiering system. Tier one is highly conceptual, non-standard work requiring a full brief. Tier two is execution of established work — you need moderate detail but not a full strategic document. Tier three is edits and templated work — a project description is enough.
Right now most people treat every AI interaction as tier one. They write these elaborate system prompts for tasks that should be tier three. "Resize this image for Instagram" does not need a paragraph about design philosophy.
MindStudio published a guide on this in February and one of their key points is that a common mistake is overloading single prompts — trying to make one prompt do too many things. Their fix is to break complex tasks into multiple prompts with clear handoffs between them. That mirrors the agency practice of having separate briefs for different phases.
Let's make this concrete. If I'm working with Claude Design on a new landing page, what would a tier one brief actually look like? What are the elements that translate directly from the agency template?
The agency template typically has six or seven core components and they map almost one-to-one. Project overview and objective — what are we doing and why. Target audience — who is this for, what do they care about. Key messages and tone — what should the work communicate and how should it feel. Deliverables and format — what are we actually producing. Constraints — budget, timeline, brand guidelines, technical limitations. And ideally examples or references — here's work that captures the direction we want.
Every single one of those has a direct analog in a well-structured agent prompt. The project overview is your task description. Audience is context about who the output serves. Key messages and tone map to voice and style guidance. Deliverables map to output format specification. Constraints map to, well, constraints — "do not use red," "stay within the existing component library." And examples are few-shot prompting.
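That one-to-one mapping can be sketched as a small prompt builder. This is an illustrative sketch only: the function name, section names, and example values are hypothetical, not from any real tool or API.

```python
# Illustrative sketch: assembling an agent prompt from the six
# agency-brief components mapped above. All names are hypothetical.

def build_brief_prompt(overview, audience, tone, deliverable, constraints, examples):
    """Render a creative brief as a structured prompt string."""
    sections = [
        ("Objective", overview),            # project overview -> task description
        ("Audience", audience),             # who the output serves
        ("Tone and key messages", tone),    # voice and style guidance
        ("Deliverable", deliverable),       # output format specification
        ("Constraints", "\n".join(f"- {c}" for c in constraints)),
        ("References", "\n".join(f"- {e}" for e in examples)),  # few-shot examples
    ]
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections)

prompt = build_brief_prompt(
    overview="Increase trial sign-ups among developers who find the product complex.",
    audience="Technical founders evaluating tools; skeptical of marketing language.",
    tone="Direct and unpretentious; clarity over visual flash.",
    deliverable="A single landing page design.",
    constraints=["Use the existing component library", "Do not use red"],
    examples=["Current docs homepage (captures the plain-spoken voice)"],
)
```

The point of the structure is that each agency-brief component lands in its own labeled section instead of being jumbled into one paragraph.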
What's missing from most AI prompts that the agency brief includes is the why. The strategic objective. Agencies spend a lot of time on "what are we trying to achieve here" because it lets the creative team make judgment calls. If the brief says "we need to increase sign-ups among developers who currently think the product is too complex," the designer knows to prioritize clarity and simplicity over visual flash. If you just say "design a sign-up page," the agent has no framework for making trade-offs.
This connects to something Daniel mentioned — he wrote about creative briefs long before generative AI went mainstream, so he's been thinking about this as a format problem, not a technology problem. The format does the work of aligning intent with execution.
The format is transportable. That's the key insight. A good creative brief works whether the recipient is a junior designer, a senior creative director, or an AI agent. The structure does the same thing in each case — it reduces ambiguity, aligns on objectives, and provides guardrails without over-constraining.
Let's talk about what Claude Design specifically changes here. Because this isn't just DALL-E with better prompting. The launch details are worth walking through.
Claude Design launched April seventeenth as the first public product from Anthropic Labs. It's powered by Claude Opus four point seven, which came out the day before. What sets it apart is that it can read a team's actual production codebase — React, Vue, Svelte components, design tokens, Tailwind config — and apply the design system automatically. Figma AI can't do that. Canva Magic Studio can't do that.
That's the part that made me sit up. It's not generating a pretty picture that then has to be rebuilt by engineers. It's generating output that respects the actual design system the team already uses.
Digital Applied had a good analysis of this on April nineteenth. Their point was that this closes a gap that's plagued agencies for decades — the handoff from what the brief asked for to what the engineers can actually build. You get a mockup that looks great, then engineering says "we can't build that with our component library" and you go through three more rounds.
If the brief is bad, the consequences are now worse — because a bad brief doesn't just produce a bad mockup anymore, it produces bad code. Or at least code that doesn't align with what was actually needed. The briefing process becomes more consequential, not less.
There's a quote from the launch announcement that I think captures this. Datadog's product manager Aneesh Kethini said what used to take a week of back-and-forth between briefs, mockups, and review rounds now happens in a single conversation. That's not just faster — it changes the nature of the briefing process itself.
This is where the agency best practice of collaborative briefing becomes really interesting. Agencies will tell you the worst thing you can do is send a blank brief to a client and have them fill it out alone. The Adobe guide explicitly says the thing you want to avoid is sending a document to the client to fill out on their own. The brief should be filled out collaboratively in a meeting.
Claude Design's interface is essentially that collaborative briefing session made real-time. You've got a dual-pane setup — a chat panel for broad instructions and a live canvas where you see output immediately, with inline comments, direct text edits, and adjustment sliders for spacing, color, and layout. You're not writing a document and waiting three days for comps. You're having a conversation and seeing results as you refine.
The "prompt" for a design agent might not actually be a one-shot document at all. It might be better thought of as a structured conversation. Which is what a good briefing meeting already is.
And that has implications for how we think about prompt engineering. The current paradigm is largely "craft the perfect prompt, send it, evaluate the output, iterate." That's a waterfall model. The collaborative briefing approach is more agile — you start with a direction, see what comes back, and refine together.
I want to poke at something here. The agency world has this concept of the brief as a fixed document — once it's signed off, that's the brief, and the creative work is measured against it. It prevents scope creep and gives everyone a reference point for whether the work is on target. If the briefing becomes an ongoing conversation, do you lose that anchoring function?
I think you keep the brief as a living document but you don't pretend the first draft is final. The agency parallel would be the brief evolving during the collaborative session before it gets locked. What's different with Claude Design is that the "locking" happens when you're satisfied with the output, not before the creative work begins. But you still need a clear initial direction.
Let's talk about what breaks when people try to apply agency briefing practices to AI agents without adapting them. Because I suspect there are failure modes that aren't obvious.
The biggest one I see is context overload. An agency brief assumes the creative team has years of implicit knowledge — they understand cultural references, they know what "premium but approachable" means in their market, they've absorbed brand strategy through osmosis. An AI agent has none of that implicit context. So you have to externalize more of it.
Which creates a paradox. The agency best practice is to keep briefs concise — one page. But if the AI needs more explicit context, you're tempted to write three pages. And then you've violated the concision principle that makes briefs effective in the first place.
The way out of that, I think, is what MindStudio calls memory anchoring. You explicitly tell the agent what to remember for future interactions. So you don't put everything in one prompt — you establish context that persists across the conversation. "Remember that our brand voice is direct and unpretentious. Remember that our primary audience is technical founders who distrust marketing." That upfront investment pays off because subsequent prompts can be concise.
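Memory anchoring can be pictured as a conversation where the durable brand context goes into one persistent opening message, so later turns stay short. A minimal sketch, assuming a generic chat-message format; the function names and wording are illustrative, not tied to any specific API.

```python
# Sketch of "memory anchoring": durable brand context established once,
# so subsequent prompts can be tier-three concise. Names are hypothetical.

ANCHOR = (
    "Remember for this whole session: our brand voice is direct and "
    "unpretentious. Our primary audience is technical founders who "
    "distrust marketing."
)

def new_session():
    """Start a conversation with the anchor established up front."""
    return [{"role": "system", "content": ANCHOR}]

def ask(history, request):
    """Append a concise follow-up request; the anchored context persists."""
    history.append({"role": "user", "content": request})
    return history

session = new_session()
ask(session, "Draft hero copy for the pricing page.")       # one sentence, not three pages
ask(session, "Same voice, now the checkout confirmation.")
```

Each follow-up is a sentence long because the anchor message is doing the context work.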
It's like the difference between briefing a new freelancer versus briefing someone who's been on the account for two years. With the freelancer, you need more context. With the experienced person, you can say "same brand, new product, go." The memory anchoring is building that experienced relationship with the agent.
There's another adaptation that matters. Agency briefs often include competitive context — "here's what Competitor X is doing, we need to differentiate on Y." With an AI agent, you need to be more explicit about what "differentiate" means operationally. "Do not use the dark-mode-with-gradient-hero pattern that every SaaS company uses" is more actionable than "be distinctive."
That's a specific example of a general principle — abstract direction needs to be translated into concrete constraints when you're working with an agent. Humans can interpret "be distinctive" through shared cultural knowledge. Agents need the operational definition.
Let me bring in a data point that I think illustrates how much the briefing quality matters. Brilliant's senior product designer Olivia Xu was quoted in the launch announcement saying their most complex pages, which took twenty-plus prompts to recreate in other tools, only required two prompts in Claude Design.
Two prompts versus twenty. That's not a tool difference, that's a briefing paradigm difference. The other tools required iterative correction because the initial prompt wasn't structured as a proper brief. Claude Design, presumably, was able to work from a more complete initial direction.
It can pull from the codebase, so a lot of the constraint specification that would normally take paragraphs — "use our button component, our spacing scale, our color tokens" — is handled automatically. You don't brief the design system because the tool already knows it.
That frees up the brief to focus on what actually matters — the strategic and creative direction. Which is exactly what agencies say a good brief should do. Don't spend words on things the creative team already knows.
Canva being a launch partner is also worth noting here. Canva CEO Melanie Perkins said they're excited to make it seamless for people to bring ideas and drafts from Claude Design into Canva as fully editable files. So the output isn't locked into Anthropic's ecosystem — it flows into the tool where non-designers already do their polish work.
Which means the briefing conversation happens in Claude Design, the output goes to Canva for team collaboration and final tweaks. That's a workflow that maps pretty cleanly onto how agencies already operate — strategy and initial concepts in one environment, production and collaboration in another.
Let's step back and talk about what this means for the skill of briefing itself. For decades, writing a good creative brief was a specialized skill. Creative directors and strategists got paid well for it because it's genuinely hard to distill a business problem, audience insight, and strategic direction into one page that inspires rather than constrains.
Now that skill is democratizing in an interesting way. Anyone who can structure their thinking clearly can produce good creative output through these tools. But the skill itself doesn't become less valuable — if anything, it becomes more valuable because more people are doing creative work.
The people who will get the best results from Claude Design aren't necessarily professional designers. They're people who think clearly about objectives, audiences, and constraints — regardless of their job title.
That's the optimistic read. The pessimistic read is that most people are terrible at writing briefs, and giving them a tool that turns bad briefs directly into production code is going to produce a lot of bad output very quickly.
Which is why the tiering system matters so much. If you're doing tier three work — resizing assets, applying templates, making small edits — a minimal prompt is fine and the tool handles it. If you're doing tier one work — new brand identity, campaign concept, product launch page — you need to invest in the briefing process. The tool doesn't change that. It just makes the consequences of good or bad briefing more immediate.
There's something else here about the review process. In agencies, the brief is separate from the creative review. The brief says "here's what we need and why." The creative review says "does this work meet the brief." When you collapse briefing and creation into a single conversation, you risk losing that separation.
That's a real concern. The fix, I think, is to treat the initial prompt or conversation as the brief, then step back and evaluate the output against it explicitly. Don't just iterate intuitively — actually check whether the output addresses the objectives you set. It's a discipline thing, not a tool thing.
We've talked about what maps directly from agency practice to AI agents. What about what doesn't map? What are the things agencies do that simply don't apply?
The biggest one is emotional management. A significant part of briefing human creatives is motivation and buy-in. You want them excited about the project because excited creatives do better work. That whole dimension is irrelevant with AI agents. You don't need to sell the vision, you just need to specify it clearly.
On the flip side, human creatives are good at pushing back on bad briefs. If you give a senior designer a brief that makes no strategic sense, they'll tell you. An AI agent will just execute the bad brief faithfully.
That's a crucial asymmetry. The agency brief process has an implicit quality check built in — the creative team reads it and flags issues. With AI agents, the quality check has to be explicit and intentional. You have to build in a review step that a human would do automatically.
Which suggests that the workflow should include something like "have the agent summarize its understanding of the brief before executing." That's a practice some prompt engineers already use — ask the model to restate the task in its own words to catch misalignments.
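That restate-before-executing step can be made mechanical by wrapping every brief with the instruction. A minimal sketch; the exact wording of the instruction is an assumption, and the function name is hypothetical.

```python
# Sketch of making the implicit human quality check explicit: before any
# generation, the agent must restate the brief so misreadings surface early.

def restate_first(brief):
    """Wrap a brief with a restate-before-executing instruction."""
    return (
        f"{brief}\n\n"
        "Before producing any design, summarize your understanding of the "
        "objective, audience, and constraints in three bullet points, and "
        "flag anything in the brief that seems ambiguous or contradictory. "
        "Wait for confirmation before executing."
    )
```

The value is in the "flag anything ambiguous" clause: it approximates the pushback a senior designer would give for free.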
MindStudio's guide mentions error handling and fallbacks as a key element of an agent creative brief. That's not something that appears in traditional agency briefs because you assume the human creative will handle edge cases sensibly. With agents, you need to specify what happens when something goes wrong — "if the requested layout doesn't work with the content length, prioritize readability over the layout specification."
Let's talk about the tiering system in more detail because I think it's the most immediately practical thing listeners can apply. What does a tiered briefing system for AI agents actually look like in practice?
Tier three is the easy one — it's edits and templated work. "Change the headline on this page to X." "Generate a social media version of this hero image at these dimensions." "Apply our dark mode color scheme to this component." These need almost no strategic context. A sentence or two is enough.
Tier two is execution of established work. You're working within an existing brand system, campaign concept, or design language. The brief needs to specify which established system to use, what the specific deliverable is, and any constraints unique to this execution. "Create a landing page for the new feature using our standard product page template, emphasizing the performance improvement data. Target audience is existing customers who haven't upgraded."
Tier one is where the full brief comes in. New brand work, new campaign concepts, anything that requires strategic thinking rather than execution against an existing framework. This is where you need audience research, competitive context, brand strategy, tone guidance, and clear objectives. The brief should be substantial — maybe not one page in the agency sense, but a structured document that gives the agent everything it needs to make good creative decisions.
Most people are doing tier one briefs for tier three tasks. That's inefficient in one direction. But some people are also doing tier three briefs for tier one tasks — asking for a new brand identity with "make it look modern and clean" — and then being disappointed with the results.
The tiering should affect not just the length of the brief but the structure. A tier three brief might just be a task description and output format. A tier one brief should include sections for objectives, audience, competitive context, tone, constraints, and examples. The structure scales with the complexity of the task.
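That idea — structure scales with tier, not just length — can be sketched as a checklist per tier. The section names and the validation helper are illustrative assumptions, not a standard.

```python
# Sketch of tier-scaled brief structure: the required sections grow with
# the tier. Section names are hypothetical.

TIER_SECTIONS = {
    3: ["task", "output_format"],                            # edits, templated work
    2: ["task", "output_format", "system", "constraints"],   # established system
    1: ["objective", "audience", "competitive_context",      # new strategic work
        "tone", "constraints", "examples"],
}

def validate_brief(tier, brief):
    """Return the sections a brief is missing for its tier."""
    return [s for s in TIER_SECTIONS[tier] if not brief.get(s)]

missing = validate_brief(1, {"objective": "New brand identity", "tone": "modern"})
# audience, competitive context, constraints, and examples still needed
```

A tier-three brief passes with two fields; the same two fields fail a tier-one check, which is the "make it look modern and clean" mistake made visible.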
I want to circle back to something about Claude Design specifically. The codebase integration means that for the first time, the brief can produce output that's not just visually on-brand but technically on-spec. That changes what "on brief" means.
In the old world, "on brief" meant the creative concept aligned with the strategic direction. Technical feasibility was a separate conversation. Now those are collapsed. If the brief says "design a checkout flow that reduces friction" and your design system already has form components, validation patterns, and a progress indicator, the output will use those automatically. The brief doesn't need to specify them.
Which means the brief can be more strategic because the tactical details are handled by the system. That's exactly what agencies have always wanted — briefs that focus on the why, not the what.
There's a quote from the Adobe guide that I keep coming back to. The best briefs provide "a framework and guidelines for the creative team, allowing them to explore ideas and develop concepts within those parameters." Claude Design essentially makes the entire design system into implicit parameters, so the explicit brief can focus entirely on the framework and guidelines.
If I'm a listener who wants to get better at this starting today, what's the one thing I should change about how I prompt design agents?
Start with why, not what. Before you describe the output you want, articulate the objective and the audience. "We need to increase trial sign-ups among developers who visited our docs but didn't convert" is a better starting point than "design a landing page with a hero, three feature columns, and a CTA."
The second thing?
Separate constraints from direction. Constraints are non-negotiable boundaries — brand colors, component library, legal requirements. Direction is strategic guidance — the feeling you want, the problem you're solving, the audience you're reaching. Put constraints in one section, direction in another. Don't mix them.
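The constraints-versus-direction split can be sketched as two separate blocks that get labeled sections in the final prompt. All of the example values here are illustrative.

```python
# Minimal sketch of the constraints-versus-direction heuristic: hard
# boundaries in one block, strategic guidance in another, never mixed.

CONSTRAINTS = [  # non-negotiable boundaries
    "Brand colors: primary blue plus neutral grays only",
    "Use only components from the existing library",
    "Legal: include the cookie-consent footer",
]

DIRECTION = [  # strategic guidance, open to interpretation
    "Should feel calm and technical, not salesy",
    "We are solving 'the product looks too complex'",
    "Avoid the typical dark-mode gradient-hero SaaS pattern",
]

prompt = (
    "## Constraints (must hold)\n" + "\n".join(f"- {c}" for c in CONSTRAINTS)
    + "\n\n## Direction (interpret freely)\n" + "\n".join(f"- {d}" for d in DIRECTION)
)
```

The section labels do the work: the agent can see which lines are boundaries and which lines are guidance it may interpret.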
That's actually a really clean heuristic. Most bad prompts I see are a jumble of constraints, direction, feature requests, and formatting instructions all in one paragraph.
The third thing is tier your briefs consciously. Before you write anything, ask yourself: is this tier one, two, or three? If it's tier three, write two sentences and move on. If it's tier one, invest the time to write a proper structured brief with objectives, audience, tone, constraints, and examples.
The fourth thing, which I think is underrated: include what not to do. Agencies do this all the time — "we don't want to look like a bank," "avoid the typical SaaS illustration style." Negative constraints are often more useful than positive ones because they rule out the obvious clichés.
That's particularly important with AI agents because they tend toward the median of their training data. If you don't tell them to avoid the typical patterns, you'll get the typical patterns.
Now: Hilbert's daily fun fact.
The first documented use of a creative brief in advertising dates back to the nineteen forties at the agency N. W. Ayer and Son, though the format wasn't standardized across the industry until the nineteen seventies, when the "copy platform" evolved into what we now recognize as the modern creative brief.
If a listener wants to put this into practice, I'd say start by auditing how you currently prompt design tools. Are you writing briefs or are you writing spec sheets? If you're writing spec sheets, try rewriting one as a brief — objective first, audience second, constraints third, examples last. See if the output improves.
If you're using Claude Design specifically, take advantage of the conversational interface. Don't try to nail everything in the first prompt. Start with the strategic direction, see what comes back, then refine through the chat. Treat it like a briefing meeting, not a vending machine.
The format does the work. That's the throughline here. Whether it's a one-page agency brief or a structured agent prompt, the quality of the output is downstream of the quality of the thinking you put into the direction. The tools are getting better at execution. The briefing skill is what compounds.
One open question I'm still chewing on: as these tools get better at inferring intent from minimal direction, does the need for structured briefs decrease? Or does it increase because the upside of good briefing gets even larger?
I suspect it bifurcates. For quick, low-stakes work, minimal prompting will be fine and briefs will feel like overhead. For anything strategic or brand-defining, the brief becomes more important because the tool can do more with good direction. The gap between good and bad briefing gets wider.
Thanks to Hilbert Flumingtop for producing, as always.
This has been My Weird Prompts. If you want more episodes, we're at myweirdprompts.com and wherever you get your podcasts.
See you next time.