I've been running brand workshops for about 15 years. The format has been more or less the same the whole time: get the stakeholders in a room, run exercises that pull out how they see themselves versus how the world sees them, take a mountain of notes, then go home and spend days turning all of it into a strategic brief. It's a good process. I never had a reason to overhaul it.

What I did have was a recurring annoyance. The workshop itself would be four hours, maybe six. The synthesis afterward (reviewing notes, cross-referencing whiteboard photos with audio recordings, trying to reconstruct what someone actually meant when they said that one thing in hour three) would take two or three times as long. And it's not creative work. It's assembly. Important assembly, but still.

So about 18 months ago I started experimenting with running LLM tools alongside the workshop. I want to be clear about what I mean by that, because "AI-assisted workshop" can sound like I'm handing off the thinking. I'm not. The facilitation, the read on the room, the strategic judgment, all still me. What changed is that the note-taking and first-pass synthesis now happen in parallel instead of afterward.

What the new format looks like

I run a real-time transcription tool during the session. As we go, I'm feeding chunks of the transcript into an LLM with prompts I've written to pull specific things: recurring phrases people keep reaching for, moments where two stakeholders said contradictory things without noticing, shifts in energy when a topic hits a nerve.
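The recurring-phrases pass is the one part of this that's simple enough to show without the model in the loop. My actual extraction goes through LLM prompts, but the underlying task looks something like this deterministic sketch; the function name and the sample transcript lines are illustrative, not my real tooling:

```python
from collections import Counter
import re

def recurring_phrases(transcript_lines, n=2, min_count=2):
    """Count n-word phrases that speakers keep reaching for across a transcript."""
    counts = Counter()
    for line in transcript_lines:
        words = re.findall(r"[a-z']+", line.lower())
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    # Keep only phrases that came up more than once
    return [(p, c) for p, c in counts.most_common() if c >= min_count]

lines = [
    "We have always been the trusted partner for our customers",
    "People come to us because we are a trusted partner",
    "Being a trusted partner is just who we are",
]
print(recurring_phrases(lines)[:3])
```

The LLM version does the same thing with far more tolerance for paraphrase ("trusted partner" and "partner you can rely on" count as the same reach), which is the whole reason to use it. But the shape of the signal I'm asking for is exactly this: which language keeps coming back.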

The CEO says something offhand about the company's origin and the VP of Marketing's expression changes, because they just realized they've been telling completely different stories externally. That moment is why you run workshops. No tool is going to create that. But a tool can make sure I don't lose the thread of what was said right before and right after, so I can do something with it later.

By the time the session wraps, I've got a rough synthesis that covers maybe 70 percent of what used to take me days. It's not a deliverable. It's a structured starting point that I then tear apart, argue with, and rewrite. But the difference between starting from a structured draft and starting from a pile of Post-it photos is significant.

The exercises that changed

Not everything maps onto this approach equally. Some exercises got dramatically better. Others I still run the old way.

Competitive language analysis

The traditional version of this exercise always frustrated me a little. You ask stakeholders to look at competitor websites and talk about what they notice, and the conversation gravitates toward visual design. "Their site feels modern." "This one looks dated." That's fine but it's not what I need from them. I need to understand the messaging landscape.

Now I do the language analysis before the workshop. I pull competitor copy, run it through an LLM, and walk in with a map of how the entire category talks about itself: the phrases everyone shares, the value props that are basically interchangeable, the patterns in tone. When I put that in front of the room, the conversation starts somewhere useful. Instead of "their website looks corporate," people start asking "if every company in this space leads with trust and reliability, what do we actually have that's different?" That's a much better starting point.
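The "phrases everyone shares" part of that map reduces to a presence count across competitors. A minimal stand-in, assuming you've already scraped the copy into a dict (company names, copy snippets, and the 75 percent threshold here are all made up for illustration):

```python
import re
from collections import Counter

def category_cliches(copy_by_competitor, threshold=0.75):
    """Flag words that most of the category uses -- the interchangeable stuff."""
    stop = {"the", "a", "an", "and", "or", "of", "to", "for",
            "in", "is", "we", "our", "your", "with"}
    presence = Counter()
    for text in copy_by_competitor.values():
        # Count each word once per competitor, not once per occurrence
        words = set(re.findall(r"[a-z]+", text.lower())) - stop
        presence.update(words)
    cutoff = threshold * len(copy_by_competitor)
    return sorted(w for w, c in presence.items() if c >= cutoff)

copy = {
    "Acme": "Trusted, reliable solutions for modern teams",
    "Globex": "The reliable platform modern enterprises trust",
    "Initech": "Reliable infrastructure your modern business can trust",
}
print(category_cliches(copy))
```

In practice I have the LLM do this at the phrase and value-prop level rather than single words, but the output I walk in with is the same kind of artifact: here's what the whole category says, so let's talk about what's left.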

Brand voice exploration

This is the one that changed the most, and honestly the one I'm most excited about.

Brand voice exercises used to be pretty abstract. Describe the brand as a person. Pick adjectives from a list. You'd get useful input but it would live in this conceptual space that was hard for non-marketers to evaluate. Is "confident but approachable" actually different from "authoritative but warm"? In a conference room, those feel like the same thing.

What I do now: participants go through the personality exercise, give me their adjectives and descriptions, and I feed all of it into the LLM right there in the room. Within a few minutes, I'm reading back sample copy written in three or four different voice directions based on what they just told me. The same homepage intro, the same email, the same social post, but in their words, filtered through different interpretations of what they said they wanted.

The room goes quiet in a different way when that happens. People lean forward. They hear their ideas come back as actual language and suddenly they have opinions they didn't have five minutes ago. "That one, that sounds like us. The second one is too formal." You skip past hours of abstract discussion and get to the conversation that matters: does this sound right or not?
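Mechanically, what happens in those few minutes is prompt assembly: one rewrite prompt per voice direction, built from the adjectives the room just gave me, all pointed at the same sample asset. A rough sketch of that step (the prompt wording and sample copy here are placeholders; my real prompts are considerably longer):

```python
def voice_prompts(adjective_sets, sample_asset):
    """Build one rewrite prompt per voice direction from participants' adjectives."""
    prompts = []
    for i, adjectives in enumerate(adjective_sets, start=1):
        tone = ", ".join(adjectives)
        prompts.append(
            f"Direction {i}: Rewrite the following homepage intro in a voice that is "
            f"{tone}. Keep the facts identical; change only the voice.\n\n{sample_asset}"
        )
    return prompts

directions = [["confident", "approachable"], ["authoritative", "warm"]]
ps = voice_prompts(directions, "We build scheduling software for clinics.")
print(ps[0])
```

Holding the asset constant across directions is the point of the exercise: when the only variable is the voice, the room can actually hear the difference between "confident but approachable" and "authoritative but warm."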

Stakeholder alignment mapping

I used to discover stakeholder misalignment days later, sitting at my desk reviewing notes, and by then it was too late to do anything about it in the room. Now the running transcript flags it in near real time. The COO describes the primary audience as enterprise buyers. Twenty minutes later the CMO says mid-market. I can bring that up while they're both sitting at the same table. That conversation is uncomfortable, but it's the conversation that needs to happen, and it's infinitely better to have it during the workshop than to write it up in a brief and hope they sort it out.
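The flagging itself is just bookkeeping over the transcript: who took what position on which topic, and does a new statement contradict an earlier one. The LLM does the hard part of tagging free speech with topics and positions; once it has, the contradiction check looks something like this (speakers, topics, and positions below are invented to mirror the example above):

```python
from collections import defaultdict

def alignment_flags(statements):
    """statements: list of (speaker, topic, position) tuples in transcript order.
    Returns a flag whenever a stakeholder's position contradicts an earlier one."""
    positions = defaultdict(dict)  # topic -> {speaker: position}
    flags = []
    for speaker, topic, position in statements:
        for other, prior in positions[topic].items():
            if other != speaker and prior != position:
                flags.append((topic, other, prior, speaker, position))
        positions[topic][speaker] = position
    return flags

log = [
    ("COO", "primary audience", "enterprise buyers"),
    ("CMO", "brand promise", "speed"),
    ("CMO", "primary audience", "mid-market"),
]
print(alignment_flags(log))
```

The value isn't the data structure, obviously. It's that the flag surfaces while both people are still at the table.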

What surprised me

I expected this to be an efficiency play. Get the synthesis done faster, shorten the turnaround, move on to the next project. And it is that. I've cut the post-workshop processing time by roughly 60 percent, which matters when you're a solo practice juggling multiple clients.

What I didn't expect is that the workshops themselves got better. When I'm not frantically scribbling notes and trying to capture every important statement, I can actually pay attention. I notice when someone's body language doesn't match what they're saying. I can let a silence sit longer instead of rushing to fill it because I'm worried about losing the thread. I follow up on the odd comment that made someone else in the room shift in their chair. The documentation is running in the background, so I can do what I'm actually there to do: facilitate.

The participant reaction surprised me too. I thought people would be weird about it. They're not. Especially when the voice exploration happens, when they hear sample copy generated from their own input, in real time, in the same meeting, there's a visible shift in engagement. Strategy stops feeling theoretical. They can hear it.

Where the tools fall short

The LLM processes what was said. It does not process what was meant. Those are very different things in a room full of stakeholders with competing priorities and organizational politics to navigate.

When a founder says "we're open to repositioning" in a tone that makes it clear they are not, in fact, open to repositioning, a transcript doesn't capture that. When someone gives a diplomatic non-answer to a direct question and the room gets slightly tense, that tension doesn't show up in the text. Reading those moments is still entirely my job, and honestly, it's often where the most important insights live.

The other thing, and this is the one I think about most, is that the models want to resolve tension. Every time. I'll feed in two stakeholders' clearly divergent descriptions of the brand's audience, and the synthesis will try to weave them into some coherent narrative where both people are kind of right. It'll find the common ground, paper over the contradiction, present something that sounds reasonable.

Sometimes that's fine. Sometimes the positions really are complementary and they just needed to be framed better. But sometimes the tension is the whole point. Sometimes what the client needs to hear is: you have a fundamental disagreement about who this brand is for, and no amount of design work is going to fix that. You need to make a decision. The LLM will never tell them that. It will always try to find harmony, even when harmony hasn't been earned. Recognizing when to override that instinct, when the disagreement is the deliverable, that's the part of this work that can't be automated.

I've also found that prompt quality makes or breaks the whole thing. A mediocre prompt gets you a mediocre summary that sounds plausible but misses everything interesting. Writing prompts that actually extract useful signal from workshop transcripts took me months of iteration, and it draws directly on understanding the material well enough to know what to ask for. There's no shortcut there.

What I'd tell someone starting this

Don't try to rebuild your entire workshop process at once. Find the one part that eats the most time for the least creative payoff and start there. For me it was post-session synthesis. For someone else it might be competitive research or interview summaries or first-draft copy.

And be honest about whether you can evaluate the output. The model gives you a starting point. If you can't tell when it's right, when it's close but slightly off, and when it's completely wrong, you're not ready. Go do the work manually for a while longer. The tool is useful precisely because I've done this enough times to know what good looks like, not as a replacement for knowing.

The human stuff is still the human stuff. A founder getting emotional about why they started the company. Two co-founders realizing mid-sentence that they've been building toward different things. The long pause after someone admits out loud that what the brand says and what the brand does aren't the same. I'm not interested in optimizing any of that. I just want to make sure I'm present enough to catch it when it happens.