A discussion guide is a structured conversation plan with four stages: warm-up (build trust), core questions (primary research objectives), probing (extract depth), and wrap-up (confirm understanding). The guide should flow logically, allow room for tangents, and stay flexible.
Unlike a question list, a guide maps the conversation's emotional arc and transition points. Good guides feel natural even though they're methodically planned. Use warm-up questions to establish rapport before diving into sensitive topics. Order core questions from general to specific.
Have a suite of probes ready ("Tell me more," "What did that feel like?" "Can you walk me through that?").
Know your research objective inside and out so you can deviate confidently. If you're recruiting participants, our participant recruitment features can help you find the right people and set expectations early.
Most user interviews feel fine while they're happening. You're talking, your participant is talking, everyone's nodding. You stop the recording and think: that went pretty well.
Then you sit down to write up your notes and realize you don't know what you learned.
You spent twenty minutes on background and barely touched your core research questions. You probed one frustration into the ground and skipped three others entirely. You have five transcripts and nothing that lines up across them -- different questions, different depth, no way to compare what you heard. The data is there, somewhere, but it doesn't hold together.
This is what happens when you build a question list instead of a discussion guide.
The difference isn't complexity. A discussion guide can be one page. The difference is that a guide is a conversation architecture. It accounts for the emotional arc of an interview, plans for how questions should be ordered, and gives you enough structure that when a participant says something unexpected, you can follow it and still find your way back.
This post covers what a real discussion guide does, what you get from having one, and how to build it.
Researchers who use discussion guides consistently get more usable data. That's the practical case. But the benefits are specific enough to be worth naming.
You can actually listen. When you're improvising questions on the fly, half your attention is on what to ask next. A guide takes that off your plate. You know what you've covered and what comes next, so during the actual conversation you can be fully present with the participant. You'll catch things you'd otherwise miss.
You can follow tangents without losing your place. This is underrated. When a participant brings up something unexpected and interesting, researchers without a guide often ignore it because they're not sure they'll get back on track. With a guide, you know exactly where you are. You can go down a rabbit hole and return to the structure when you're ready.
Your data is comparable across sessions. If you interviewed five people and asked them different questions in different orders, you can't really compare what they said. A guide creates consistency -- not word-for-word scripted questions, but a consistent structure and similar core questions across every session. That's what makes patterns visible.
You end interviews with what you came for. Without a guide, it's easy to reach the end of a session and realize you never got to your most important question. A guide with time allocation prevents that. You know where you are in the conversation and roughly how much time is left.
You feel more confident, which makes you better. This sounds soft but it's real. When you're confident in the structure, you're less tense, more conversational, and better at reading the room. Participants can tell.
You already know interviews are better than surveys. What you might not know is that bad interviews can be worse than no research at all.
Without a guide, you get:
Uneven coverage -- twenty minutes on background, core questions barely touched.
Inconsistent depth -- one frustration probed into the ground, three others skipped entirely.
Incomparable data -- transcripts that don't line up across sessions, so patterns stay invisible.
A good discussion guide prevents all of this. It's not a script you recite word-for-word. It's a structure you follow so you can be fully present with the participant.
When you have a guide, you know where you are in the conversation. You know what you've covered and what comes next. You can listen instead of plan. You can follow interesting tangents because you're confident you'll get back on track.
The best researchers use guides because it lets them be more human, not less.
A question list is flat. You write down five to ten questions, usually in the order they occurred to you, and work through them. The participant answers, you move on.
A discussion guide has structure. It accounts for the fact that conversations have rhythm -- that you can't ask about sensitive topics before you've built some trust, and that the order of questions shapes the quality of answers you get.
A real discussion guide includes:
An explicit objective. Not "learn about their workflow" but "understand the decision criteria when they choose a design tool." The more specific your objective, the better you'll probe when something interesting comes up.
Warm-up questions. Short, personal questions designed to get the participant talking before you ask anything important. Not data. Relationship-building.
Core questions. The questions you're actually trying to answer, ordered from broad to specific.
Probing techniques. Not a fixed list but a menu you'll draw from based on what the participant says.
Stage transitions. Where do you shift from warm-up to core? These moments need intention, or you'll just stay in warm-up too long.
Time allocation. How much time on warm-up vs. core vs. probing? This forces you to prioritize before the interview starts.
Closing ritual. The same wrap-up for every participant so you're not improvising the end and potentially missing something.
A question list is missing most of this. That's the gap.
If you're running moderated interviews, our AI-moderated interview tool can handle parts of this structure automatically. But understanding the structure yourself matters, because you'll know when to override what it suggests.
Here's the architecture that researchers use most often. It works because it respects conversation dynamics.
Stage one: warm-up. Your job here is to build safety and get the participant talking.
These questions are not analyzed. They're not data. They're relationship.
Warm-up questions should be:
Short and easy to answer.
Personal without being invasive.
About the person, not your product.
Examples:
"Tell me a bit about your role and how long you've been doing it."
"What does a typical day look like for you?"
"What brought you to [company/industry]?"
"When you're not working, what do you like to do?"
The goal is simple: they should feel comfortable talking by the time you hit your first real question.
A common mistake: asking warm-up questions that are actually research questions in disguise. "How long have you used our product?" is not warm-up. It's data collection. Save it for later.
Stage two: core questions. These are the questions that address your primary research objectives.
Order them from broad to specific. This matters because specificity can anchor answers. If you ask "what do you think about design tools" before "what's your current tool," you prime them to evaluate tools. You want general impressions first.
Core questions should:
Be open-ended, not yes/no.
Move from broad to specific.
Trace directly back to your research objective.
Examples from different research contexts:
For a design tool study:
For an onboarding study:
For a feature discovery study:
Plan on four to six core questions, not ten. You won't get to ten. You'll probe, explore, and go sideways.
Stage three: probing. This is where you extract depth.
Most researchers don't probe enough. They're afraid of silence or think one answer is enough. It's not.
Probes are not new questions. They're follow-ups that dig into what they already said.
Have a menu of probes you can draw from:
"Tell me more about that."
"What did that feel like?"
"Can you walk me through it?"
"What was the impact?"
"How often does that happen?"
The most powerful probe is often silence. Let them fill it.
Don't write all your probes in advance. Write your core questions and a few probe starters. Then improvise based on what you hear.
Stage four: wrap-up. Same ritual every time.
This ensures you don't miss anything.
Wrap-up questions:
"Is there anything else you think I should know?"
"What's one thing you'd want me to remember from this conversation?"
The wrap-up also gives you a chance to fact-check. "So if I understand correctly, you're currently using X and want to be able to do Y. Is that right?"
Start with your research objective. Write it down and make it specific. "Understand why teams abandon our workflow templates after the first week" is useful. "Learn about workflows" is not. Every question you write should trace back to that objective.
Then write your warm-up questions. Three to five. Personal, easy, genuine.
Next, your core questions. Start general, go specific. For each one, note why you're asking it -- which part of the research objective it connects to. If you can't answer that, cut the question.
Then write your probes. Not all possible probes, just the patterns you'll want to follow. "If they mention frustration, ask: what was the impact, how often does that happen, what would fix it."
Add time allocation. If you have 45 minutes:
Warm-up: 5 minutes
Core questions: 25 minutes
Probing on priority topics: 10 minutes
Wrap-up: 5 minutes
Then test it. Run one interview with the draft guide. You'll immediately see what works and what doesn't. Most guides need adjustment after the first session -- that's expected.
Here's a template you can adapt.
Research objective: Understand how customer success teams decide whether to expand their use of our platform to new departments.
Participant: An existing customer with 20+ users, currently using one product line.
Interview length: 45 minutes
Warm-up (5 minutes)
"Tell me a bit about your role and how long you've been at [company]."
"What does a typical day look like for you?"
"What got you interested in joining [company]?"
Core questions (25 minutes)
"Walk me through how you originally adopted our platform. What was that process like?"
Probe: Tell me more about that. Why that approach? What did you hope would happen?
"Since you've had it in place, what's been the biggest win?"
Probe: How did that impact your team? What changed because of that?
"I'm curious about the other departments at your company. Have you thought about expanding our platform to any of them?"
Probe: What's stopping you? What would need to happen first?
"If you were going to pitch this to another department, what would you tell them?"
Probe: Why those points specifically? What matters most to them?
Probing on priority topics (10 minutes)
If they haven't mentioned budget constraints, ask: "How do budget cycles affect these kinds of decisions?"
If they haven't mentioned internal champions, ask: "Do you need other people sold on this before you can move forward?"
If they haven't mentioned competitors, ask: "Are other solutions part of the conversation, or is it more about whether to expand at all?"
Wrap-up (5 minutes)
"Is there anything else you think I should know?"
"What's one thing you'd want me to remember from this conversation?"
"So if I understand correctly, the main barrier to expansion is [summary]. Is that accurate?"
This structure takes 45 minutes in real time. It feels natural. It gets data. It allows for tangents.
Writing leading questions. "Don't you find our tool makes you more productive?" You've already decided the answer. Ask instead: "How would you describe the impact of the tool on your work?"
Asking multiple questions at once. "How's your workflow, and what tools are you using, and what frustrates you?" Pick one. Wait for the answer. Then ask the next.
Jumping into sensitive topics without rapport. If you need to ask about budget, failure, frustration, or change, you need to have had five minutes of conversation first. It matters.
Not probing enough. "Yes" is not an answer. "I like it" is not data. Probe until you understand why. What specifically? What happened next? What was the impact?
Treating the guide as a script. If you're reading questions verbatim, you're not listening. Use the guide as a frame. Let the conversation flow.
Not pilot testing. Run one interview with your draft guide. It will reveal what doesn't work.
Ignoring interesting tangents. If a participant brings up something unexpected that connects to your research objective, follow it. That's where insights live.
If you're conducting a large research study with many participants, consistency matters more. Our AI repository can help you organize data and spot patterns across interviews once you've collected them.
You're three interviews in and a question isn't working. Change it.
The guide is yours. The point of having one is that you're confident enough to modify it when needed.
Be intentional about changes, though. Ask yourself: is this question broken because it's poorly worded, or because my research objective was unclear? Have I asked it in enough interviews to know it's not working? If I change it, can I still compare data across sessions?
Most guides stabilize after two or three interviews.
The four-stage structure works for moderated interviews. For other methods:
Unmoderated interviews. You can't probe in real time, so you need more detailed written questions. Use the same structure but add follow-up prompts in writing. "Please describe what that process looked like" can be followed by "What was challenging about that step?"
User testing sessions. Same structure: warm-up, task-based core questions, probing after each task, wrap-up. The probing happens per task, not all at the end.
Focus groups. The guide matters more in a group because you're managing discussion across multiple people. Anticipate where tangents might pull the group apart and plan your transitions carefully.
Surveys. A discussion guide doesn't apply directly, but the same logic holds: easy questions first, sensitive questions later, open-ended questions before rating scales.
A well-built discussion guide will also inform how you design your survey questions or your unmoderated study. The architecture carries across methods.
Our AI study creator can generate a discussion guide from your research objective. It's genuinely useful -- it saves time and catches common structural mistakes.
But here's what it can't do: understand the specific context of your research the way you do.
The AI doesn't know that three departments tried to adopt the platform and failed. It doesn't know your CEO is skeptical, so you need participants to articulate ROI in their own words. It doesn't know the history.
That's why understanding the structure matters. When AI generates a guide and something feels off, you'll know how to fix it. You'll know why warm-up matters even when it's not "data." You'll know that the order of core questions shapes answers. You'll know when a probe is better than moving to the next question.
Use AI to build the scaffolding fast, then use your own judgment on the details.
If multiple people are running interviews from the same guide, detail matters more than it does when you're working alone.
Each person interprets open-ended instructions differently. One researcher spends 15 minutes on warm-up, another spends three. One probes deeply on tangents, another sticks rigidly to the outline. The data becomes hard to compare.
If that's your situation: spell out the time allocation for each stage, write out the transitions, and agree on probing norms before the first session runs.
If you're recruiting participants for a larger study, our participant recruitment features can help confirm participants understand what they're signing up for before the sessions begin.
Q: How long should a discussion guide be?
Usually one to three pages. Long enough to cover the detail you need, short enough that you can reference it mid-conversation without losing your place. The goal is to see your full conversation architecture without scrolling.
Q: Should I memorize my discussion guide?
No. But you should know it well enough that you're not reading from it during the interview. Glance between questions. Don't read while the participant is talking.
Q: What if I'm only doing a few interviews?
A guide matters more with a small sample, not less. With fewer sessions, consistency is everything. It ensures you're not comparing interviews where you asked completely different things.
Q: Can I use the same guide for different participant types?
Mostly yes, but adjust the warm-up and framing. The research objective stays the same. If you're interviewing both current customers and churned accounts, the warm-up and session framing will differ even if your core questions overlap.
Q: How do I know if my questions are too leading?
Read them out loud. Does the question already contain an answer? Could a reasonable participant give you the opposite response? If not, it's leading. Rewrite it as an open invitation to describe their experience rather than confirm yours.
Q: What if a participant goes off on a really interesting tangent?
Follow it if it connects to your research objective. You can always return to the guide. Tangents are often where the real insights are -- this is one reason a guide is a frame rather than a script.
Q: How do I handle interviews that run long?
Cut probing before you cut core questions. You need answers to your core questions; you can live without exhaustive follow-up on everything. Mark which core questions are essential before the interview starts so you know what you can't skip.
A discussion guide gives you something a question list can't: the ability to be fully present in the conversation. You're not planning your next question while someone is talking. You're not scrambling to get back on track after a tangent. You're not leaving sessions wondering what you actually learned.
The best guides are simple: warm-up, core questions, probes, wrap-up. They get tested after the first interview and adjusted. They're followed with flexibility, not read like a script.
If you're building guides for a team, make them detailed enough that anyone can pick one up and run a consistent session. If you're doing this alone, build one anyway. Even a single interview is worth doing right.
When you're ready to organize what you've learned, our AI repository can help you tag insights, spot patterns, and share findings with your team. But first, get the conversation right.
Tania Clarke is a B2B SaaS product marketer focused on using customer research and market insight to shape positioning, messaging, and go-to-market strategy.