A practical framework for running customer discovery as a PM — without needing a dedicated researcher for every question.

During my time as a senior product marketer at Atlassian, I spent a lot of time talking to customers. Or trying to.
Customer discovery, the structured practice of learning what customers actually need before you build, is supposed to be foundational for product teams. In reality, getting access to speak to customers, let alone the right ones, was incredibly difficult.
Our main channel for finding research participants was the community hub. Which sounds great until you realise who actually hangs out in community hubs: the superfans and the frustrated. This left a huge gap in the middle.
You'd run a discovery study thinking you had a decent cross-section of your user base, and you'd end up with feedback skewed to the extremes. It wasn't entirely useless, but it didn't provide the full picture either.
The other thing that slowed us down: nobody could tell you what had already been researched. There was no central repository for the product teams, which meant no shared memory of customer knowledge. Every new question or quarterly roadmap discovery triggered new research because it was easier to start fresh than to dig through old Google Drive folders trying to find whether someone had already looked into it.
I didn't fully appreciate how much this cost us until later. We were duplicating effort constantly.
If either of those sounds familiar, you're not alone. Most product teams are dealing with the same two gaps: reaching the right customers, and building on what you've already learned. This guide covers how to close both: with a practical framework for running customer discovery that actually scales, without needing a dedicated researcher for every question.
Let's be precise: customer discovery is the practice of structured learning about what customers need, how they work, and why they make decisions. It's intentional. It's documented. It's designed to prevent you from accidentally confirming what you already believe.
Where most product managers go wrong is thinking it has to be complicated. Discovery doesn't mean running a 30-person study with a moderator guide and a highlight reel. It means building a habit of asking real questions to real customers before you ship something that nobody wants.
The structural gap most product managers face: discovering customer needs usually requires researchers. And researchers are busy. They have roadmaps of their own. So discovery becomes a bottleneck.
The alternative that actually works: give product managers direct access to customers with enough structure that they can do discovery well. Not instead of research teams (teams are valuable). But in parallel. For the moments when you need to know something and waiting two weeks isn't an option.
This is why companies like Brex restructured their research entirely. Not to replace researchers, but to let every PM learn from customers continuously. The result: more customer touchpoints, faster decisions, and better products.
You don't need a fancy lab setup or a moderator to do customer discovery well. You need three things: conversations, breadth, and testing. Most PMs are missing at least one.
User interviews answer one question: why? Why do customers prefer your competitor? Why did they drop off during onboarding? Why do they keep using the product even though feature X is broken?
You don't need a researcher to ask good interview questions. You need structure and enough discipline to not lead the witness.
Here's the framework that works:
Before the interview: Know what you're trying to learn. Write it down. Not "how do you use our product" (too broad). Instead: "How do teams decide whether to adopt a new security tool?" or "What prevented you from renewing your subscription?" Specificity changes everything.
During the interview: Ask open questions first. "Tell me about the last time your team evaluated security software" is better than "Did you look at competitors?" The second triggers yes/no answers. The first gets story.
After the interview: Document what you heard. Not a transcript: a summary of insights and direct quotes. You'll thank yourself when you're presenting findings two weeks later.
The mistake most PMs make: they run one interview and make a decision. One conversation is anecdote, not discovery. You need at least 5-8 conversations before you should trust a pattern. I also saw a fair bit of what I call "over-research," where product managers want to do discovery with 50-60 customers, completely burning through the available customer panel.
Great Question's user interview features let you recruit participants, run interviews, and share findings. No toggling between recruiting calls, spreadsheets, and Slack.
Interviews tell you the story. Surveys tell you whether the story is real. Whether what you heard from five customers is something 50% of your customer base experiences, or just the vocal five.
Here's the difference between a survey that teaches you something and a survey that wastes everyone's time:
A good survey asks about behavior or priorities. "How often do you use feature X?" "When evaluating a new tool, what matters most?" A bad survey asks leading questions or assumes context. "Don't you agree that mobile support is critical?" kills the value.
A good survey is short. Five to seven questions. People answer them. A 20-question survey gets abandoned halfway through.
A good survey has clear context. "We're thinking about changing how this feature works. Tell us about how you currently use it" works. Random polling without context just adds noise.
The sweet spot for customer discovery: run a survey after your interviews to validate what you heard. If your interviews suggested that 80% of users bypass a particular workflow, a quick survey to 200 customers tells you whether that's true or just the customers you happened to talk to.
Great Question's survey features connect you to participants from your target market, so you're not just polling your loudest customers.
Here's the gap nobody talks about: customers are terrible predictors of their own behavior. They'll tell you they want a feature. They'll tell you they'd use something. And then they won't.
That's not because they're lying. It's because predicting your own behavior is cognitively hard. Watching them actually use something? That's truth.
Testing doesn't require a formal lab. It requires showing customers a prototype or feature and watching what they do. Not asking what they think. Watching.
An unmoderated test (where participants click through something on their own time) is faster than scheduled interviews. You get five people's worth of behavior data in 24 hours. Moderated testing (where you're on the call) gives you context for why they did what they did.
The hybrid approach that scales: run an unmoderated test with 8-10 participants to see what they do. If the results are clear (everyone gets stuck in the same place, everyone finds the value prop, whatever), you're done. If you're confused, pick 2-3 people to follow up with.
Great Question's testing platform means you can recruit real customers, show them a prototype, and see where they struggle. All without leaving your product.
The thing that kills most PM discovery: you ask too many favors of the same people.
You have 20 customers you know well. They're willing to talk. So naturally, you keep calling them. You ask them about feature ideas, you ask them about competitor research, you ask them about pricing. By month three, they're not responding anymore.
Scaling discovery means having a system bigger than "people I know."
Here's what that looks like:
Know your recruiting criteria. "We want to talk to mid-market companies" is different from "We want to talk to companies with 50-500 employees who use Slack." The second one is a real recruiting criterion. It's specific. It's measurable. It lets you find new people without calling existing customers.
Build a pool. Not everybody needs to talk to you, but somebody does. You need enough potential participants that you can run three or four customer conversations a month without fatigue. For most products, that's 50-100 people who've opted into talking to you.
Use templates. This sounds mechanical, but it's what lets you move fast. A screener template (the short survey you send before recruiting someone), an interview guide template, a testing scenario. These save you hours and keep your discovery consistent. Great Question's Research Hub gives you templates built for this.
Document everything, and make it searchable. Not for a formal report (though that's useful). For your future self and every PM who comes after you. Six weeks from now you'll want to remember that three customers mentioned they needed X. Six months from now, a teammate will start a discovery cycle on the same topic and have no idea your research exists. That's the duplication trap I ran into at Atlassian. Great Question's research repository stores interviews, surveys, and findings in one searchable place, so the next person doesn't start from zero.
The companies that scale discovery don't do it by working harder. They do it by building structure that lets them work smarter.
ServiceNow cut their participant recruitment time from 118 days to 6 days by systematizing their process. Not by finding better customers. By having a system where they could reach customers consistently.
If you're not doing structured discovery as a PM, one of three things is happening:
Option 1: You're not talking to customers at all. This is the most common. Roadmaps get built on intuition, stakeholder requests, and data dashboards. Customers are abstract. This leads to products that solve problems nobody has.
Option 2: You're talking to customers, but without structure. You get enthusiastic feedback. You ship it. Six months later it's used by 3% of your customer base. The problem: unstructured conversations have brutal confirmation bias. You hear what confirms what you already believe.
Option 3: You're trying to use your research team, but the timeline doesn't work. This is the least common, but most painful. Your research team is good at what they do. They're also slammed. You wait three weeks for research. By then you've already made the decision.
The cost of these approaches isn't always obvious. It's not that you're shipping things customers hate (though sometimes that's true). It's the opportunity cost. The features that could've been better. The roadmap decisions that could've been different.
Asana shrank their feature validation timeline from 2 weeks to 2-3 days by giving PMs direct access to customer research.
You don't need buy-in from a research team or permission from leadership. You can start this week.
Week 1: Define what you want to learn.
Pick one decision you need to make. Not your whole roadmap. One thing. "Should we change our pricing model?" or "Will customers adopt feature X?" or "What's the top blocker for renewal?"
Write it down: "We want to understand [specific question] because [why it matters for our decision]."
That's your discovery goal.
Week 2: Interview 5-8 customers.
Use your recruiting criteria (mid-market, has Slack integration, active in the last 30 days, whatever). Find people who fit. Schedule 30-minute calls.
Ask open questions. Listen for stories, not yes/no answers. Take notes. Don't try to record everything. Just capture the key insights and direct quotes.
When you're done, spend an hour writing down patterns. "Three customers mentioned they use Slack for X." "Everyone struggled with onboarding because of Y." Patterns matter. Single mentions don't.
Week 3: Validate with a quick survey.
Take your interview insights and turn them into a survey. "Did you experience this issue?" "How important is this to your workflow?" 5-7 questions. Send it to 150 customers.
You'll either confirm what you heard or realise you were wrong. Both answers are useful.
Week 4: Test if possible.
If your decision is about a feature, build a prototype. Show it to 8-10 customers. See what they do. That's your decision data.
If your decision is not about a feature (should we enter a new market? what's holding back adoption?), skip this step.
Then: Make the decision. You have stories, pattern confirmation, and behavioral data. You're good to decide.
Here's the thing that makes product teams nervous about PMs doing their own customer research: rogue research. Unstructured conversations. Biased samples. Conclusions that don't hold water.
Those are real risks. And they're solvable with structure.
The companies that scaled PM-led discovery didn't do it by just telling PMs "go talk to customers." They did it by building guardrails:
Approved recruiting criteria. "We're talking to these types of customers because [reason]." This prevents sampling bias and keeps you focused.
Interview guides. Not scripts. Guides. Here's what we're trying to learn. Here's how to ask about it without leading the witness.
Documented findings. Every interview turns into a summary of insights, not a rambling transcript. This discipline catches your own confirmation bias.
Participant management. You can't call the same five customers 20 times a year. A system that manages your participant pool means you have fresh voices and nobody gets burned out.
Great Question was built around these guardrails. Recruiting criteria built into the platform. Interview templates. Automatic documentation. A managed participant pool so you're not just calling the same people.
The point: PMs doing customer research is good. PMs doing it without structure is how you get bad decisions. The infrastructure you need isn't complicated, but it matters.
Once you've run your first discovery cycle, the question becomes: how does this work when you have five PMs and 50 products?
The honest answer: it doesn't work without one person owning the system.
The best approach: one person (usually on a research or product team) owns the discovery infrastructure. They don't run all the research. They manage the participant pool, maintain templates, track what's being learned, and spot conflicts.
Brex took this approach and ended up with researchers who weren't running interviews. They were enabling PMs to run better interviews. The result: a single-digit research team supporting 100+ people doing discovery, because the researchers were enabling the work, not doing it all themselves.
Flight Centre's research team went from a bottleneck to a multiplier by enabling PMs. They saved $300-400K a year by reducing redundant research and participant fatigue.
Procare Systems saved $15K+ by having PMs do targeted discovery instead of running expensive external research for every small decision.
The other unlock at scale: AI synthesis. When you have 50 PMs running discovery across a year, you're sitting on hundreds of conversations. The insights are there, buried in transcripts and notes. Great Question's AI analysis surfaces patterns across studies, so you can ask "what have we already learned about onboarding friction?" and get an answer in seconds instead of digging through folders.
The pattern: when you scale PM-led discovery, you're not replacing research teams. You're allowing research teams to do higher-level work while PMs handle tactical discovery.
You can do basic customer discovery with Slack, Calendly, Google Forms, and a spreadsheet. Many PMs do.
Where it breaks down: managing a participant pool. Keeping interviews consistent. Remembering what you learned three months ago. Finding the right people for the right questions without calling your close contacts again.
That's where purpose-built research platforms matter.
Compared to UserTesting or other tools, Great Question's key difference is speed and access. You need to talk to customers fast, on your timeline, without a research team coordinating. Most tools weren't built for PMs. They were built for research teams.
The feature that matters most if you're a PM doing discovery: prototype testing, which most PMs underuse. Show a design or prototype to real customers and see exactly where they get stuck. No waiting for a research team. No moderation required.
You've probably heard "continuous discovery." It's a real concept. But it often assumes you have a dedicated researcher. Most PMs don't.
Continuous discovery for a PM is different.
The goal isn't to replace research teams. It's to get customer truth into your decisions faster, without waiting for infrastructure.
Can one PM really do effective customer research?
Yes, with structure. One PM can run 4-8 customer conversations a quarter, validate with a survey, and get meaningful insights. The key is knowing what you're trying to learn, having a consistent approach, and documenting everything.
What if your research team is uncomfortable with PMs doing research?
This is a real concern, but it's solvable. The best research teams don't do all research. They enable and quality-check it. Have a conversation: "We want to move faster on small decisions. Can you help us build a process that works?" Most research teams will say yes once they see the alternative is rogue research.
How many customers do you need to interview to make a decision?
For discovery: 5-8 conversations before you spot patterns. For validation: 150-200 survey responses if you need statistical confidence. For testing: 8-10 people on a prototype before you see where they struggle.
Should you recruit from your existing customer base, or outside it?
Both. Existing customers tell you about current problems and feature ideas. Prospective customers or churned customers tell you what could be different. Most discovery should be existing customers (they know your product). But every quarter, talk to 2-3 people who almost bought and didn't, or almost renewed and didn't.
What if you don't have direct access to customers?
This is the B2B enterprise problem. Your customers are behind a procurement wall. In that case, managed participant recruitment is essential. You need a way to reach people in your target role at companies matching your ICP, without asking sales to extract them from their accounts.
How do you prevent confirmation bias in your own interviews?
Document everything. When you write down what you heard, bias becomes visible. "That customer said they loved feature X" is different from "That customer said they used feature X. When asked about it unprompted, they said it was fine, but not essential." The second is more honest. Also: bring a colleague to interviews when possible. A second listener catches things you missed.
Can you do customer discovery asynchronously?
Yes. Surveys are asynchronous. Unmoderated testing is asynchronous. The advantage: scale and flexibility. The disadvantage: you don't get to ask follow-up questions. Use async for breadth (surveys) or observation (testing). Use interviews (synchronous) when you need to understand why.
What's the difference between customer discovery and customer validation?
Discovery is learning what problems exist and what customers need. Validation is confirming your solution actually solves the problem and customers will use it. Most PMs conflate these. In practice: discovery happens before you build, validation happens before you ship.
Customer discovery shouldn't require a researcher for every question.
Most PMs aren't avoiding customer conversations because they're too busy. They're avoiding them because the infrastructure feels complicated. Getting research participants takes too long. Running interviews requires a moderator. Analyzing findings needs someone trained in research. And finding out whether someone already answered your question six months ago? That used to mean asking around and hoping for the best.
But discovery at PM scale doesn't need to be complicated. It needs to be consistent.
You need a way to reach customers fast. A framework for asking good questions. Templates and guardrails so you're not starting from scratch every time. A documented approach so what you learn actually informs your decisions.
That's what scales discovery.
The companies crushing this (Brex, Asana, ServiceNow, Flight Centre) didn't do it by working harder. They did it by building structure that lets every PM talk to customers regularly. Not instead of having researchers. In addition to researchers. Because customer truth matters too much to wait for one team to do all the learning.
You can start this week. Pick one decision. Talk to five customers. See what you learn. Then make it a habit.
Want to see how customer discovery works in practice?
Check out how Brex scaled their research infrastructure, or explore Great Question's features built specifically for PMs doing customer research.
Tania Clarke is a B2B SaaS product marketer focused on using customer research and market insight to shape positioning, messaging, and go-to-market strategy.