
Prototype testing means putting a version of your product in front of real users before you launch to everyone. Users interact with your prototype or early product. You watch what they do, where they struggle, and what confuses them. Then you fix what broke before shipping to everyone else.
In 2026, teams run prototype tests on Figma mockups, working prototypes, and live products, both before a feature ships and after. The goal is always the same: reduce the risk of building something people don't want or can't use.
You can launch a product and iterate based on user feedback. That works. Millions of teams do it.
But there's a cheaper way: test before you launch.
The difference between "launch and see what happens" and "test with five users first" is about three days and $300 to $500 in user incentives. The return: you don't spend weeks shipping features nobody understood, fixing workflows that confused people, or managing support tickets for something your user base didn't need.
One usability test with five users catches about 85% of usability problems. That's not from some old textbook. Nielsen Norman Group—the research authority—ran this analysis on hundreds of studies across decades. Five users. 85% of issues found. After five, each additional user teaches you less because you're seeing the same problems again.
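The 85% figure comes from a simple probability model that Nielsen and Landauer published: if each user independently uncovers a given usability problem with probability L (about 0.31 in the projects they analyzed), the share of problems found by n users is 1 − (1 − L)^n. A quick sketch of the math:

```python
def problems_found(n, L=0.31):
    """Nielsen & Landauer's model: expected share of usability
    problems found by n test users, where L is the average chance
    that a single user hits any given problem (~0.31 in their data)."""
    return 1 - (1 - L) ** n

for n in [1, 3, 5, 10]:
    print(f"{n} users: {problems_found(n):.0%}")
# 1 user:  31%
# 3 users: 67%
# 5 users: 84%
# 10 users: 98%
```

The curve explains the diminishing returns: the jump from zero to five users buys you ~84% coverage, while doubling to ten users adds only ~14 more points.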
The compounding return comes from fixing those issues before shipping. Let's say you skip testing and launch. You get real users, real feedback, real data. Great. But now you have to ship a fix, monitor rollout, answer support tickets, and potentially deal with churn from people who had a bad experience with V1. With prototype testing, you fix before they ever see it.
Figma mockups (static prototypes)
Best for testing concept and flow early. Users interact with a clickable Figma prototype—no code, just interactions. You learn whether the idea makes sense before you build it.
Pros: Fast to create, cheap to iterate, low stakes if you need to change direction
Cons: Limited interactivity (only the click paths you wire up), doesn't show real system behavior, no actual data flows
Working prototypes (interactive but not production)
Best for testing interaction and UX patterns. Built with code (React, Vue, etc.) but not connected to production systems. Users see interactions that feel more real than Figma but with test data.
Pros: Real interactions feel more realistic, you catch UX patterns that static mockups miss
Cons: Takes longer to build than Figma, still uses test data rather than real workflows
Live product (production or staging)
Best for testing the real thing before wider release. Staging environment or production canary, fully functional, real data, real workflows.
Pros: Catches issues that test environments miss, real-world behavior and actual workflows
Cons: Higher stakes if something breaks, more complex to set up
Most teams run tests at multiple points:
Early (before build): Figma mockup → learn if the concept works
Mid-build (during): Working prototype → learn if the interaction pattern works
Pre-launch (after): Live product or staging → learn if the real product works
Moderated testing (live, 1:1)
You're on a video call with the participant while they use your prototype. You watch them in real time. You can ask follow-up questions. You can clarify confusing moments on the spot.
Pros: You get depth, can ask "why did you do that?", can probe unexpected moments, real-time conversation
Cons: Requires scheduling (coordination overhead), you need to stay neutral and not help too much, limited to one person at a time
Unmoderated testing (async, recorded)
Participant records themselves using your prototype on their own schedule. They narrate as they go. You review the video afterward.
Pros: Scales to more participants (no scheduling), participants use it whenever they want, you can review at your own pace
Cons: No follow-up questions, you're limited to what they say, less depth on motivations
For most prototype testing, either works. Moderated gives you more depth. Unmoderated scales. If you're testing a clear task ("sign up for this service"), unmoderated is fine. If you're testing a nuanced concept ("does this pricing model feel fair?"), moderated gives you more.
From your network
Ask people you know who fit your user profile. "I'm testing a new feature. Could you spend 30 minutes on a call with me this week?" Most people say yes if they know you.
Your existing customer base
Email your users or ask in Slack/Discord. Offer a $50 gift card. People who've already paid for your product tend to have strong opinions about features.
An external research panel
If you need people fast and don't have a network, Great Question's external recruitment panel gives you access to 6M+ verified professionals. You can filter by role, industry, and experience. Recruiting takes 24 to 48 hours.
Regardless of where you recruit, use a screener survey. Two or three questions to confirm they actually match your user profile. Testing with the wrong people gives you misleading signal.
Setup (5 minutes)
Build rapport. "Thanks for taking the time. This is going to be casual. I'm going to ask you to do a few things with this prototype. If you get stuck, that's totally fine. That's what I'm here to learn about."
Ask context questions about their current workflow. "How do you currently handle [thing your product solves]? What's frustrating about it?" This gives you baseline context.
Task (10-15 minutes)
Give them a specific task. Not a tour. Not "try to find your way around." A task: "You just heard about this and want to [specific goal]. Go ahead."
Then stop talking. Watch. Take notes on:
Where they click first
Where they pause or hesitate
What they say out loud ("hmm," "I wonder if...")
Where they try something that doesn't work
Whether they complete the task
How long it takes
The temptation to help is strong. Don't. If they're stuck, they're stuck. That's the feedback. Your explanation removes the signal.
Follow-up (5 minutes)
After the task, ask:
"Walk me through what you were thinking when you [specific moment]."
"What did you expect to happen when you clicked there?"
"If you told a friend about this, what would you say?"
"What almost made you stop?"
These questions surface the reasoning behind the behavior. The behavior shows you what broke. The follow-up tells you why.
Right after each session, write down:
The one most important thing you learned
One quote from the participant that captures something real
One specific thing you'd change if you could
Do this immediately. Notes written an hour later are half as useful.
After five sessions, look for patterns. A problem appearing in three of five sessions is worth fixing. Something appearing once might be one person's quirk.
If you're using Great Question, AI analysis of your transcripts surfaces themes and links them back to specific quotes automatically. What used to take an afternoon takes 30 minutes.
Test when:
You're testing a new concept (users might not understand it)
You're testing a new user group (you have assumptions about how they work)
You're testing a critical flow (signup, payment, core feature)
You've made a significant change to an existing flow
You have time before launch (even a week leaves room for five quick sessions)
Ship when:
You've run at least three to five tests (enough to see patterns)
Common problems have been fixed
You're not learning anything new from additional tests
Pre-launch (before anyone outside your team has seen it)
Test with people outside your company who match your target users. Run five users through your prototype. Fix the high-signal issues. Ship knowing you caught the biggest problems before your users saw them.
Post-launch (you have real users)
Test new features with your existing user base before shipping to everyone. Recruit from your customer list. These users know your product; testing is faster because there's less context-setting. They also tend to have strong opinions about what's broken, which is more valuable than feedback from strangers who don't know your product yet.
How many users do I need to test with?
Five is the standard for usability testing. Five users catch about 85% of usability problems. After five, you're mostly seeing the same issues again. For confirming a pattern across a larger user base, test more. For checking if a flow works, five is enough.
What's the difference between a prototype and a full product?
A prototype is a version of your product built specifically for testing. It might be a Figma mockup, a working version with test data, or a staging environment. The point is it's not the version your real users see. Testing on a prototype lets you iterate and fix without affecting your production environment or real users.
Do I need a researcher to run prototype tests?
No. Prototype testing is straightforward: give the person a task, watch them do it, ask why afterward. Any founder or PM can run it. Having a researcher is nice for complex research. Pre-launch testing of a feature is not complex research. It's you, five users, and one question.
Can I test on both Figma and code prototypes?
Yes. Test on Figma early. Test on a working prototype as you build. Test on the live product before launch. Each reveals different things. Figma catches flow issues. Working prototypes catch interaction issues. The live product catches real-world behavior issues.
What if I don't have time to recruit participants?
If recruiting is the blocker, an external panel solves it. Great Question's panel is 6M+ people. You can recruit people in your user profile in 24 to 48 hours. Is that more expensive than recruiting from your network? Yes. Is it faster? Also yes. If you're launching in a week and want to test, external recruitment is the lever.
Should I test on mobile, desktop, or both?
Test on whatever your users will actually use. If your product is mobile-first, test on mobile. If it's web, test on web. If it's both, test on both. But start with what you're optimizing for. You don't have time to test on every device; test on the one that matters most.
What if nobody shows up to the test session?
That means your confirmation process failed. Get confirmation the day before. Send a reminder two hours before the session. Ask participants to confirm the video call link. This prevents most no-shows. If someone still doesn't show, run the session with a backup participant or recruit someone else. Always have a backup lined up.
Can I test with people I know?
Yes, but be aware of the bias. People you know tend to be polite and less direct about what's broken. They also know you and might not represent your actual target user. Test with people you know plus people you don't. The strangers will be more honest.
Prototype testing is simple. Five people. One task each. Watch them. Ask why. Fix what broke. Then ship.
Ready to start? Great Question supports both moderated and unmoderated prototype testing. Set up a study, recruit participants, and see where your users get stuck before they ever see your live product.
Related: How to test your Lovable app with real users · AI Moderated Interviews for larger-scale research · How to validate your vibe-coded app
Tania Clarke is a B2B SaaS product marketer focused on using customer research and market insight to shape positioning, messaging, and go-to-market strategy.