How are product builders using MCP in 2026?
How 113 product and research professionals are connecting their work to AI — and what they plan to do with it.
The place where work gets done has moved. Claude, ChatGPT, Gemini, Cursor: these are the spaces where PMs write specs, designers draft concepts, and researchers think through findings. Research teams are starting to bring their data into those spaces instead of keeping it locked in tools that only researchers open.
Teams aren't waiting to see if connecting research to AI is worth it. They're already doing it the hard way — downloading transcripts, copying quotes, pasting highlights into prompts one at a time. What they want is a direct connection.
113 product and research professionals across 90+ companies
Product managers, designers, and ops professionals made up more than half of respondents. Over 15% hold Director, VP, or Head-level titles — the people deciding how their teams work, not just experimenting on weekends.
Role breakdown — 113 respondents
Claude, by a wide margin
Nearly 9 in 10 respondents use Claude as part of their regular workflow. This audience has already adopted AI. The question now is what research data they can bring into it.
LLM adoption — % using each tool regularly
How many LLMs do you use regularly?
Most teams use more than one tool
Only 31% of respondents stick to a single LLM. The rest use two or more, with the sweet spot at two to three tools.
Most common tool combinations
| Combination | % of respondents |
|---|---|
| Claude only | 27% |
| ChatGPT + Claude + Gemini | 13% |
| ChatGPT + Claude + Gemini + Cursor | 10% |
| Claude + Gemini | 10% |
| ChatGPT + Claude + Cursor | 7% |
| Claude + Cursor | 7% |
Claude appears in every top combination. But the variety tells you something: teams move between tools depending on the task. Research data that only works in one environment misses more than half the workflow.
Cursor is worth pausing on
One in three respondents uses Cursor, a tool built primarily for writing code. Its presence in a research-oriented survey tells you the line between “people who build” and “people who research” is getting blurry. Product designers use Cursor to prototype. PMs use it to explore data. And they want research context while they do it.
One senior UX researcher described pulling insights into Cursor to combine qualitative and quantitative research and build internal tools for her team. The research workflow and the build workflow are becoming the same workflow.
We asked respondents to describe how they’d use a direct connection between their research repository and their AI tools. The open-ended responses fell into clear tiers.
What teams want to do with connected research
The simplest, most common request: stop copy-pasting. Teams want to pull research directly into Claude without switching context. The research should come to where the work is happening.
"Today I'm downloading transcripts manually into Claude to summarize calls into main user problems. It would be so cool to simply query the whole set of past interviews."
Head of Product
Ask questions across everything a team has ever learned. Not the last study. All of it, searchable in natural language.
"We want to be able to chat with our research repository and ask questions across transcripts to surface cross-study findings."
Sr. Director, Product Design
Rather than manually reviewing five studies to find patterns, teams want AI to surface them — especially for planning cycles and leadership readouts.
"The biggest use case is pulling research insights into a single repository to synthesize themes across studies and build artifacts and deliverables."
Director of UX Research
Cut the operational overhead of running studies: scheduling, screener setup, follow-ups, status tracking. More time for analysis, less for logistics.
"Automate customer discovery, transcription, scheduling and note synthesis and sharing."
Product Manager
Pull the key moments from a 60-minute recording instead of scrubbing through the whole thing. Create highlight reels without the manual work.
Build an AI-powered research repository that anyone in the org can query, not just the researcher who ran the study.
"We are building a 'living insights ecosystem' to replace our static research repo — connecting all our research tools to Claude Code to have a central agent to train as our 'librarian.'"
ResearchOps Lead
Make research available to people who don't typically open research tools: sales, marketing, support, execs. Research as an org-wide resource, with guardrails.
"Democratizing research and making it easier for people across the company to integrate research findings into their regular work."
UX Research Manager
What connected research looks like in practice
Before: A PM needs insights from a study run six months ago. They message the researcher. The researcher finds the report and pastes three quotes into Slack. The PM uses two. The rest stays buried.
After: The PM opens Claude and types: "What have we learned about how users find this feature?" Claude pulls highlights from three studies, cites each source, and flags a pattern they hadn't considered. Four minutes.
The research didn't change. Who could reach it did.
MCP: the infrastructure making this possible
MCP (Model Context Protocol) is an open standard that lets AI tools query external data sources directly. Instead of exporting and pasting, the AI tool reaches into your research repository and pulls what it needs. Your data stays where it is. The AI comes to it.
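Concretely, MCP is JSON-RPC 2.0 carried over stdio or HTTP. When an AI tool needs something from a connected repository, it issues a `tools/call` request to the MCP server. A sketch of what that request might look like — the tool name and arguments here are hypothetical, not any vendor's actual API:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_highlights",
    "arguments": { "query": "feature discoverability", "limit": 5 }
  }
}
```

The server responds with the matching highlights, which the AI tool folds into its answer — no export, no pasting.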
| Capability | Details |
|---|---|
| What you can query | Studies, sessions, transcripts, participant data, highlights, insights, reels, full-text search across your entire repository |
| What you can do | Create screener surveys, set up moderated and unmoderated study types, add candidates to studies |
| Where it works | Claude Desktop, Claude Code, ChatGPT, Cursor |
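In Claude Desktop, registering an MCP server is typically a few lines in `claude_desktop_config.json`. The server name, package, and environment variable below are illustrative placeholders, not actual install instructions for any specific repository:

```json
{
  "mcpServers": {
    "research-repo": {
      "command": "npx",
      "args": ["-y", "@example/research-mcp-server"],
      "env": { "RESEARCH_API_KEY": "your-api-key" }
    }
  }
}
```

Once registered, the repository's tools appear in Claude's tool list automatically, and queries like the ones respondents described happen in place.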
What we took away
The adoption question is settled
88% of respondents use Claude. 43% use ChatGPT. Most use multiple tools. These teams aren't evaluating whether to use AI — they're trying to feed it better inputs. The missing piece is a direct line from the research to the AI workspace.
The demand is specific
The top two use cases — pulling insights into AI (62%) and cross-study querying (55%) — describe a concrete gap teams are already working around manually. MCP removes the manual step.
The audience is broader than researchers
Product designers (19%), product managers (13%), and Cursor users (32%) all showed up in real numbers. When research is accessible inside the tools they already use, the reach of every study expands.
Connect research to your AI tools
Great Question
Connect your research to the AI tools where decisions get made
This 2026 report is based on a survey of 113 product and research professionals across 90+ organisations, conducted by Great Question in April 2026. All data is self-reported.
