⚡ Great Question · 2026 Report

How are product builders using MCPs in 2026?

How 113 product and research professionals are connecting their work to AI — and what they plan to do with it.

113
Respondents
90+
Companies
88%
Use Claude
78%
MCP-ready now

The place where work gets done has moved. Claude, ChatGPT, Gemini, Cursor: these are the spaces where PMs write specs, designers draft concepts, and researchers think through findings. Research teams are starting to bring their data into those spaces instead of keeping it locked in tools that only researchers open.

Teams aren't waiting to see if connecting research to AI is worth it. They're already doing it the hard way — downloading transcripts, copying quotes, pasting highlights into prompts one at a time. What they want is a direct connection.

Who we surveyed

113 product and research professionals across 90+ companies

Product managers, designers, and ops professionals made up more than half of respondents. Over 15% hold Director, VP, or Head-level titles — the people deciding how their teams work, not just experimenting on weekends.

Role breakdown — 113 respondents

UX & User Researchers
33%
Product Designers
19%
Product Managers
13%
Research & Design Ops
12%
Other
8%
Design Leaders
7%
Product Leaders
5%
Research Leaders
3%

AI tools in use

Claude, by a wide margin

Nearly 9 in 10 respondents use Claude as part of their regular workflow. This audience has already adopted AI. The question now is what research data they can bring into it.


LLM adoption — % using each tool regularly

Claude
88%
ChatGPT
43%
Gemini
41%
Cursor
32%
Copilot
8%

How many LLMs do you use regularly?

Just one
31%
Two
35%
Three
25%
Four or more
9%


Most teams use more than one tool

Only 31% of respondents stick to a single LLM. The rest use two or more, with the sweet spot at two to three tools.

Most common tool combinations

Combination — % of respondents

Claude only
27%
ChatGPT + Claude + Gemini
13%
ChatGPT + Claude + Gemini + Cursor
10%
Claude + Gemini
10%
ChatGPT + Claude + Cursor
7%
Claude + Cursor
7%

Claude appears in every top combination. But the variety tells you something: teams move between tools depending on the task. Research data that only works in one environment misses more than half the workflow.

Cursor is worth pausing on

One in three respondents uses Cursor, a tool built primarily for writing code. Its presence in a research-oriented survey tells you the line between “people who build” and “people who research” is getting blurry. Product designers use Cursor to prototype. PMs use it to explore data. And they want research context while they do it.

One senior UX researcher described pulling insights into Cursor to combine qualitative and quantitative research and build internal tools for her team. The research workflow and the build workflow are becoming the same workflow.

What teams want to do

We asked respondents to describe how they’d use a direct connection between their research repository and their AI tools. The open-ended responses fell into clear tiers.

What teams want to do with connected research

Pull insights into AI tools
62%
Cross-study querying
55%
Synthesize themes
30%
Automate workflows
19%
Video highlights
17%
Reports & deliverables
16%
Living knowledge base
15%
Product strategy
10%
Cross-functional access
8%


Tier 1 — Core demand
Pull insights into AI tools — 62%

The simplest, most common request: stop copy-pasting. Teams want to pull research directly into Claude without switching context. The research should come to where the work is happening.

"Today I'm downloading transcripts manually into Claude to summarize calls into main user problems. It would be so cool to simply query the whole set of past interviews."

Head of Product

Cross-study querying — 55%

Ask questions across everything a team has ever learned. Not the last study. All of it, searchable in natural language.

"We want to be able to chat with our research repository and ask questions across transcripts to surface cross-study findings."

Sr. Director, Product Design

Synthesize themes across studies — 30%

Rather than manually reviewing five studies to find patterns, teams want AI to surface them — especially for planning cycles and leadership readouts.

"The biggest use case is pulling research insights into a single repository to synthesize themes across studies and build artifacts and deliverables."

Director of UX Research

Tier 2 — High-value workflows
Automate research workflows — 19%

Cut the operational overhead of running studies: scheduling, screener setup, follow-ups, status tracking. More time for analysis, less for logistics.

"Automate customer discovery, transcription, scheduling and note synthesis and sharing."

Product Manager

Video highlights and transcripts — 17%

Pull the key moments from a 60-minute recording instead of scrubbing through the whole thing. Create highlight reels without the manual work.

A living knowledge base — 15%

Build an AI-powered research repository that anyone in the org can query, not just the researcher who ran the study.

"We are building a 'living insights ecosystem' to replace our static research repo — connecting all our research tools to Claude Code to have a central agent to train as our 'librarian.'"

ResearchOps Lead

Tier 3 — Where this is heading
Cross-functional access — 8%

Making research available to people who don't typically open research tools: sales, marketing, support, execs. Research as an org-wide resource, with guardrails.

"Democratizing research and making it easier for people across the company to integrate research findings into their regular work."

UX Research Manager

Day-to-day impact

What connected research looks like in practice

Before

A PM needs insights from a study run six months ago. They message the researcher. The researcher finds the report and pastes three quotes into Slack. The PM uses two. The rest stays buried.

After

The PM opens Claude and types: "What have we learned about how users find this feature?" Claude pulls highlights from three studies, cites each source, and flags a pattern they hadn't considered. Four minutes.

The research didn't change. Who could reach it did.

Prompts teams want to run
“What are the top reasons users drop off during onboarding?”
“Summarize everything we know about how enterprise customers evaluate tools like ours”
“What did participants say about [competitor] in our last three studies?”
“Compare what we learned in Q1 vs Q3 about [workflow]”
“Create a brief summarizing all research relevant to [upcoming initiative]”
Why now

MCP: the infrastructure making this possible

MCP (Model Context Protocol) is an open standard that lets AI tools query external data sources directly. Instead of exporting and pasting, the AI tool reaches into your research repository and pulls what it needs. Your data stays where it is. The AI comes to it.
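Under the hood, MCP is built on JSON-RPC 2.0: the AI client asks a server what tools it exposes (`tools/list`), then invokes one with structured arguments (`tools/call`). A minimal sketch of those message shapes, where the tool name `search_highlights` and its arguments are hypothetical examples rather than any real server's API:

```python
import json

# An MCP client first discovers a server's tools, then calls one.
# These dicts mirror the JSON-RPC 2.0 messages defined by the MCP spec;
# the tool name and arguments below are illustrative only.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_highlights",  # hypothetical repository tool
        "arguments": {"query": "onboarding drop-off", "limit": 5},
    },
}

# On the wire, each message is serialized as a single JSON object.
wire = json.dumps(call_request)
decoded = json.loads(wire)
assert decoded["method"] == "tools/call"
```

This is why "your data stays where it is": the repository only ever answers these scoped requests, rather than being exported wholesale into the AI tool.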

78%
of respondents said they can install MCP servers in their organization today.

Read

Studies, sessions, transcripts, participant data, highlights, insights, reels, full-text search across your entire repository

Write

Create screener surveys, moderated and unmoderated study types, add candidates to studies

Supported tools

Claude Desktop, Claude Code, ChatGPT, Cursor
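For desktop clients, registering a server is a small config change. A sketch of a Claude Desktop `claude_desktop_config.json` entry, assuming a hypothetical `great-question-mcp` package name (the real command, arguments, and environment variable will differ):

```json
{
  "mcpServers": {
    "great-question": {
      "command": "npx",
      "args": ["-y", "great-question-mcp"],
      "env": { "GQ_API_KEY": "<your-api-key>" }
    }
  }
}
```

Claude Desktop reads this file at startup; Cursor and other clients accept a similar `mcpServers` block in their own config files.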

Key takeaways

What we took away

The adoption question is settled

88% of respondents use Claude. 43% use ChatGPT. Most use multiple tools. These teams aren't evaluating whether to use AI — they're trying to feed it better inputs. The missing piece is a direct line from the research to the AI workspace.

The demand is specific

The top two use cases — pulling insights into AI (62%) and cross-study querying (55%) — describe a concrete gap teams are already working around manually. MCP removes the manual step.

The audience is broader than researchers

Product designers (19%), product managers (13%), and Cursor users (32%) all showed up in real numbers. When research is accessible inside the tools they already use, the reach of every study expands.

MCP

Connect research to your AI tools

Give Claude, ChatGPT, or Cursor direct access to your research repository. One connection to all your customer insights.
Query your entire research library from any MCP-compatible AI tool
Synthesize across studies, transcripts, and highlights in a single prompt
Pull findings into PRDs, design reviews, and Slack threads mid-conversation
PII redacted by default, with role-based permissions applied automatically

Great Question

Connect your research to the AI tools where decisions get made

This 2026 report is based on a survey of 113 product and research professionals across 90+ organizations, conducted by Great Question in April 2026. All data is self-reported.