The six traits of a dysfunctional practice in 2025
I wish I could say this more gently, but after working with over 45 UX research teams—from scrappy startups to Fortune 500 giants—I've seen the same patterns repeat over and over. The warning signs are almost always there, hiding in plain sight.
Here's the thing: while we researchers obsess over perfecting our interview guides and usability test designs, most of us completely ignore the health of our own teams. It's like being a doctor who never gets a check-up. And right now, in 2025, the symptoms are getting harder to ignore.
Between AI panic, budget slashing, and the relentless push to "democratize research," many teams are cracking under pressure.
The dysfunction I'm seeing isn't just about less-than-ideal processes—it's about talented researchers burning out, insights getting buried, and entire departments becoming irrelevant.
I'm sharing these observations not to shame anyone (we've all been there), but because recognizing these patterns early can save your sanity, your career, and maybe your company’s entire research practice.
Walk into most UX research teams today and you'll find a wild mix: PhDs sitting next to bootcamp grads, seasoned researchers working alongside people who learned "on the job." This diversity can be a superpower—but without intentional mentorship, it becomes chaos.
The numbers tell the story: only 42% of UX researchers have formal training in a research field. Bootcamp graduates rate their own expertise significantly lower (3.17/5) than formally trained peers (3.90/5).
When I see teams where everyone's skill level is "all over the map"—as one frustrated manager described their 60-person team—it's usually because nobody's been assigned to actually teach.
Here's what this looks like in practice: junior researchers conducting interviews where they talk more than they listen (Erika Hall from Mule Design nails it in her book, Just Enough Research: "Conducting a good interview is actually about shutting up"). Senior researchers hoarding knowledge instead of sharing it. Analysis that varies wildly in quality depending on who did it.
The result? Teams become collections of isolated individual contributors rather than cohesive units. And on smaller teams, this problem is even tougher—solo researchers report the lowest satisfaction with training and support across the board.
Related read: The power of one: Success strategies for solo UX researchers
Nothing screams dysfunction louder than treating research like a factory assembly line. You know the type: same survey template every month, same usability test script, same shallow findings delivered to stakeholders who don't act on them anyway.
This happens when teams get trapped in the "democratization" promise—the idea that anyone can do research if you just give them the right template. But as Nielsen Norman Group points out, UX research value comes from insights and decisions, not just execution.
When you pipeline research like a to-do list, you erode the very trust you're trying to build.
I've watched teams spend months churning out surface-level reports while their competitors use remote testing, behavioral analytics, and yes, even AI tools to generate deeper insights faster. The average researcher now juggles 13 different tools, yet many teams cling to their "old faithful" methods like they're security blankets.
The impact? These checkbox approaches often mean cutting study length or skipping synthesis to save time—exactly the steps where the magic happens. “Pay attention to what users do, not what they say” seems almost like a mantra in UX research, but you can't observe behavior when you're stuck running endless quick polls.
Related read: What they do, not what they say: Smarter ways to test real user intent
Here's a dysfunction most people don't talk about: the barely concealed rivalry between UX research and other insights-generating teams like market research, data analytics, or business intelligence.
I watched this play out at a major bank where a friend works. Leadership deliberately pitted two UX research teams against each other, letting them compete on the same project brief. The "winner" got recognition and more budget; the "loser" faced bad performance reviews that left them more exposed to looming layoffs.
The result was predictable: hoarding information, duplicate work, territorial behavior, and insights that never connected into a bigger picture.
This isn't rare. When budgets tighten and layoffs are on the table, teams start fighting for survival instead of collaborating. UX researchers stop sharing findings with data teams. Market researchers refuse to loop in UX on customer studies. Everyone becomes protective of their slice of the pie, even when a bigger pie would benefit everyone.
Companies often reinforce this dynamic, either through deliberate competition or simple neglect, creating a toxic environment. They end up with multiple teams studying the same users in isolation, missing the forest for the trees.
Related read: Why it's time for research to move beyond UX
Many struggling teams aren't just dysfunctional—they're drowning. They lack the resources, integration, and support to do research well, but they're expected to deliver anyway.
The tool situation alone is maddening: researchers use an average of 13 different platforms, but without integration, they're managing a scattered mess of surveys, testing tools, repositories, and note-taking apps. Solo researchers report the lowest satisfaction with their tool stack, budgets, and leadership buy-in.
When you're firefighting with broken equipment, best practices become luxuries.
Only half of teams have dedicated Research Operations (ReOps) support, and the average ReOps person supports 21 researchers—a very challenging ratio for maintaining quality and consistency. Without ReOps, research tasks pile onto already busy UXRs and PMs, and critical work can slip through the cracks.
Add in the fact that over 50% of companies have decentralized or mixed research structures—some researchers in product teams, others in central groups—and you get chaos. Projects overlap, methods vary wildly, insights don't flow freely, and nobody owns the overall strategy.
Too many companies still treat UX research like expensive wallpaper—nice to have, easy to remove when money gets tight.
According to Optimal Workshop, only 16% of organizations have fully embedded UX research into their processes and culture. A staggering 56% don't measure research impact at all. When you don't have clear ROI metrics, you're at much higher risk of landing on the chopping block during downturns.
The 2023-24 layoffs proved this. UX researcher job postings fell 73% year-over-year. Half of UXRs surveyed were directly or indirectly affected by layoffs. Meanwhile, leadership demanded "faster, cheaper research" while simultaneously asking teams to "prove your value."
This creates a vicious cycle: under-resourced teams can't demonstrate impact, so they get fewer resources, so they can't demonstrate impact.
I've met researchers whose companies prioritize A/B tests over user interviews because "that's easier to quantify," missing the deeper insights that actually drive breakthrough products.
AI has injected new dysfunction into teams that were already struggling. Some teams slap "AI-powered research" on everything as a magic solution. Others ban AI tools entirely out of fear. Both approaches miss the point.
About 77% of UX researchers report using AI tools in at least some aspect of their work. But most proceed without a clear strategy.
The dysfunction shows up in two extremes: teams that think AI can replace human insight (spoiler: it can't), and teams that refuse any AI assistance, even for mundane tasks like transcription. Neither approach is healthy.
The smart path, as Nielsen Norman Group suggests, is using AI to enhance human capabilities while developing deeper UX skills. But dysfunctional teams lack that nuance—they either panic about their own obsolescence or chase shiny new features without understanding their limitations.
Whether you're managing a team or interviewing for a role, these questions will help you spot red flags and uncover dysfunction fast:
Who actually owns the research strategy here? Is there a ReOps person? → Remember: half of companies lack ReOps entirely. If you hear “everyone just pitches in,” that’s a concern.
How do new researchers get trained beyond "figuring it out"? Who teaches research techniques and analysis methods, if at all? → “On-the-job” learning is common, but ask what that looks like day-to-day.
How do UX research, market research, and data teams collaborate at this company? → If teams work in complete isolation or there's confusion about who does what type of research, that's a red flag. Good collaboration means clear boundaries but shared insights.
How do you involve non-research stakeholders? Are there guidelines or training for them? → Flippantly saying “anyone can do research” often means “no one is doing it well.” As one expert puts it, democratization demands intentional planning.
What research methods do you use, and how do you decide when to try new approaches? → Beware if “we just do surveys and quick tests” is the default.
How do your research tools work together? Can you easily connect insights across studies? What tools comprise your “tool stack,” and how do you manage the research repository? → Remember: 13 tools is the average, and you want them integrated, not a scattered mess.
How do you measure research success? Is there executive pressure for certain results or ROI? → Look for signals like “we need research to show a 2x ROI” – if that’s demanded in isolation, it can lead to biased studies.
What's your approach to AI in research? Are there training and ethical guidelines? → Are they providing time-saving tools (e.g., AI transcription) and training to use them ethically? Or are they panicking, or overhyping AI with no plan?
How is research funded during tight budget cycles? What support exists for solo researchers? → If they don’t have clear answers about budgets, KPIs, or outcomes, research may be an afterthought.
What does career growth look like for researchers here? Is there a growth path and if so, how are researchers supported to progress in their career? → If they struggle to answer, it could mean the team doesn’t invest in professional growth.
The healthiest teams I've worked with share common traits: they invest in mentorship, prioritize depth over checkbox research, collaborate across insights teams, resource their operations properly, measure their impact, and approach AI with a clear strategy.
They also recognize that some problems—like inter-departmental turf wars or company-wide budget cuts—can't be solved by research teams alone.
Sometimes the healthiest thing is acknowledging when a culture can't be fixed from within.
But here's what gives me hope: every dysfunctional team I've encountered had people who cared deeply about doing better work. Recognition is the first step. From there, you can start building the kind of research practice that actually delivers value—for users, for businesses, and for the researchers who pour their energy into it.
The question isn't whether your team has some of these issues (most do). The question is whether you're willing to address them before they become unfixable.
Johanna is a freelance Senior UX researcher and UX advisor, co-founder of UX consulting firm Jagow Speicher, and a researcher at heart. Working with diverse UX teams, she helps them mature, run impactful research, manage and optimise their UX practice, design powerful personalisation campaigns, and tackle change management challenges. Outside of work, she writes about all things UX Research, UX Management, and ResearchOps. Feel free to reach out by email or visit her website to learn more. 👋🏼