The 12× Problem: Why Teams Don’t Do Enough Research (And What Changes Now)

Most teams don’t lack research methods; they lack the capacity to test decisions. Across 20 companies, we found that teams need up to 12× more research than they actually conduct. This gap isn’t about discipline. It’s a structural problem. And with the rise of AI, it’s finally starting to shift.

Apr 10, 2026

Most product and UX teams don’t suffer from a lack of methods.
They suffer from a lack of capacity to test decisions.

Across 20 companies, from global brands to fast-moving product teams, we observed a consistent pattern:

Teams need roughly 12× more research than they actually conduct to do their jobs well.

This gap isn’t theoretical. It shapes how products are built, how risks are taken, and ultimately, how much guesswork organizations tolerate.

The problem is not that teams don’t believe in research.
The problem is that research does not scale with the speed of decision-making.

Research Isn’t Failing. The Workflow Is.

Most teams follow some version of this process:

  • Conduct interviews at the beginning

  • Build and iterate internally

  • Validate at the end

In theory, this is human-centered design.

In practice, it looks different.

  • Interviews are limited (often N=5)

  • Recruitment is imperfect (sometimes convenience sampling)

  • Testing happens late

  • Most assumptions remain unvalidated

The result?

Teams often test only ~30% of their key assumptions.

The rest is intuition, experience, or simply the pressure to move forward.

This is not a failure of discipline.
It is a structural bottleneck.

Why Teams Don’t Ask Users More Often

If research is so valuable, why don’t teams do more of it?

Because every interaction with a real user comes with friction:

  • Recruiting the right participants

  • Coordinating schedules

  • Managing incentives and logistics

  • Allocating internal capacity

  • Protecting sensitive or early-stage ideas

And most importantly:

Users have limited time, attention, and availability.

If someone is free on a Tuesday morning to join your test, they may not be the user you actually need.

Research, in its current form, is expensive to orchestrate.

So teams prioritize.

They choose which questions are “worth” asking.

And in doing so, they accept blind spots.

The Real Bottleneck: Availability of Testing

We often frame research as a question of quality.

Better methods. Better sampling. Better questions.

But in reality, the bigger issue is availability.

Teams are not limited by how well they can test.
They are limited by how often they can test.

And that changes everything.

Because decision-making doesn’t slow down to match research capacity.

It speeds up.

From Scarcity to Abundance

This is where AI changes the game, not by being “smarter,” but by being available.

AI does not solve research quality by default.
It solves something more fundamental:

It removes the bottleneck of access.

When testing becomes fast, cheap, and always available:

  • You don’t have to choose which assumptions to validate

  • You don’t delay decisions waiting for fieldwork

  • You don’t stop because recruitment is too complex

Instead, the question shifts:

From: Which hypotheses can we afford to test?
To: How many hypotheses can we test?

This is a structural shift in how research operates.

The Shift: From Research Projects to Decision Systems

Traditionally, research is organized into projects. It has a start, a budget, a timeline, and a deliverable. But when testing becomes continuously available, research changes form.

It becomes:

  • Embedded in daily decision-making

  • Distributed across teams

  • Iterative rather than episodic

This is what many refer to as continuous discovery. But it’s often misunderstood.

Continuous discovery is not about doing more interviews.
It’s about removing latency between decision and feedback.

What Happens When Testing Scales

When teams gain the ability to test more frequently, several things change:

1. More Assumptions Get Tested
Instead of validating a fraction of ideas, teams can explore a broader solution space.

2. Iteration Becomes Cheaper
Ideas can be refined early, before costly development.

3. Risk Moves Earlier
Uncertainty is addressed upfront, not discovered post-launch.

4. Human Research Becomes More Valuable
Paradoxically, when basic testing is offloaded, human interaction is used more intentionally—for depth, nuance, and empathy.

This Is Not the End of Human Research

A common concern is that AI will replace users. It won’t. But it will change the role of human interaction.

Instead of using people for:

  • Basic validation

  • Repetitive testing

  • Early filtering

Teams can focus human research on:

  • Complex behaviors

  • Emotional context

  • Co-creation

  • Strategic decisions

In other words:

AI expands research volume.
Humans increase research depth.

The Real Opportunity

The biggest opportunity is not faster research. It is better decision systems.

When testing is no longer scarce:

  • Teams rely less on opinion

  • Discussions shift from debate to evidence

  • Exploration becomes less risky

And most importantly:

Organizations can afford to test more decisions before committing to them.

Conclusion: The 12× Gap Will Close

The gap between needed and actual research, the “12× problem,” has existed for years. Not because teams didn’t care, but because the system made it impossible to close.

Now, that constraint is weakening. And as it does, the question is no longer:

Do we have enough research?

But rather:

Are we designing our workflows to take advantage of that abundance?

TL;DR

  • Teams need ~12× more research than they currently conduct

  • The bottleneck is not methodology, but availability

  • AI shifts research from scarce to abundant

  • This enables continuous discovery

  • The real impact is not better insights but better decisions at scale

Get in touch

Collect unlimited opinions from 4k/month

Got a question or idea? Let’s talk! Just drop us a message and we’ll get back to you shortly.


We make market research affordable.

Lakmoos answers surveys with data models instead of real people. We aim to replace 20% of traditional surveys with real-time insights by 2030, saving $30 Bn in research costs and 35 Bn hours of fieldwork globally each year.

Quick contact

Příkop 843/4

Brno 60200

VAT CZ19395108

Lakmoos AI s.r.o. 

Copyright © 2025 Lakmoos. All rights reserved.
