Sep 15, 2025
The rise of generative AI has transformed the business landscape. From auto-writing marketing copy to summarizing dense reports, tools like OpenAI’s ChatGPT and other large language models (LLMs) have made content generation faster and more scalable than ever.
So it’s no surprise that these tools are now being applied to consumer research. Why run a traditional survey when you can just ask a chatbot what your target audience might say?
Here’s the catch: AI-generated insights only work if the AI is built to simulate behavior, not just produce plausible language. And that’s where most companies are getting it wrong.
🤖 The strategic gap between fluency and fidelity
LLMs are brilliant linguists. Trained on billions of data points, they predict the next word in a sentence with eerie accuracy. But this means they’re built for fluency, not fidelity.
They’re designed to sound right, not to be right.
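A toy example makes the distinction concrete. The sketch below assumes the open-source Hugging Face transformers library and the small GPT-2 model, chosen purely for illustration; any chat-style model behaves the same way in principle.

```python
# Minimal sketch: a language model only samples plausible next words.
# Assumes the open-source Hugging Face `transformers` library and the
# small GPT-2 model, both chosen purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Asked whether they would pay more for the product, most consumers said"
candidates = generator(prompt, max_new_tokens=25, do_sample=True, num_return_sequences=3)

for c in candidates:
    print(c["generated_text"])
# Each run yields a different, fluent-sounding "finding", none of it
# grounded in a real population of consumers: fluency, not fidelity.
```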
In their 2024 paper in Business Horizons, Hannigan, McCarthy, and Spicer introduce the term “botshit”: AI-generated content that looks insightful but lacks empirical grounding. It’s not malicious. It’s just misleading, especially when used to guide pricing, product, or go-to-market decisions. The danger isn’t just bad data. It’s the illusion of certainty.

🧠 Real research is about behavior, not just words
Executives don’t want poetic answers. They want predictive ones. Will this audience switch brands? Will they pay more? Will they churn?
These are behavioral questions. And answering them requires:
Population diversity (e.g. income, age, geography)
Situational constraints (e.g. switching costs, brand trust)
Comparative logic (how real people evaluate choices)
LLMs can’t model that. They aren’t designed to simulate consumer behavior, only to mimic how someone might talk about it.
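To see what the alternative looks like, here is a minimal agent-based sketch of a brand-switching decision. It is an illustrative toy with invented parameters, not any vendor’s production model, but note how population diversity, switching costs, and an explicit comparison of options all enter the calculation.

```python
# Toy agent-based sketch of a brand-switching decision. All numbers are
# illustrative assumptions, not estimates from real data.
import math
import random

random.seed(42)  # fixed seed so the run is replicable

def simulate_switching(n_agents=10_000, rival_monthly_discount=5.0):
    switches = 0
    for _ in range(n_agents):
        # Population diversity: income and brand trust vary across agents.
        income = random.lognormvariate(10.5, 0.5)   # annual income, long-tailed
        brand_trust = random.uniform(0.0, 1.0)      # 0 = none, 1 = full trust

        # Situational constraints: a flat hassle cost plus a time cost
        # that scales with income.
        switching_cost = 20.0 + 0.0005 * income

        # Comparative logic: a year of savings weighed against the costs
        # of leaving a trusted brand.
        utility_switch = 12 * rival_monthly_discount - switching_cost - 40 * brand_trust
        utility_stay = 0.0

        # Logit rule: a noisy but systematic comparison of the two options.
        p_switch = 1 / (1 + math.exp(-(utility_switch - utility_stay) / 10))
        if random.random() < p_switch:
            switches += 1
    return switches / n_agents

print(f"Predicted switching rate: {simulate_switching():.1%}")
```

A prompted chatbot can describe switching behavior fluently, but it has no such population, constraints, or comparison anywhere inside it.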
✅ Choose the right AI for the right task
Botshit isn’t necessarily a flaw in the technology; it’s a mismatch between the design of the tool and the nature of the task.
AI can diagnose cancer.
AI can optimize supply chains.
AI can fly a drone across a war zone or land a rocket on a barge.
But none of that AI is ChatGPT.
And yet in market research, organizations are increasingly turning to ChatGPT-style tools to simulate consumers and inform high-stakes decisions, generating answers on the spot instead of running a traditional survey.
The problem? LLMs aren’t built to simulate behavior.
They’re built to generate language. That’s a crucial difference.
Using an LLM to model real-world consumer behavior is like using Google Translate to design your international product strategy: it may sound fluent, but it doesn’t capture how people actually think, choose, or act.

Instead of relying on a prompted ChatGPT, we build enterprise-grade AI panels that simulate actual consumer decision-making, not just language patterns. Our agents don’t improvise. They think. They compare. They make trade-offs. They behave.
These are not chatbots.
They’re explainable agents, built with memory, logic, and demographic variation, and they help our clients:
Test product-market fit before launch
Understand what messages resonate with which segments
Simulate price sensitivity, churn likelihood, and adoption intent
It’s not about replacing human respondents.
It’s about replacing the silence, the moments where research should happen but never does.
And unlike typical GPT-based tools, our results are auditable, replicable, and explainable, which is essential for regulated industries and executive decision-making.
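As a rough illustration of what “simulate price sensitivity” and “replicable” mean in practice, here is a toy, seeded simulation over invented demographic segments; every share and coefficient below is an assumption for illustration only.

```python
# Toy sketch: price sensitivity across demographic segments, with a fixed
# seed so every re-run gives the same answer (replicable and auditable).
# Segment shares and coefficients are invented for illustration.
import math
import random

SEGMENTS = {                # (population share, price sensitivity)
    "students": (0.25, 0.12),
    "families": (0.45, 0.07),
    "retirees": (0.30, 0.09),
}

def adoption_rate(price, reference_price=50.0, n_agents=5_000, seed=7):
    rng = random.Random(seed)   # seeded: identical results on every run
    names = list(SEGMENTS)
    shares = [SEGMENTS[n][0] for n in names]
    adopters = 0
    for _ in range(n_agents):
        segment = rng.choices(names, weights=shares)[0]
        sensitivity = SEGMENTS[segment][1]
        # Utility falls as price rises above the segment's reference point.
        utility = 0.5 - sensitivity * (price - reference_price)
        if rng.random() < 1 / (1 + math.exp(-utility)):
            adopters += 1
    return adopters / n_agents

for price in (40, 50, 60, 70):
    print(f"price ${price}: predicted adoption {adoption_rate(price):.1%}")
```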
There’s nothing wrong with using LLMs. We use them too: to check survey questions for fluency, write message variations, and brainstorm action steps based on survey results. But we don’t use them to run market research. Why? Because generating a fluent paragraph isn’t the same as modeling behavior.
Here’s how to choose the right AI for the right job:
Brainstorming early-stage messaging? → Use LLMs
Simulating how people choose between options? → Use agent-based models
Testing message impact? → Combine LLMs with behavioral simulation
Creating a quick internal summary? → Sure, LLMs work well
But never confuse plausibility with proof.
❗Don’t buy a prompt
Many new AI research tools look sleek. But behind the scenes? They’re just prompted GPTs: user-friendly wrappers that send your questions to ChatGPT and return the output without validation or control.
If it’s not grounded in a population, behavioral model, or segment structure, it’s not research. It’s improv.
📎 Read ESOMAR’s guidelines on buying AI tools:
https://esomar.org/20-questions-to-help-buyers-of-ai-based-services
📎 Read how Lakmoos complies with ESOMAR questions in plain language.