Aug 13, 2025
Large language models (LLMs) like GPT‑4, Claude, and Gemini are becoming the backbone of many AI systems. They can generate text, translate languages, simulate conversations. Increasingly, companies are exploring their use in consumer research.
But here’s a critical question:
Can LLMs actually simulate real human behavior across cultures, or are we just generating polished, Western-sounding text?
A new study points to the latter. And it underscores why Lakmoos does not rely on LLMs to simulate consumers, especially those from underrepresented or niche populations.
A 2024 study titled Cultural Fidelity in Large‑Language Models tested leading AI models across 21 countries, using 94 culturally sensitive questions based on the World Values Survey.
Key findings:
LLMs consistently reflected the values and communication styles of English-speaking, Western societies.
In countries with lower digital content availability, error rates were up to 5× higher.
Models aligned better in countries with more online data, confirming that LLMs learn from what’s abundant, not what’s diverse.
Bottom line:
Even when prompted in local languages, LLMs often simulate English-speaking logic dressed up in translation.
Why This Matters for Consumer Research
Responding to a customer in Germany is not the same as responding to one in Japan or Saudi Arabia. Same issue, different expectations, emotional cues, and norms.
LLMs are trained on massive web data. But most of this content comes from English-language websites and forums. So what they learn as "common sense" is actually Western internet logic. That’s fine for writing essays. But when you’re trying to simulate real consumer behavior, it creates invisible failures.
Most companies experimenting with AI for research are optimizing for speed, cost, and fluency. But when you’re doing real-world consumer work (especially with niche or international audiences), these are not enough.
Why? Because language isn’t the same as worldview.
For example:
An “apology + discount” approach might satisfy a customer in the UK.
In Japan, that same response could be seen as dismissive and offensive.
In the UAE, it may unintentionally signal pity instead of respect.
Same AI. Same language. Completely different outcomes.

Why Lakmoos Doesn’t Rely on LLMs for Simulating People
At Lakmoos, we simulate human behavior using synthetic respondents, but we don’t use LLMs to do it. Here’s why.
LLMs are trained to mimic text patterns, not people. They generate statistically likely responses based on the dominant content available online. That works well for general knowledge tasks. But when it comes to modeling how people think, decide, and behave, it falls short, especially for underrepresented or complex audiences.
Our approach uses neuro-symbolic AI: a hybrid method that combines logic rules, behavioral science, and high-quality data from real humans. Instead of guessing based on scraped text, we simulate decision logic, tuned to local norms, values, and mental models.
This means we can simulate:
A Gen Z renter in Warsaw thinking about energy tariffs.
A middle-aged B2B buyer in Riyadh evaluating a fleet offer.
A Nigerian professional considering switching mobile providers.
Not just the words they’d use, but the reasoning behind them (see the sketch below).
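To make the idea concrete, here is a minimal, purely illustrative sketch of how explicit decision logic can be combined with respondent data. The personas, rules, and weights are hypothetical placeholders, not Lakmoos’s actual models or data; the point is that the simulation branches on named, culture-specific norms rather than on whatever text is statistically likely.

```python
# Illustrative only: a toy "synthetic respondent" that combines explicit,
# culture-specific rules with simple numeric preferences. All personas,
# rules, and weights below are hypothetical placeholders, not real data.
from dataclasses import dataclass


@dataclass
class Persona:
    country: str                    # e.g. "JP", "UK", "AE"
    age_group: str                  # e.g. "gen_z", "middle_aged"
    # Hypothetical preference weight, as if calibrated from survey data
    price_sensitivity: float = 0.5  # 0 = indifferent, 1 = highly sensitive


# Symbolic layer: explicit rules about how a response style lands in a culture.
# Each rule maps (country, response_style) -> score adjustment and rationale.
CULTURAL_RULES = {
    ("UK", "apology_plus_discount"): (+0.3, "pragmatic fix is appreciated"),
    ("JP", "apology_plus_discount"): (-0.4, "discount can read as dismissive"),
    ("AE", "apology_plus_discount"): (-0.2, "may signal pity rather than respect"),
}


def simulate_reaction(persona: Persona, response_style: str) -> dict:
    """Score how a given response style is likely to be received.

    Combines a numeric baseline (the data-driven part) with symbolic
    cultural rules (the logic part). Returns a score in [0, 1] plus the
    rationale of the rule that fired, so the reasoning stays inspectable.
    """
    # Baseline driven by the persona's numeric preferences
    baseline = 0.5 + 0.2 * persona.price_sensitivity

    adjustment, rationale = CULTURAL_RULES.get(
        (persona.country, response_style), (0.0, "no specific rule fired")
    )
    score = min(1.0, max(0.0, baseline + adjustment))
    return {"satisfaction": round(score, 2), "why": rationale}


if __name__ == "__main__":
    for country in ("UK", "JP", "AE"):
        persona = Persona(country=country, age_group="middle_aged",
                          price_sensitivity=0.6)
        print(country, simulate_reaction(persona, "apology_plus_discount"))
```

Run on the “apology + discount” scenario above, the same response style produces different outcomes in each market because the norms are written down as rules that can be inspected and audited, which is exactly what a prompt to a general-purpose LLM cannot guarantee.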
Language ≠ Logic.
Fluency ≠ Fidelity.
LLMs are good at talking. But simulating trust, identity, pride, shame, and aspiration: that’s something else entirely.
If you want to understand what people really think, not just what sounds correct, you need to go deeper than a language model. You need to simulate the full context of the human mind: values, incentives, lived experience. This is what Lakmoos was built for.
Companies that treat LLMs as global substitutes for human understanding risk flattening human diversity into a single cultural norm. The future of research won’t come from asking one AI model the same question in 35 languages. It will come from tools that are aware of their own biases and designed to work across cultures, not around them.
At Lakmoos, we believe synthetic data should reflect the complexity of human life. That’s why we’ve built our systems for accuracy, transparency, and cultural fidelity from day one.