When AI Can—and Can’t—Be Trusted in Market Research
In a SEED Global Education masterclass, UIC Business marketing professor Alexander Moore explores how large language models are reshaping market research and why human validation remains essential.
Artificial intelligence can now analyze thousands of survey responses, product reviews, and social media posts in seconds. But as these tools become more powerful, market researchers must ask a critical question:
When can AI be trusted to interpret human behavior?
That question was the focus of a recent SEED Global Education masterclass led by Alexander K. Moore, assistant professor of marketing at the University of Illinois Chicago College of Business Administration.
Moore’s research examines how people search for products and information online and how large language models can be used in social-science research. In the session, he discussed how these tools are reshaping the way market researchers analyze consumer data.
“The key trade-off is that AI simulations give you speed and low cost in exchange for lower confidence,” Moore noted. “It’s really good for early stages or low-stakes decisions, but often bad for high-stakes decisions unless you’re willing to validate.”
The virtual session, "Large Language Models in Market Research: Opportunities, Limitations, and Guidelines," drew 1,046 registrants and 527 live attendees from around the world, making it the largest SEED masterclass hosted by UIC Business to date.
Discovery vs. Measurement
A central theme of Moore’s presentation was the distinction between using AI for discovery versus measurement.
Discovery involves asking an AI model to simulate how a customer might react to a new product. Because the researcher is asking about something unknown, it is hard to tell, without checking against real customers, whether the output is a genuine insight or merely an "artifact" of the model.
Measurement tasks, such as coding data for sentiment or sincerity, are better suited for AI because the researcher already knows what they are looking for. These tasks allow for a rigorous “Sample and Test” process that leverages human judgment to verify AI accuracy:
- Sample: Researchers pull a small, random sample from a large dataset (e.g., 200 out of 50,000 social media posts).
- Test: Humans code that small sample while the AI codes the same set independently.
- Compare: If the AI matches human judgment (Moore reports agreement as high as 90%), researchers can apply the tool to the larger dataset with high confidence.
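The Sample-and-Test workflow above can be sketched in a few lines of Python. This is an illustrative outline only, not code from the masterclass; the function names, the 200-item sample size, and the 90% agreement threshold are taken from the examples in the text, and `ai_label` / `human_label` stand in for whatever coding procedure a team actually uses.

```python
import random

def sample_and_test(posts, ai_label, human_label, sample_size=200, threshold=0.9):
    """Validate an AI coder against human judgment on a small random
    sample before trusting it on the full dataset. All names here are
    illustrative placeholders."""
    # Sample: pull a small random subset of the large dataset
    # (e.g., 200 out of 50,000 social media posts).
    sample = random.sample(posts, min(sample_size, len(posts)))

    # Test: code the sample with the AI and with humans, independently.
    ai_codes = [ai_label(p) for p in sample]
    human_codes = [human_label(p) for p in sample]

    # Compare: simple percent agreement between the two sets of codes.
    matches = sum(a == h for a, h in zip(ai_codes, human_codes))
    agreement = matches / len(sample)

    if agreement >= threshold:
        # High agreement: apply the AI coder to the full dataset.
        return agreement, [ai_label(p) for p in posts]
    # Otherwise, stop and refine the model or fall back to human coding.
    return agreement, None
```

In practice teams often replace raw percent agreement with a chance-corrected statistic such as Cohen's kappa, but the gating logic, validate on a small sample before scaling up, is the same.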
Why Validation Matters
Even when AI performs well, Moore emphasized that researchers must carefully evaluate its outputs.
Because large language models generate probabilistic responses, results can vary slightly across runs. To ensure accuracy, researchers often compare AI-generated classifications with human-coded benchmarks and test models using separate development and validation datasets.
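One simple, hedged way to handle the run-to-run variation described above is to classify each item several times and take a majority vote. This sketch assumes a generic `classify` callable standing in for any model call; it is not a method prescribed in the session, just one common stabilization tactic.

```python
from collections import Counter

def stable_label(classify, text, runs=5):
    """Dampen run-to-run variation in a probabilistic classifier by
    majority vote across repeated runs. `classify` is a placeholder
    for any function that maps text to a label."""
    votes = Counter(classify(text) for _ in range(runs))
    label, count = votes.most_common(1)[0]
    # Return the winning label and its vote share as a rough
    # confidence signal; low shares flag items for human review.
    return label, count / runs
```

Items whose vote share falls below an agreed cutoff can be routed to the human-coded benchmark process rather than accepted automatically.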
These steps help ensure that AI-assisted analysis remains transparent and scientifically reliable.
Researchers must also consider broader issues that accompany the use of AI tools. Replicability can be difficult when models change over time, and sensitive consumer data must be handled carefully when using external AI systems.
Global Reach
The masterclass drew a global audience interested in the intersection of artificial intelligence and business research, particularly how tools such as large language models are reshaping research methods across disciplines.
The strong international participation reflects the global reach of the SEED Global Education platform. Approximately 46 percent of registrants identified as female.
The masterclass was part of an ongoing collaboration between SEED Global Education and the University of Illinois Chicago College of Business Administration that introduces prospective students to UIC Business faculty and emerging topics in business and technology.
Moore emphasized that while AI tools can dramatically accelerate research, researchers and practitioners alike must still apply careful validation and human judgment when interpreting results.
Watch the full lecture on YouTube.