Current Challenges for Generative AI in Market Research: Part 1

While generative AI has already shown great promise in market research, we must be honest about its current limitations. Though extremely helpful to insights professionals in many use cases, these tools still pose challenges that we need to understand and overcome. This two-part series addresses those challenges and how AI companies like Qualibee.ai can work towards accurate, inclusive tools.

Volume of Data and Token Limits

Challenge

AI models such as OpenAI's GPT series, while adept at content creation, encounter bottlenecks when analyzing large datasets. This limitation manifests as a 'token limit' that constrains how much data can be processed in a single request. GPT-4, for example, is currently limited to 8,192 tokens, while GPT-4-32k tops out at 32,768.

As most qualitative research studies involve multiple transcripts from interviews, focus groups, or online discussion boards, generative AI models (currently) can't handle that volume of data in its raw form. Other technical approaches are needed to get the data into a format in which it can be analyzed.
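To make the constraint concrete, here is a rough, minimal sketch (not part of any production pipeline) that counts tokens across a few transcripts using OpenAI's open-source tiktoken tokenizer. The file names are placeholders; the point is simply that a handful of long transcripts will quickly blow past an 8,192-token window.

```python
# Rough illustration only: how quickly a qualitative dataset exhausts a context window.
import tiktoken

TOKEN_LIMIT = 8_192  # standard GPT-4 context window
enc = tiktoken.encoding_for_model("gpt-4")

# Placeholder file names for interview / focus-group transcripts.
transcripts = ["interview_01.txt", "interview_02.txt", "focus_group_01.txt"]

total_tokens = 0
for path in transcripts:
    with open(path, encoding="utf-8") as f:
        total_tokens += len(enc.encode(f.read()))

print(f"{total_tokens:,} tokens across {len(transcripts)} transcripts "
      f"(single-request limit: {TOKEN_LIMIT:,})")
```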

Solution

Large datasets could be processed in batches, summarized, and the summaries then compared against each other (a minimal sketch of this approach appears after the list below), but important insights could be lost if they make up only a small portion of a particular batch. Potential issues with this summarization method include the following.

  • Loss of Granularity: By summarizing or batching data, intricate details, which might be of profound significance, could be lost.

  • Potential Misinterpretations: Summaries might not capture the true essence or nuance of a dataset, leading to skewed interpretations when these summaries are compared.

  • Interlinked Insights: When data sources or documents reference each other, a segmented analysis can miss interconnected data points, leading to fragmentary or disjointed conclusions.
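As an illustration of where details can slip away, here is a minimal sketch of the batch-and-summarize approach. The `call_llm` helper is a hypothetical placeholder for any chat-completion API call, not any vendor's implementation.

```python
# Minimal sketch of batch-summarize-then-compare analysis.
# `call_llm` is a hypothetical placeholder for any chat-completion API call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

def summarize_in_batches(documents: list[str], batch_size: int = 3) -> str:
    # First pass: summarize each batch of transcripts independently.
    summaries = []
    for i in range(0, len(documents), batch_size):
        batch = "\n\n---\n\n".join(documents[i:i + batch_size])
        summaries.append(call_llm(f"Summarize the key findings in these transcripts:\n\n{batch}"))
    # Second pass: compare and merge the batch summaries. A theme raised by only
    # one or two respondents may already have been dropped before this step.
    return call_llm("Compare and combine these summaries into overall findings:\n\n"
                    + "\n\n".join(summaries))
```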

To overcome these limitations without missing key insights, Qualibee.ai takes a user's search prompt and loops it over every document in the set. The response is continually refined as more results are added, ensuring the final output takes the full context into account. By dealing with a single document at a time, Qualibee.ai ensures a more comprehensive analysis of large datasets without sacrificing crucial details.
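In spirit, this looping refinement looks something like the sketch below. This is an illustrative outline only, not Qualibee.ai's actual code, and it reuses the same hypothetical `call_llm` placeholder as above.

```python
# Illustrative sketch of iterative, document-by-document refinement.
# `call_llm` is again a hypothetical stand-in for any chat-completion API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

def refine_over_documents(question: str, documents: list[str]) -> str:
    """Visit one document at a time, carrying forward a running answer,
    so no single request needs to hold the entire dataset."""
    answer = ""
    for doc in documents:
        if not answer:
            prompt = (f"Question: {question}\n\n"
                      f"Transcript:\n{doc}\n\n"
                      "Answer the question using only this transcript.")
        else:
            prompt = (f"Question: {question}\n\n"
                      f"Current answer so far:\n{answer}\n\n"
                      f"New transcript:\n{doc}\n\n"
                      "Refine the current answer with any supporting or contradicting "
                      "evidence from this transcript, keeping details that still hold.")
        answer = call_llm(prompt)
    return answer
```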

 

The Multifaceted Challenge of Cultural Bias

Challenge

One of the foundational aspects of generative AI is the data it is trained on. This vast repository of information shapes subsequent interpretations and responses of the AI system. However, biases entrenched within this data, particularly those that exclude or misrepresent cultural nuances, pose significant challenges. These biases, whether they stem from historical data, skewed datasets, or inadvertent oversights, can have wide-reaching implications.

When AI lacks cultural insight from DEI data or relies on a biased training dataset, it may produce insights that don't represent our diverse reality. This myopia can lead to several adverse outcomes:

  • Misrepresentation of Populations: AI might end up underserving or entirely neglecting the needs, preferences, and behaviors of certain demographic groups.

  • Flawed Market Strategies: Decisions based on skewed insights can lead organizations astray, causing potential financial losses and reputational damages.

  • Perpetuation of Stereotypes: Relying on biased AI outputs can reinforce harmful stereotypes, further alienating marginalized communities.

Solution

Avoiding these biases requires deliberate, ongoing effort, and including DEI experts in the process is crucial. Several measures help:

  • Incorporate DEI Data: Proactively integrating diverse datasets that encompass varied cultural, gender, and regional insights is crucial. This ensures that the AI system is exposed to a broad spectrum of perspectives and realities.

  • Human-led Evaluations: Pairing AI-generated insights with human-led evaluations, especially from experts with a deep understanding of DEI considerations, can act as a corrective measure. These evaluations can identify and rectify biases, ensuring research findings are comprehensive and equitable. Results of these evaluations can also be applied to the models themselves to improve future output.

  • Ongoing Training and Refinement: AI should be viewed as a constantly evolving tool. Continual refinement, informed by diverse datasets and feedback loops, ensures that the system remains updated and is progressively purged of biases.

Data Quality and Respondent Selection

Challenge

When it comes to AI-powered interviews, the quality and depth of the insights derived are intrinsically linked to the precision of the inputs and the richness of respondent feedback. If respondents are primarily oriented towards short-form surveys, their answers may lack the granularity and depth required for useful research. The challenge is further exacerbated when the identity of respondents is difficult to verify: answers from people who aren't who they claim to be can make a study actively harmful to a company if the results lead to the wrong interpretation.

Solution

Prioritizing respondent recruitment from qualitative sample providers presents a multifaceted solution. First and foremost, these respondents are typically verified, ensuring a degree of authenticity and reliability in their feedback. Their familiarity with qualitative methodologies also means they are predisposed to providing longer, more detailed responses.

An interesting phenomenon currently being explored in healthcare is that some people perceive chatbots as non-judgmental, which leads them to express themselves more freely than they would with another person. More research is needed to see whether similar results can be replicated in AI-led interviews.

Incorporating these strategies can dramatically enhance the depth, authenticity, and relevance of AI-driven insights in market research.

Conclusion

As seen above, we cannot take the human out of the AI-driven process. We must constantly question and evaluate not only the AI models we use, but also ourselves! Remember that AI biases are ultimately rooted in data that we ourselves created.

Tune in next week for the next set of challenges and potential solutions!
