The Risks of Using Artificial Intelligence in Scientific Research: Illusions of Understanding and Monocultures of Knowing
Yale anthropologist warns of risks in using AI to enhance scientific research
Artificial intelligence (AI) has been lauded for its potential to revolutionize scientific research and boost productivity. However, a new paper co-authored by a Yale anthropologist warns of the risks that come with its widespread use in scientific inquiry.
The paper, published in Nature, highlights the potential for AI to narrow scientists’ perspectives, limit the questions they ask, and restrict the experiments they perform. This, in turn, could lead to what the authors describe as “illusions of understanding,” where researchers believe they comprehend the world better than they actually do.
Co-authored by Yale anthropologist Lisa Messeri and Princeton cognitive scientist M. J. Crockett, the paper outlines four archetypal visions of AI in scientific research: “AI as Oracle,” “AI as Surrogate,” “AI as Quant,” and “AI as Arbiter.” These applications span the research pipeline, from assisting with literature review and study design to analyzing data and evaluating finished studies for merit and replicability.
The authors caution against treating AI as a trusted partner in the production of scientific knowledge, emphasizing the importance of maintaining a diverse range of perspectives and approaches in research. They warn that relying too heavily on AI tools could lead to “monocultures of knowing,” where researchers prioritize questions and methods best suited to AI over other modes of inquiry.
Messeri and Crockett also emphasize the social dimensions of scientific knowledge production, stressing that the consequences of adopting AI extend well beyond any single laboratory. Diversity of scientific perspectives, they argue, is essential for robust and creative research, and substituting AI tools for those diverse standpoints could hinder progress in the field.
Overall, the paper calls for a thoughtful and critical approach to the use of AI in scientific research, urging scientists to weigh the risks and limitations of these tools rather than adopting them wholesale. By balancing AI’s capabilities with diverse human perspectives, researchers can ensure that scientific knowledge continues to advance in a meaningful and inclusive way.