To integrate or not to integrate AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis
Artificial intelligence (AI) technologies promise to transform how professionals conduct knowledge work by augmenting their capabilities and professional judgment. However, we know little about how human-AI augmentation actually unfolds in practice. Such investigation is particularly important when professionals use AI tools to form judgments on critical decisions. We conducted an in-depth field study in a major US hospital where AI tools were used in three different radiology departments to form critical judgments: diagnosing breast cancer, lung cancer, and bone age. The study illustrates the hindering effects of opacity when using AI tools for making critical decisions and how professionals grapple with it in practice. In all three departments, professionals experienced a surge in uncertainty due to the opacity of the AI tools’ results, which often conflicted with their initial diagnoses yet provided no insight into the underlying reasoning or logic. Only in one of the three departments did professionals consistently incorporate AI results into their final judgments, achieving what we call engaged augmentation. These professionals developed and enacted AI interrogation practices that allowed them to make sense of and validate AI results, despite the AI-in-use opacity. The other two departments had not developed such practices and did not incorporate AI inputs, which we call un-engaged “augmentation”. Our study unpacks the challenges involved in augmenting professional judgment with powerful, yet opaque, technologies and contributes to the literatures on opacity in AI, the adoption of new technologies, and the production of knowledge.