EBM as a Model for AI in Practice
2025-2026
Rapid advances in machine learning (ML) and other artificial intelligence (AI) methods used for automated decision-making are increasingly being applied in health care contexts. As such, medicine is a particularly salient space to assess the applied ethics and norms of AI tools as developed and used in practice.
This project investigates how the norms and practices of evidence-based medicine (EBM) can inform the design, development, and deployment of AI systems, both in healthcare and more broadly. The paradigm of EBM has elicited considerable controversy and revision since its development in the late 1980s. We examine this discursive and practical genealogy to explore whether EBM is an appropriate analogue to contemporary ML technologies, since EBM likewise involves the technologically mediated and socially complex application of epistemologically distinct forms of evidence to real-world, often life-or-death judgments.
Our aim is to identify how the discourse around EBM can inform more responsible, good-faith applications of ML tools. The project explores fundamental questions about epistemology, evidence, and judgment, asking how decision-making can and should be informed by knowledge in the 21st century, and it directly confronts the societal impacts of emerging technologies such as AI, in medicine and beyond.
Faculty Members:
- PI: Luke Stark, Assistant Professor, Faculty of Information and Media Studies
- Benjamin Chin-Yee, Hematologist, London Health Sciences Centre; Assistant Professor, Department of Pathology and Laboratory Medicine, Schulich School of Medicine & Dentistry; Department of Philosophy, Faculty of Arts & Humanities
Trainees:
- Daniel Arauz Nunez, PhD Student, Faculty of Information and Media Studies