(Dis)Trust and AI: Perspectives from Across Disciplines and Sectors
26–28 October 2023
Concerns about trust, distrust, and artificial intelligence (AI) are on the rise as our societies become increasingly exposed to these technologies. This event will focus on pressing questions under the general theme of (dis)trust and AI, bringing together people working in different academic disciplines and sectors of society to collaborate in answering them. Among the disciplines represented will be philosophy, information and media studies, engineering, and law; among the sectors: clinical medicine, government, and the digital technology industry.
Here are examples of the kinds of questions we will address:
- If trust and reliance are distinct (as most philosophers believe they are), then who or what can be trusted in the context of AI development and deployment? Can AI systems be relied upon but perhaps not trusted? Does it make sense to say that AI itself, rather than the developers or users of it, can be trusted? How should ‘trust’ be understood here?
- Why does it matter, if at all, that we can trust in AI or in the developers and users of it? What value might trust have, as opposed to mere reliance?
- What are some of the dangers of misplaced trust in this context? Relatedly, what is the value of distrust?
- How could the deployment of AI diminish or enhance people’s trust in public institutions?
Our main objective will be to make progress on the questions we’ve identified in a way that is both transdisciplinary and meaningful to people who are AI regulators, developers, or users. A further objective will be to enrich public debate and literacy on AI by engaging the public on the theme of our event.
The event will take place over three days and will have three parts:
- October 26, Public Panel: a few of our experts will participate in a public evening panel that will be part of a London Public Library series in AI ethics, organized by Western’s Department of Philosophy and the Rotman Institute of Philosophy.
- October 27, Collaborative Roundtable: all participants will gather for a full day of discussion about predetermined questions concerning (dis)trust and AI. Participants whom we have designated as “speakers” will start off the discussion of each question or set of questions.
- October 28, Work-in-Progress Workshop: some participants will ‘workshop’ pre-circulated papers in AI ethics that are works in progress.
Although in-person attendance is by invitation only, the Friday Collaborative Roundtables will be streamed online via Zoom Webinar. Those attending virtually will be able to submit questions to the panelists via the Chat function. Please register in advance at the following link: https://westernuniversity.zoom.us/webinar/register/WN_ZCcVpowDQ1OlXVsdxG5P0g
After registering, a confirmation email will be sent to you containing information about joining the Webinar.
ORGANIZING COMMITTEE:
- Carolyn McLeod (Rotman Institute of Philosophy, Department of Philosophy, Western)
- Luke Stark (Rotman Institute of Philosophy, Faculty of Information and Media Studies (FIMS), Western)
- Jacquie Burkell (Rotman Institute of Philosophy, FIMS, Western)
- Emily Cichocki (Rotman Institute of Philosophy, PhD Candidate Philosophy, Western)
- Jason Millar (Engineering, CRC in Ethical Engineering of Robotics and AI, University of Ottawa)
Speakers for the Friday Collaborative Roundtables
- Denise Anthony: Department Chair and Professor of Health Management & Policy in the School of Public Health, University of Michigan. Her research focuses on how social dynamics of cooperation, trust, and privacy shape behavior related to information technologies, and the implications for data produced and shared via those technologies.
- Rob Arntfield: Associate Professor of Medicine, Director of Critical Care Ultrasound Program & Fellowship, Schulich School of Medicine & Dentistry, Western University. He initiated an AI and ultrasound program called “Deep Breathe” (deepbreathe.ai), which aims to unlock new possibilities for physicians who treat, and patients who suffer from, respiratory problems.
- Andrew Buzzell: Postdoctoral Researcher at the Rotman Institute of Philosophy at Western. His research concerns questions at the intersection of technology, ethics, and epistemology, and he has contributed to projects involving misinformation and disinformation in social media systems, large language models, big data and public health ethics, and the application of AI in human resources contexts.
- Alissa Centivany: Assistant Professor in the Faculty of Information and Media Studies at the University of Western Ontario. Her research focuses on technology policy, law, and ethics. She is co-founder and co-director of Tesserae: The Centre for Digital Justice, Community, and Democracy at Western University. She is part of a SSHRC-funded team studying the use of AI to address housing and homelessness (PI: Stark) and also co-organized the SSHRC-funded “Big Data at the Margins” series.
- Benjamin Chin-Yee: Physician in the Division of Hematology at Western University and PhD candidate and Gates Scholar in the History and Philosophy of Science at King’s College, Cambridge University. His current research focuses on the impact of big data, genomic medicine, and artificial intelligence in clinical medical practice.
- Anthony D’Amato: Chief Engineer, ADAS Component and Systems Engineering at Ford Motor Company. His research engineering roles at Ford have focused on autonomous vehicle systems and advanced driver assistance systems, including their safety and regulatory compliance.
- Jason D’Cruz: Associate Professor and Director of Undergraduate Studies in the Philosophy Department at the University at Albany, State University of New York. He is an expert in the philosophy of trust and distrust and is currently Principal Investigator for “Trustworthy AI from a User Perspective,” an interdisciplinary project focused on how AI systems make assessments of human trustworthiness and on how we ought to characterize ‘trustworthy AI.’
- Mark Daley: Special Advisor to the President on data strategy at Western University. He previously served as Western’s Associate Vice-President (Research). Daley holds joint faculty appointments in six departments and is a member of the Brain and Mind Institute and the Rotman Institute of Philosophy, an associate scientist at the Lawson Health Research Institute, and a faculty affiliate of the Vector Institute for Artificial Intelligence.
- Colleen Flood: Dean of Law at Queen’s University (as of July 1, 2023); previously, Professor of Law and University Research Chair in Health Law & Policy at the University of Ottawa. She has a CIHR-funded project that explores how AI used in healthcare may incorporate biases that adversely impact marginalized communities, and whether existing laws and regulations are sufficient to address these concerns.
- Trystan Goetze: Senior Lecturer in Ethics of Engineering at Cornell University, Director of the Sue G. and Harry E. Bovay Program in the History and Ethics of Professional Engineering, and an affiliate of the Sage School of Philosophy. They worked with Athabasca University and Ethically Aligned AI, Inc., to create a series of AI ethics micro-credential courses. Their current projects are on trust and Big Tech, epistemic blame, AI ethics, and computer science education.
- Jasmine Gunkel: Postdoctoral researcher in bioethics at the National Institutes of Health, and soon to be Assistant Professor in the Department of Philosophy at Western. Her research focuses on intimacy. She’s concerned with whether it’s possible to have intimate relationships with AI, and if it is, whether it’s then reasonable to trust AI.
- Min Kyung Lee: Assistant Professor in the School of Information at the University of Texas at Austin. She has conducted some of the first empirical studies examining the societal implications of AI use in the management and governance of society. Her expertise extends to developing theories, methods, and tools for human-centered AI and to deploying those tools through collaborations with real-world stakeholders and organizations.
- Susie Lindsay: Counsel at the Law Commission of Ontario where she leads the AI and the Civil Justice System project. She is principal author of the LCO’s report, “Accountable AI,” and she also runs the LCO’s joint initiative on AI and Human Rights with the Canadian Human Rights Commission. The latter involves the design of a human rights AI assessment tool for developers and administrators of AI systems.
- Soodeh Nikan: Assistant Professor in Electrical and Computer Engineering. Her research focuses on developing explainable artificial intelligence (AI) strategies for autonomous vehicles, with learning abilities that imitate human brain perception and reasoning. The goal of this research is to design robust, unbiased and industry-standardized AI-based platforms for intelligent and safe mobility which can be extended to other vision-based applications.
- Anabel Quan-Haase: Professor of Information and Media Studies and Sociology at the University of Western Ontario. Her interdisciplinary work and teaching focus on understanding the effects of technology on society. Her research projects examine how young people use instant messaging, Facebook, mobile phones, and other communication tools, and what the consequences are for their social relations, community, and social capital.
- Joanna Redden: Associate Professor in the Faculty of Information and Media Studies at Western University and co-director of the Data Justice Lab. Her research focuses on the datafication of governance. She investigates how government information systems and service provision are changing, the democratic implications of changing government practices, and how these transformations are affecting people. Her previous work has investigated the relationships between digital media and poverty politics.
- Heather Stewart: Assistant Professor in the Department of Philosophy at Oklahoma State University. Her research lies at the intersection of feminist philosophy, political philosophy, digital ethics, and bioethics with a specific interest in harmful speech phenomena, health disparities, and social dis/trust. She is also a research fellow for the OSU Humanities Initiative on issues in “Digital Humanities”.
- Vanessa Thomas: Senior Advisor of Technology Review with the National Security and Intelligence Review Agency’s Technology Directorate, Government of Canada. They conduct technical investigations of digital systems, draft reports that explain advanced and novel technical subjects, and assess the risks associated with various IT systems, including ones that employ automated decision making.
THURSDAY, OCTOBER 26
PUBLIC LIBRARY PANEL
London Central Public Library – Stevenson & Hunt Room
7:00-8:30pm—Can we enhance AI ‘trustworthiness’ through regulation?
Can AI’s design and use be regulated to make it more reliable or trustworthy—assuming AI can even be trustworthy, which is a subject of some philosophical debate? Debates about how and whether to regulate AI are intensifying right now. This public panel will focus on how AI could or should be controlled through regulation.
Moderator: Jason Millar
Presenters: Susie Lindsay, Vanessa Thomas, Anthony D’Amato
FRIDAY, OCTOBER 27
COLLABORATIVE ROUNDTABLES
Ivey Spencer Leadership Centre
8:30-9:00am—Welcome and Introductions
9:00-10:15—Can we trust AI?
Does it make sense to say that AI itself, rather than the developers or users of it, can be trusted? This roundtable will focus on whether it’s possible to trust in existing or prospective AI systems, and why it might matter one way or the other whether we use the term ‘trust’ in this context as opposed to the closely related term of ‘reliance.’
Moderator: Carolyn McLeod
Speakers: Ben Chin-Yee, Min Kyung Lee, Trystan Goetze
10:15-10:45—BREAK
10:45-12:00—What does reliable or ‘trustworthy’ AI look like?
How should we understand the reliability or ‘trustworthiness’ of AI? This roundtable will consider by what metrics or standards people (e.g., citizens or policymakers) should assess AI reliability or trustworthiness.
Moderator: Jacquie Burkell
Speakers: Andrew Buzzell, Alissa Centivany, Heather Stewart
12:00-1:15—LUNCH
1:15-2:30—What impact can AI have on (dis)trust in human relationships, especially patient-physician relationships?
This discussion will take us beyond the issue of whether AI itself can be trusted to explore how AI can mediate interpersonal trust relationships, including those between professionals and their clients or patients, on which AI systems can have a substantial impact. The focus here will be on examples from the medical realm.
Moderator: Dan Lizotte
Speakers: Denise Anthony, Rob Arntfield, Colleen Flood
2:30-3:45—Should we rely on AI assessments of human trustworthiness?
Predictive algorithms are increasingly being used by public institutions to help make decisions that are often life-changing for members of the public, including whether someone will get a bank loan or receive government services for homelessness. AI systems that guide institutional representatives in making such choices appear to be rating people’s trustworthiness. This roundtable will focus on this phenomenon.
Moderator: Emily Cichocki
Speakers: Jason D’Cruz, Joanna Redden, Mark Daley
3:45-4:15—BREAK
4:15-5:30—Should trust in AI be elicited through AI design?
Even if AI lacks the capacities that would make it genuinely trustworthy, developers could still make it seem trustworthy in order to make it more attractive to users. The goal here could be profit, but it could also be meeting human needs such as avoiding loneliness. This discussion will centre on the ethics of promoting intimacy with, and ultimately trust in, AI.
Moderator: Luke Stark
Speakers: Jasmine Gunkel, Anabel Quan-Haase, Soodeh Nikan
5:30-6:00—BREAK
6:00-7:00—COCKTAILS
7:00—DINNER
SATURDAY, OCTOBER 28
WORK-IN-PROGRESS WORKSHOP
Ivey Spencer Leadership Centre
9:00-10:00am—Concurrent Sessions
Mark Daley, “The unimagined preposterousness of counterfeit people”
Discussant: Michael Anderson
Luke Stark, Nathalie DiBerardino, Clair Baleshta, “Algorithmic Harms and Algorithmic Wrongs”
Discussant: Heather Stewart
10:00-10:15am—COFFEE BREAK
10:15-11:15am—Concurrent Sessions
Alissa Centivany, “Legal and Ethical Considerations for Generative AI”
Discussant: Emily Cichocki
Andrew Buzzell, “Shadow Ethics: The Promise of AI Mitigation of Bias in Hiring”
Discussant: Daniel Arauz Nunez
11:15-11:30am—BREAK
11:30am-12:30pm—Concurrent Sessions
Emily Cichocki, “Solidarity in the Digital Age”
Discussant: Trystan Goetze
Pinar Barlas, “A Critical Look at Critical Care Data: MIMIC-IV’s Construction, Contents, & Potential Consequences”
Discussant: Max Smith
12:30-1:30pm—LUNCH
1:30-2:30pm—Concurrent Sessions
Trystan Goetze, “AI Art is Theft”
Discussant: Alissa Centivany
Nathalie DiBerardino, “AI and Homelessness”
Discussant: Danica Pawlick Potts
2:30pm—Thanks and Closing
The Friday and Saturday of our event will be held at the Ivey Spencer Leadership Centre, whereas the Thursday evening public panel will take place at the London Public Library, Central Branch (251 Dundas Street, London, ON N6A 6H9) in the Stevenson & Hunt Room.
For attendees travelling from a distance that requires air travel:
Options for travel:
1) fly into Toronto Pearson Airport (YYZ) and then on to the London International Airport (YXU);
2) fly to Pearson and then take the UP Express train to Union station in Toronto and then VIA Rail to London;
3) fly to Pearson and then take the Robert Q shuttle from Pearson to London;
4) fly into Pearson, landing at roughly the same time as one or two other presenters, and then take a cab with them to London;
5) for those travelling from the U.S., fly into Detroit Metro Airport and take the Robert Q shuttle from Detroit to London. This is typically the cheapest and easiest option if you are travelling from the U.S.
We will reimburse cab fares as well as plane, shuttle, or train fares.
Local travel in London, ON:
- We will have taxis and/or a shuttle available to take attendees from the Ivey Spencer Leadership Centre to the London Public Library, Central Branch for the Thursday evening panel at 7pm.
Accommodations:
- We have reserved hotel rooms for out-of-town attendees at the Ivey Spencer Leadership Centre.
- Check-in is Thursday Oct. 26 at 3pm, and check-out is Sunday Oct. 29 at 11am.
Meals:
- Meals will be provided at the Ivey Spencer Leadership Centre.
- Breakfast: 8am on Friday, Saturday, and Sunday morning
- Lunch: 12pm on Friday and Saturday
- Dinner: 5pm on Thursday night; 6pm on Friday and Saturday night
- Refreshments and snacks will be available throughout the day, Friday and Saturday.
Local attendees (not travelling from out of town):
- Lunches will be provided on both Friday and Saturday at the Ivey Spencer Leadership Centre.
- We invite all attendees to join us for dinner on Friday evening at 6pm. This will also be hosted by Ivey Spencer.
- Refreshments and snacks will be available throughout the day, Friday and Saturday.