Project Description


RESEARCH AREAS:

  • Human-AI interaction

  • AI & Society

  • Behavioural game theory

CONTACT:

ATRISHA SARKAR

Assistant Professor,
Department of Electrical and Computer Engineering, Western University

Atrisha Sarkar is an Assistant Professor in the Department of Electrical and Computer Engineering at Western University. Her research focuses on multiagent systems, behavioural game theory, and the development of computational tools to ensure the safety of human-centric AI systems at both individual and societal levels. Her multidisciplinary research has addressed a broad spectrum of topics, including ensuring safety in human-AI interactions for autonomous vehicles and mitigating polarization on online social platforms. She was also part of the team that built one of the first self-driving cars on Canadian roads. Atrisha’s research has appeared in peer-reviewed journals and conferences spanning artificial intelligence (AAAI, NeurIPS), robotics (IROS, ICRA), software engineering (ASE, ESE, SoSym), and institutional and organizational economics workshops (SIOE). She holds a Ph.D. in Computer Science from the University of Waterloo. Prior to joining Western, she was a postdoctoral fellow at the Schwartz Reisman Institute for Technology and Society at the University of Toronto. Before beginning her graduate studies, she spent eight years in industry, primarily at IBM software labs.

Many Artificial Intelligence (AI) systems, from self-driving cars and social recommendation systems to LLM-based generative agents, are characterized by multiple humans and artificial agents interacting under conflicting incentives. These conflicting incentives require us to model how self-interested and potentially strategic behaviours correspond to desired or undesired outcomes in the larger system. Because these systems have a significant impact on both users and society at large, we need better computational tools to comprehensively analyze the social effects of these transformative technologies. To that end, my past and ongoing research focuses on three main aspects:

  • Impact of societal-scale AI systems: Many AI systems today have moved beyond their traditional function of improving economic productivity into the social realm. Large online platforms such as Facebook and Reddit, and social recommender systems such as AI-based news aggregation, can coordinate collective behaviour by strategically using information and shaping opinions. The absence of regulatory oversight of such powerful systems makes it challenging for practitioners to define a system’s requisite correctness criteria. For example, although the design of social recommender systems can have far-reaching consequences, it is hard to translate the social externalities they can create, such as increased polarization and erosion of democratic values, into safety requirements using traditional methods. The goal of my research is to move beyond disciplinary silos to, first, create the necessary tools and models for analyzing these adverse impacts using computational methods and, second, help design policy proposals for mitigating them.

  • AI-based simulation for social and economic decision-making: The advancement of large language model (LLM)-based generative agents has opened the door for behavioural scientists to study social behaviour by running experiments in simulated environments, where micro- and macro-level human behaviour can be simulated through LLM cognitive agents. Even though such simulations are in their nascency, there is increasing interest in homo silicus as a methodological alternative to homo economicus, which has been the dominant methodological tool for formal economic and social analysis. Just as the rational-choice foundations of homo economicus have limitations as a model of social interactions, the objective of this research is to investigate the promises and pitfalls of the homo silicus methodology for social and economic decision-making.

  • Human-agent interaction and individual safety: When AI-based decision-making agents, such as self-driving cars, engage in strategic reasoning with human drivers, they can misjudge the game and the equilibrium the human drivers follow, thereby creating a safety risk. Similar problems can arise in today’s advanced driver assistance systems, in which human driving becomes a collaborative activity between human and AI decision-making. My research draws on methods from behavioural economics and traffic psychology to analyze the safety aspects of such systems.
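The strategic-reasoning setting described above can be illustrated with a minimal level-k (cognitive hierarchy) sketch, in which a level-0 player acts randomly and a level-k player best-responds to a level-(k-1) opponent. All payoffs and action names here are hypothetical, chosen only for illustration; they are not taken from the publications listed on this page.

```python
import numpy as np

# Hypothetical payoffs for a two-player "merge" conflict (rows: this
# agent's action, columns: the other driver's action); actions = [go, yield].
# Both going (a crash) is heavily penalized; mutual yielding wastes time.
payoff = np.array([[-10.0, 2.0],   # agent goes:   crash / agent merges first
                   [  1.0, 0.0]])  # agent yields: other merges / deadlock

def level_k_strategy(k, payoff):
    """Return the mixed strategy of a level-k player over [go, yield].

    Level-0 randomizes uniformly; level-k best-responds to a level-(k-1)
    opponent (the game is symmetric, so both sides share the payoff matrix).
    """
    if k == 0:
        return np.array([0.5, 0.5])
    opponent = level_k_strategy(k - 1, payoff)
    expected = payoff @ opponent          # expected payoff of each action
    best = np.zeros(2)
    best[np.argmax(expected)] = 1.0       # pure best response
    return best

# A level-1 driver expects a random opponent, so yielding (0.5) beats
# going (-4.0); a level-2 driver expects yielders and therefore goes.
```

The safety concern in the text falls out of this sketch: a level-2 agent that wrongly assumes the other driver is level-1 (and will yield) chooses "go", which is exactly the misjudged-equilibrium risk described above.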

Economics + AI:
Sarkar, Atrisha, and Gillian K. Hadfield. “Rational Silence and False Polarization: How Viewpoint Organizations and Recommender Systems Distort the Expression of Public Opinion.” Under review at the Journal of Artificial Intelligence Research (JAIR); an earlier version was presented at the Society for Institutional and Organizational Economics (SIOE), 2023.

Multiagent Systems (AI):
(Working paper) Sarkar, A., Muresanu, A. I., Blair, C., Sharma, A., Trivedi, R. S., and Hadfield, G. K. (2024). “Normative Modules: A Generative Agent Architecture for Learning Norms that Supports Multi-Agent Cooperation” (Version 1). Foundation Models and Game Theory Workshop at EC 2024.

(Working paper) Rakshit Trivedi, Nikhil Chandak, Carter Blair, Atrisha Sarkar, Tehilla Weltman, Dylan Hadfield-Menell, and Gillian K. Hadfield. “Altared Environments: The Role of Normative Infrastructure in AI Alignment.” Agentic Markets Workshop at ICML 2024.

Sarkar, Atrisha, Kate Larson, and Krzysztof Czarnecki. “Revealed Multi-Objective Utility Aggregation in Human Driving.” arXiv preprint arXiv:2303.07435 (2023). International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023) (acceptance rate 23%, full paper).

Sarkar, Atrisha, Kate Larson, and Krzysztof Czarnecki. “Generalized dynamic cognitive hierarchy models for strategic driving behavior.” arXiv preprint arXiv:2109.09861 (2022). AAAI Conference on Artificial Intelligence (AAAI 2022) (acceptance rate 14.2%, main track).

Sarkar, Atrisha, and Krzysztof Czarnecki. “Solution Concepts in Hierarchical Games Under Bounded Rationality With Applications to Autonomous Driving.” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 6 (AAAI 2021) (acceptance rate: 21%).

Sarkar, Atrisha, Kate Larson, and Krzysztof Czarnecki. “A taxonomy of strategic human interactions in traffic conflicts.” arXiv preprint arXiv:2109.13367. NeurIPS 2021 Cooperative AI Workshop.

Robotics:
Kahn, Maximilian, Atrisha Sarkar, and Krzysztof Czarnecki. “I Know You Can’t See Me: Dynamic Occlusion-Aware Safety Validation of Strategic Planners for Autonomous Vehicles Using Hypergames.” arXiv preprint arXiv:2109.09807. IEEE International Conference on Robotics and Automation (ICRA 2022).

Sarkar, Atrisha, and Krzysztof Czarnecki. “A behavior driven approach for sampling rare event situations for autonomous vehicles.” 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019.

Sarkar, Atrisha, et al., “Trajectory prediction of traffic agents at urban intersections through learned interactions.” 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2017.

Marko Ilievski, Sean Sedwards, Ashish Gaurav, Aravind Balakrishnan, Atrisha Sarkar, Jaeyoung Lee, Frédéric Bouchard, Ryan De Iaco, and Krzysztof Czarnecki. “Design Space of Behaviour Planning for Autonomous Driving.” arXiv preprint arXiv:1908.07931 (2019).

Software Engineering:
Sarkar, Atrisha, et al., “Cost-efficient sampling for performance prediction of configurable systems (t).” 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2015. (acceptance rate: 19%)

Paulius Juodisius, Atrisha Sarkar, Raghava Rao Mukkamala, Michal Antkiewicz, Krzysztof Czarnecki, and Andrzej Wasowski. “Clafer: Lightweight Modeling of Structure, Behaviour, and Variability.” The Art, Science, and Engineering of Programming, 2019, vol. 3, issue 1, article 2.

Dina Zayan, Atrisha Sarkar, Michał Antkiewicz, Rita Suzana Pitangueira Maciel, and Krzysztof Czarnecki. “Example-driven modeling: On the effects of using examples on structural model comprehension, what makes them useful, and how to create them.” Software & Systems Modeling 18.3 (2019): 2213-2239.

Jianmei Guo, Dingyu Yang, Norbert Siegmund, Sven Apel, Atrisha Sarkar, Pavel Valov, Krzysztof Czarnecki, Andrzej Wasowski, and Huiqun Yu. “Data-efficient performance learning for configurable systems.” Empirical Software Engineering 23.3 (2018): 1826-1867.
