Rotman 2025 Think Tank Award Recipients

Spring 2025 marks a new chapter for the Rotman Institute of Philosophy as we continue to champion bold, interdisciplinary research through the Rotman Interdisciplinary Think Tank Award Program. Now in its second year, the program invites core and associate faculty to propose visionary projects that address complex research challenges aligned with the Institute’s evolving strategic priorities.

Building on the momentum of its inaugural launch, the program continues to support collaborative initiatives that: 

  • Advance thematic research across disciplines
  • Foster cross-sector engagement and innovation
  • Generate transformative knowledge and real-world impact
  • Catalyze strategic outcomes and major grant opportunities

Think Tank projects are designed to elevate research visibility, strengthen interdisciplinary networks, and position teams for long-term success. In 2025, the Institute reaffirms its commitment to four key areas of focus:

  • Knowledge in the 21st Century
  • Societal Impact of Emerging Technologies
  • Issues in Public Health
  • Foundational Questions in Physics and the Human Brain Sciences

Without further ado, our 2025 Think Tank Award recipients are…

Pauline Barmby, Rob Corless, Carolyn McLeod, & Luke Stark

Our congratulations are extended to their teams: Pauline will be working with Sarah Gallagher, Denis Vida, Valerie Oosterveld, and Eric Desjardins; Rob will be working with Bill Turkell; Carolyn will be working with Jacquelyn Burkell, Joanna Redden, Clair Baleshta, Corinne Lajoie, and Andrew Richmond; and Luke will be working with Benjamin Chin-Yee.

Read below to learn more about their research proposals and the exciting work planned over the next year!

Humans have been launching satellites into space for nearly 70 years, for purposes including communications, defense, science, and navigation. The number of satellites in orbit has grown dramatically in the last few years, mostly due to large groups known as satellite constellations; Starlink is one example. Satellite constellations promise substantial benefits, such as communications for remote areas and remote monitoring of the Earth. But they also threaten substantial harms: polluting the Earth's environment with greenhouse gases and other substances released during launches and atmospheric re-entries, interfering with ground-based astronomical observations, and even preventing all future space launches through the production of orbiting space debris. Governments, companies, international scientific organizations, individuals, and future societies are all affected by satellite constellations. In this project, the team brings together experts in environmental philosophy, astronomical observation, space technology, and space law to examine the promise and problems of satellite constellations and to identify possible areas for policy recommendations to governments.
The history of computer algebra and symbolic computation is one key to understanding how computational tools extend human cognition. Early developments were explicitly connected to artificial intelligence, with a strong emphasis on the idea of hybrid human-machine thinking. This perspective suggests that true understanding is linked to the ability to teach—and, crucially, to teach machines. Studying the history of these tools is not just about their function but about what their development reveals about the limits of computation and, by extension, hybrid thinking itself. Computer algebra has enabled conceptual breakthroughs that might otherwise have been unimaginable, such as proving undecidability. Its role in learning has been equally critical—more powerful tools expanded intellectual possibilities, but only as education evolved alongside them. However, it remains uncertain whether the hard-won insights from decades of work in computer algebra have been widely recognized or have achieved cultural stability. With large language models reshaping computation, studying this history is urgent to ensure that past lessons continue to inform the future of human-computer interaction and mathematical discovery in the twenty-first century.
With Artificial Intelligence (AI) comes great opportunity but also great risk. It can be used in ways that enhance information systems, efficiency, and working practices. However, it is also accompanied by substantial risks and uncertainties, including concerns about data privacy, equity and bias, accessibility, and governance. Public school boards are currently interested in making use of emerging AI technologies to help cope with complex student needs and seriously limited budgets. This Think Tank project collaborates with school boards, such as the Thames Valley District School Board (TVDSB), to develop research-informed guidance on decision making about if, where, and how AI can be ethically and equitably integrated into (secondary) public education. Issues the team explores include the normalization of dependence on AI tools, misinformation about their strengths and limitations, the potential harms that these systems can produce, and social justice considerations regarding belonging and alienation in educational environments that are increasingly impacted by AI and related technologies.
Medicine is a particularly salient space in which to assess the applied ethics and norms of AI tools as they are developed and used in practice. This Think Tank project investigates how the norms and practices of evidence-based medicine (EBM) can inform the design, development, and deployment of AI systems, both in healthcare and more broadly. The team examines this discursive and practical genealogy to identify ways in which changes to EBM might support and model changes in the good-faith application of machine learning (ML) tools. This work contributes to the broader literature on algorithmic/AI governance, to studies of how human values are expressed through socio-technical systems, and to the emerging literature on pedagogy around AI in medical contexts.