U of T experts tackle questions about AI safety, ethics during panel discussion
What does safe artificial intelligence look like? Could AI go rogue and pose an existential threat to humanity? How have media portrayals of AI influenced people’s perceptions of the technology’s benefits and risks?
These were among the pressing questions tackled by four experts at the University of Toronto and its partner institutions – in disciplines ranging from computer science to philosophy – during a recent panel discussion on AI safety.
Sheila McIlraith, a professor in U of T’s department of computer science in the Faculty of Arts & Science and a Canada CIFAR AI Chair at the Vector Institute, said the notion of AI safety evokes different things for different people.
“Computer scientists often think about safety-critical systems – the types of systems that we’ve built to send astronauts to the moon or control our nuclear power plants – but AI safety is actually quite different,” said McIlraith, an associate director at U of T’s Schwartz Reisman Institute for Technology and Society (SRI).
“For me personally, I have a higher bar, and I really think we should be building AI systems that promote human flourishing – that allow human beings to live with dignity and purpose, and to be valued contributors to society.”
The event, hosted by SRI in partnership with several partner organizations, invited McIlraith and her fellow panelists to discuss how AI technologies can be aligned with human values in an increasingly automated world.
They also discussed how risks surrounding the technology can be mitigated in different sectors.
Moderator Karina Vold, an assistant professor in the Institute for the History & Philosophy of Science & Technology in the Faculty of Arts & Science, noted that because AI systems operate “in a world filled with uncertainty and volatility, the challenge of building safe and reliable AI is not easy and mitigation strategies vary widely.”
She proceeded to ask the panel to share their thoughts on the portrayal of AI in popular culture.
“The media devotes more attention to different aspects of AI – the social, philosophical, maybe even psychological,” said Sedef Kocak, director of AI professional development at the Vector Institute.
“These narratives are important to help show the potential fears, as well as the positive potential of the technology.”
Roger Grosse, an associate professor in U of T’s department of computer science in the Faculty of Arts & Science and a founding member of the Vector Institute, said that safety concerns around AI are not merely rooted in science and pop culture, but also in philosophy.
“Many people think that the public’s concerns regarding AI risks come from sci-fi, but I think the early reasoning regarding AI risks actually has its roots in philosophy,” said Grosse, who also holds the Schwartz Reisman Chair in Technology and Society.
“If we’re trying to reason about AI systems that don’t yet exist, we don’t have the empirical information, and don’t yet know what their design would be, what we can do is come up with various thought experiments. For example, what if we designed an AI that has some specific role, and all of the actions that it takes are in service of the role?
“For the last decade, a lot of the reasons for being concerned about the long-term existential risks really came from this careful philosophical reasoning.”
The discussion also touched on the dangers of AI models misaligning themselves, how to guard against bias in the training of large language models, and how to ensure that AI models with potentially catastrophic capabilities are safeguarded.
“This [safeguarding] is an area where new research ideas and principles will be required to make the case,” said Grosse. “Developers saying, ‘Trust us’ is not sufficient. It’s not a good foundation for policy.”
While the discussion addressed the potential harms and risks of AI, the panelists also shared their optimism about how AI can be wielded for the greater good – with Grosse noting that AI offers the promise of making knowledge more widely accessible, and Kocak focusing on the myriad benefits for industries.
Watch the Sept. 10 conversation below: