Schwartz Reisman Institute teams up with Canada School of Public Service to offer AI course to public servants

(Illustration of a brain made up of circuit-board connections by Andriy Onufriyenko/Getty Images)

The University of Toronto's Schwartz Reisman Institute for Technology and Society has partnered with the Canada School of Public Service to teach federal public servants about artificial intelligence, a technology transforming sectors ranging from health care to law. 

More than 1,000 Canadian public servants have signed up for the online course's events so far. The events include a mix of recorded lectures and moderated live panel discussions with scholars and industry leaders, designed to explain what AI is, where it's headed, and what public servants need to know about it. 

The eight-part series, "Artificial Intelligence is Here," launched in November 2021 and runs through May 2022, with sessions delivered virtually in both English and French. It was developed by Gillian Hadfield, director of the Schwartz Reisman Institute (SRI) and a professor in the Faculty of Law, and Peter Loewen, SRI's associate director, director of the Munk School of Global Affairs & Public Policy and a professor in the department of political science in the Faculty of Arts & Science.

In addition to Hadfield and Loewen, the roster of speakers includes Avi Goldfarb, an SRI faculty associate and professor of marketing at the Rotman School of Management; Phil Dawson, SRI policy lead; and Janice Stein, political science professor and founding director of the Munk School of Global Affairs & Public Policy.

Panel discussions feature academic and industry experts: Wendy Wong, SRI research lead and professor in the department of political science; Cary Coglianese of the University of Pennsylvania's law faculty; Daniel Ho, a law and political science professor at Stanford University; and Alex Scott, business development consultant at Borealis AI.

The need for new regulatory approaches

One of the key topics explored in the course is the need for new regulatory approaches to AI tools. 

"AI and machine learning are new technologies that are not like anything we've seen before," said Hadfield in the series' introductory session. "The forms of AI that are transforming everything right now are systems that write their own rules. It is not easy to see or understand why an AI system is doing what it is doing, and it is much more challenging to hold humans responsible... That's why figuring out how to regulate its uses in government, industry and civil society is such an important challenge."

Increased regulation is essential to deal with the potential negative consequences of AI, such as bias and a lack of transparency, Hadfield added. Since AI's impact ripples across society, the development of AI systems shouldn't be left to computer scientists alone, she said; policymakers should engage with AI and seek to understand it.

"If AI is going to help us solve real human problems, we need more AI built to the specs of the public sector," she said. "We'll need to get creative to make sure the AI we get is the AI we need."

The centrality of consent and judgement

Another major challenge to the use of AI in government is public acceptance. 

In the series' second lecture, Loewen identified four key obstacles to the implementation of automated decision-making systems in public services:

  • Citizens don't support a single set of justifications for the use of algorithms in government.
  • A status quo bias causes citizens to hold a skeptical view of innovation.
  • Humans judge the outcomes of algorithmic decisions more harshly than decisions made by other humans.
  • Apprehension towards the broader effects of automation, especially concerning issues of job security and economic prosperity, can generate increased opposition to AI.

Since consent is fundamental to effective government, Loewen said these obstacles must be factored in for AI to be implemented in ways that meet with public approval.

Later in the course, Loewen delved into concerns around automation replacing human labour, demonstrating a wide range of cases in which AI would not only help governments better serve the public, but do so without replacing human workers.

In some contexts, the application of automated systems could help governments expedite decisions that are delayed due to capacity issues, enabling organizations to serve more people with greater speed and consistency.

In other areas, the use of AI could enhance the work of public servants by distinguishing between cases in which a verdict can be easily reached and those that require more nuanced consideration.

"Isn't it a potentially better use of resources if we take those who would have previously interacted with every case, and re-deploy them to situations which require more judgement, or maybe just more empathy?" Loewen said.

What are the challenges of implementing AI in government?

The complexity of AI technologies and the extensive roles and responsibilities of government mean there are many challenges to consider when putting AI to use in the public sector: biased data inputs in machine learning models, concerns around data privacy and data governance, and questions regarding consent and procedural fairness, to name a few. 

Hadfield observed that, given the pace and scale of AI advancement, the sector will require innovative tools and systems that can assess, monitor and audit AI systems to ensure they are appropriately deployed, effective, fair, responsible and subject to sufficient democratic oversight.

These challenges may seem immense, but so are the potential benefits, she said, when considering the positive impact AI could have in improving economic and social policies.
