U of T initiative encourages computer science students to incorporate ethics into their work

By Krystle Hewitt | April 26, 2024

Total enrolment in courses featuring Embedded Ethics Education Initiative modules exceeded 8,000 students this year

(photo by urbazon/Getty Images)

Computer science students at the University of Toronto are learning how to incorporate ethical considerations into the design and development of new technologies such as artificial intelligence with the help of a unique undergraduate initiative.
The Embedded Ethics Education Initiative (E3I, https://www.cs.toronto.edu/embedded-ethics/) aims to provide students with the ability to critically assess the societal impacts of the technologies they will be designing and developing throughout their careers. That includes grappling with issues such as AI safety, data privacy and misinformation.

Program co-creator Sheila McIlraith, a professor in the department of computer science in the Faculty of Arts & Science and an associate director at the Schwartz Reisman Institute for Technology and Society (SRI), says E3I aims to help students "recognize the broader ramifications of the technology they're developing on diverse stakeholders, and to avoid or mitigate any negative impact."

First launched in 2020 as a two-year pilot program, the initiative is a collaborative venture between the department of computer science and SRI, in association with the department of philosophy. It integrates ethics modules into select undergraduate computer science courses and has reached thousands of U of T students in this academic year alone.

Malaikah Hussain is one of the many U of T students who have benefited from the initiative. As a first-year student enrolled in CSC111: Foundations of Computer Science II, she participated in an E3I module that explored how a data structure she learned about in class forms the foundation of a contact tracing system, and how such a system raises ethical issues concerning data collection.

"The modules underlined how the software design choices we make extend beyond computing efficiency concerns to grave ethical concerns such as privacy," says Hussain, who is now a third-year computer science specialist.
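The article does not include the module's materials, but the kind of design trade-off Hussain describes can be sketched in a few lines. The toy contact-tracing store below (all names, fields and parameters are invented for illustration and are not drawn from CSC111) keeps a graph keyed by rotating pseudonyms and day-level timestamps, one of several possible choices that trade analytic detail against how much personal data is retained.

```python
# Illustrative sketch only (not the CSC111 module): a toy contact-tracing store
# showing how a data-structure design choice -- what to record per contact --
# becomes a privacy decision. All names and fields here are hypothetical.
import hashlib
from collections import defaultdict
from datetime import datetime

def daily_pseudonym(user_id: str, when: datetime) -> str:
    """Derive a rotating pseudonym so raw identities never enter the graph."""
    return hashlib.sha256(f"{user_id}:{when.date()}".encode()).hexdigest()[:16]

class ContactGraph:
    """Adjacency-list graph keyed by daily pseudonyms, storing only day-level times."""
    def __init__(self) -> None:
        self.adj = defaultdict(set)  # pseudonym -> set of (other_pseudonym, day)

    def record_contact(self, a: str, b: str, when: datetime) -> None:
        pa, pb = daily_pseudonym(a, when), daily_pseudonym(b, when)
        day = when.date().isoformat()  # coarse timestamp: the day, not exact time or place
        self.adj[pa].add((pb, day))
        self.adj[pb].add((pa, day))

    def contacts_of(self, user_id: str, when: datetime) -> set:
        """Look up who shared a recorded contact with this user on a given day."""
        return self.adj[daily_pseudonym(user_id, when)]

graph = ContactGraph()
graph.record_contact("alice", "bob", datetime(2024, 3, 1, 10, 30))
print(graph.contacts_of("alice", datetime(2024, 3, 1)))
```

Storing raw names, exact times and locations in the same structure would work just as well computationally, which is exactly why the decision is an ethical one rather than an efficiency one.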
Hussain adds that the modules propelled her interest in ethics and computing, leading her to pursue upper-year courses on the topic. During a subsequent internship, she organized an event about the ethics surrounding e-waste disposal and the company's technology life cycle.

"The E3I modules have been crucial in shaping my approach to my studies and work, emphasizing the importance of ethics in every aspect of computing," she says.

The program, which initially reached 400 students, has seen significant growth over the last four years. This academic year alone, total enrolment in computer science courses with E3I programming has exceeded 8,000 students. Another 1,500 students participated in E3I programming in courses outside computer science.

(Clockwise from top left: Steven Coyne, Diane Horton, David Liu and Sheila McIlraith. Supplied images)

In recognition of the program's impact on the undergraduate student learning experience, McIlraith and her colleagues Diane Horton and David Liu (a professor and an associate professor, teaching stream, respectively, in the department of computer science) and Steven Coyne (an assistant professor jointly appointed to the departments of computer science and philosophy) were recently recognized with the 2024 Northrop Frye Award (Team), one of the prestigious U of T Alumni Association Awards of Excellence.

Horton, who leads the initiative's assessment efforts, points to the team's recently published paper (https://dl.acm.org/doi/abs/10.1145/3626252.3630834) showing that after participating in modules in only one or two courses, students are inspired to learn more about ethics and benefit from that learning in the workplace.

"We have evidence that they are better able to identify ethical issues arising in their work, and that the modules help them navigate those issues," she says.

Horton adds that the findings build on earlier assessment work (https://dl.acm.org/doi/abs/10.1145/3478431.3499407) showing that after experiencing modules in only one course, students became more interested in ethics and tech, and more confident in their ability to deal with ethical issues they might encounter.

The team says the initiative's interdisciplinary nature is key to delivering both a curriculum and an experience with an authentic voice, giving instructors and students the vocabulary and depth of knowledge to engage on issues such as privacy, well-being and harm.

"As a philosopher and ethicist, I love teaching in a computer science department," says Coyne. "My colleagues teach me about interesting ethical problems that they've found in their class material, and I get to reciprocate by finding distinctions and ideas that illuminate those problems. And we learn a lot from each other, intellectually and pedagogically, when we design a module for that class together."

E3I is founded upon three key principles: teach students how, not what, to think; encourage ethics-informed design choices as a design principle; and make discussions safe, not personal.

"Engaging with students and making them feel safe, not proselytizing, inviting the students to participate is especially important," says Liu.

The modules support this type of learning environment by using stakeholders with fictional character profiles that include names, pictures and a backstory.
"Fictional stakeholders help add a layer of distance so students can think through the issues without having to say, 'This is what I think,'" Horton says. "Stakeholders also increase their awareness of the different kinds of people who might be impacted."

McIlraith adds that having students advocate for an opinion that is not necessarily their own encourages empathy, while Liu notes that many have a "real hunger" to learn about the ethical considerations of their work.

"An increasing number of students are thinking, 'I want to be trained as a computer scientist and I want to use my skills after graduation,' but also 'I want to do something that I think will make a positive impact on the world,'" he says.

Together, the E3I team works with course instructors to develop educational modules that tightly pair ethical concepts with course-specific technical material. In an applied software design course, for example, students learn about accessible software and disability theory; in a theoretical algorithms course, they learn about algorithmic fairness and distributive justice; and in a game design course, they learn about addiction and consent.

Steve Engels, a computer science professor, teaching stream, says integrating an ethics module about addiction into his fourth-year capstone course on video game design felt like a natural extension of his lecture topic on ludology (in particular, the psychological techniques used to make games compelling) instead of something that felt artificially inserted into the course.

"Project-based courses can sometimes compel students to focus primarily on the final product of the course, but this module provided an opportunity to pause and reflect on what they were doing and why," Engels says. "It forced them to confront their role in the important and current issue of gaming addiction, so they would be more aware of the ethical implications of their future work and thus be better equipped to handle it."

By next year, each undergraduate computer science student will encounter E3I modules in at least one or two courses every year throughout their program. The team is also exploring the adoption of the E3I model in other STEM disciplines, from ecology to statistics. Beyond U of T, the team plans to share its expertise with other Canadian universities that are interested in developing a similar program.
"This initiative is having a huge impact," McIlraith says. "You see it in the number of students we're reaching and in our assessment results. But it's more than that: we're instigating a culture change."


U of T researchers develop video camera that captures 'huge range of timescales'

By Krystle Hewitt | November 13, 2023
"Our work introduces a unique camera capable of capturing videos that can be replayed at speeds ranging from the standard 30 frames per second to hundreds of billions of frames per second"

(Researchers Sotiris Nousias and Mian Wei work on an experimental setup that uses a specialized camera and an imaging technique that timestamps individual particles of light to replay video across large timescales. Photo by Matt Hintsa)

Computational imaging researchers at the University of Toronto have built a camera that can capture everything from light bouncing off a mirror to a ball bouncing on a basketball court, all in a single take.

Dubbed by one researcher as a "microscope for time," the imaging technique could lead to improvements in everything from medical imaging to the LIDAR (Light Detection and Ranging) technologies used in mobile phones and self-driving cars.

"Our work introduces a unique camera capable of capturing videos that can be replayed at speeds ranging from the standard 30 frames per second to hundreds of billions of frames per second," says Sotiris Nousias, a post-doctoral researcher who is working with Kyros Kutulakos, a professor of computer science in the Faculty of Arts & Science.

"With this technology, you no longer need to predetermine the speed at which you want to capture the world."

The research by members of the Toronto Computational Imaging Group, including computer science PhD student Mian Wei, electrical and computer engineering alumnus Rahul Gulve and David Lindell, an assistant professor of computer science, was recently presented at the 2023 International Conference on Computer Vision, where it received one of two best paper awards.

"Our camera is fast enough to even let us see light moving through a scene," Wei says. "This type of slow and fast imaging, where we can capture video across such a huge range of timescales, has never been done before."

Wei compares the approach to combining the various video modes on a smartphone: slow motion, normal video and time lapse.

"In our case, our camera has just one recording mode that records all timescales simultaneously and then, afterwards, we can decide [what we want to view]," he says.
"We can see every single timescale because if something's moving too fast, we can zoom in to that timescale, and if something's moving too slow, we can zoom out and see that, too."

(Postdoctoral researcher Sotiris Nousias, PhD student Mian Wei, Assistant Professor David Lindell and Professor Kyros Kutulakos. Photos supplied)

While conventional high-speed cameras can record video at up to around one million frames per second without a dedicated light source, fast enough to capture videos of a speeding bullet, they are too slow to capture the movement of light.

The researchers say capturing an image much faster than a speeding bullet without a synchronized light source such as a strobe light or a laser creates a challenge because very little light is collected during such a short exposure period, and a significant amount of light is needed to form an image.

To overcome these issues, the research team used a special type of ultra-sensitive sensor called a free-running single-photon avalanche diode (SPAD). The sensor operates by time-stamping the arrival of individual photons (particles of light) with precision down to trillionths of a second. To recover a video, the researchers use a computational algorithm that analyzes when the photons arrive and estimates how much light is incident on the sensor at any given instant, regardless of whether that light came from room lights, sunlight or lasers operating nearby.

Reconstructing and playing back a video is then a matter of retrieving the light levels corresponding to each video frame.

(The researchers' "passive ultra-wideband imaging" approach uses timestamps marking the arrival of individual photons)

The researchers refer to the novel approach as "passive ultra-wideband imaging," which enables post-capture refocusing in time, from transient to everyday timescales.

"You don't need to know what happens in the scene, or what light sources are there. You can record information and you can refocus on whatever phenomena or whatever timescale you want," Nousias explains.
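The published reconstruction algorithm is more involved than this, but the core idea of post-capture timescale selection can be illustrated with a short NumPy sketch: keep the raw photon timestamps, then bin them into frames at whatever frame rate is chosen after the fact. The timestamps below are synthetic stand-ins for a single pixel, not the researchers' data, and simple counting stands in for their estimation procedure.

```python
# Minimal sketch (not the authors' algorithm): photon timestamps from one pixel
# can be re-binned into video frames at any frame rate after capture.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic photon arrival times over 1 second of capture, in seconds.
# A real free-running SPAD would supply these with picosecond-scale precision.
timestamps = np.sort(rng.uniform(0.0, 1.0, size=2_000_000))

def frames_from_timestamps(ts: np.ndarray, fps: float, duration: float) -> np.ndarray:
    """Estimate per-frame brightness by counting photons in each frame interval."""
    n_frames = int(np.ceil(duration * fps))
    edges = np.arange(n_frames + 1) / fps
    counts, _ = np.histogram(ts, bins=edges)
    return counts * fps  # photons per second: a crude estimate of incident flux

video_30fps = frames_from_timestamps(timestamps, fps=30, duration=1.0)
video_1kfps = frames_from_timestamps(timestamps, fps=1_000, duration=1.0)
print(video_30fps.shape, video_1kfps.shape)  # (30,) (1000,)

# The same stream could in principle be binned at billions of frames per second,
# but at such short exposures most bins hold zero photons, which is why the
# researchers' reconstruction has to do more than simple counting.
```

Re-binning one timestamp stream at different rates is what the article means by choosing the timescale after recording rather than before.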
Using an experimental setup that employed multiple external light sources and a spinning fan, the team demonstrated the method's ability to support post-capture timescale selection. In the demonstration, they used photon timestamp data captured by a free-running SPAD camera to play back video of the rapidly spinning fan at both 1,000 frames per second and 250 billion frames per second.

(Video: https://www.youtube.com/embed/StVUUAL7CxI)

The technology could have myriad applications.

"In biomedical imaging, you might want to be able to image across a huge range of timescales at which biological phenomena occur. For example, protein folding and binding happen across timescales from nanoseconds to milliseconds," says Lindell. "In other applications, like mechanical inspection, maybe you'd like to image an engine or a turbine for many minutes or hours and then, after collecting the data, zoom in to a timescale where an unexpected anomaly or failure occurs."

In the case of self-driving cars, each vehicle may use an active imaging system such as LIDAR that emits light pulses, which can interfere with other systems on the road. However, the researchers say their technology could "turn this problem on its head" by capturing and using ambient photons. For example, they say it might be possible to create universal light sources that any car, robot or smartphone can use without requiring the explicit synchronization needed by today's LIDAR systems.

Astronomy is another area that could see imaging advances, including in the study of phenomena such as fast radio bursts.

"Currently, there is a strong focus on pinpointing the optical counterparts of these fast radio bursts more precisely in their host galaxies. This is where the techniques developed by this group, particularly their innovative use of SPAD cameras, can be valuable," says Suresh Sivanandam, interim director of the Dunlap Institute for Astronomy & Astrophysics and associate professor in the David A. Dunlap department of astronomy and astrophysics.

The researchers note that while sensors capable of timestamping photons already exist (the emerging technology has been deployed in the iPhone's LIDAR and proximity sensor), no one has used photon timestamps in this way to enable this type of ultra-wideband, single-photon imaging.
"What we provide is a microscope for time," Kutulakos says. "So, with the camera you record everything that happened and then you can go in and observe the world at imperceptibly fast timescales.

"Such capability can open up a new understanding of nature and the world around us."


Researchers find similarities in the way both children and societies alter words' meanings

By Krystle Hewitt | August 22, 2023

"Our hypothesis is that these processes are fundamentally the same"

(photo by Steve Debenport/Getty Images)
An international team of researchers is using computer science to explore the knowledge foundation of word meaning in both child language development and the evolution of word meanings across languages.

Through a computational framework they developed, the researchers show how patterns of children's language innovation can be used to predict patterns of language evolution, and vice versa.

The interdisciplinary work, by University of Toronto computer science researcher Yang Xu and computational linguistics and cognitive science researchers from Universitat Pompeu Fabra and ICREA in Spain, was recently published in the journal Science (https://www.science.org/doi/10.1126/science.ade7981).

(Associate Professor Yang Xu. Photo by Matt Hintsa)

In the paper, the team investigates what is known as word meaning extension: the creative use of known words to express novel meanings.

The research aimed to examine how this type of human lexical creativity, observed in both children and adult language users, can be understood in a unified framework, says Xu, a senior author on the paper and an associate professor in the department of computer science in the Faculty of Arts & Science and the cognitive science program at University College.

"A common strategy of human lexical creativity is to use words we know to express something new, so that we can save the effort of creating new words. Our paper offers a unified view of the various processes of word meaning extension observed at different timescales, across populations and within individuals."

Word meaning extension is often observed in the historical change or evolution of language, Xu adds. For example, the word "mouse" in English originally meant a type of rodent, but now also refers to a computer device.

On the other hand, word meaning extension is also observed in children as early as two years of age. For example, children sometimes use the word "ball" to refer to "balloon," presumably because they haven't yet acquired the right word to describe the latter object, so they overextend a known word to express what they want to say.

"In this study, we ask whether processes of word meaning extension at two very different timescales, in language evolution, which takes place over hundreds or thousands of years, and in child language development, which typically occurs in the order of months or years, have something in common with each other," Xu says. "Our hypothesis is that these processes are fundamentally the same.

"We find, indeed, there's a unified foundation underlying these processes. There is a shared repertoire of knowledge types that underlies word meaning extension in both language evolution and language development."
To test their hypothesis and figure out what is common between the products of language learning and language evolution, the team built a computational model that takes pairs of meanings or concepts as input, such as "ball" versus "balloon," "door" versus "key" or "fire" versus "flame," and predicts how likely the two concepts are to be co-expressed under the same word.

In building the model, the researchers constructed a knowledge base that helps identify similarity relations between concepts, as they "believe it is the key that makes people relate meanings in word meaning extension," Xu says.

The knowledge base consists of four primary knowledge types grounded in human experience: visual perception, associative knowledge, taxonomic knowledge and affective knowledge. Pairs of concepts score high if they are measured to be similar in one or more of these knowledge types.

"The pair of concepts like 'ball' and 'balloon' would score high due to their visual similarity, whereas 'key' and 'door' would score high because they are thematically related or often co-occur together in daily scenarios," Xu explains. "On the contrary, for a pair of concepts such as 'water' and 'pencil,' they would have little similarity measured in any of the four knowledge types, so that pair would receive a low score. As a result, the model would predict they can't, or they are unlikely to, extend to each other."
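As a rough illustration of how such a scoring model could be wired together, the sketch below assigns each concept pair a similarity score per knowledge type and combines them into a single co-expression score. The similarity values and weights are invented for this sketch; the published model is learned from data rather than hand-set.

```python
# Toy sketch only: combine per-knowledge-type similarities into a co-expression score.
# The similarity values and weights are made up; the published model is fit to data.

KNOWLEDGE_TYPES = ["visual", "associative", "taxonomic", "affective"]

# Hypothetical similarity scores in [0, 1] for a few concept pairs.
SIMILARITY = {
    ("ball", "balloon"): {"visual": 0.9, "associative": 0.4, "taxonomic": 0.3, "affective": 0.5},
    ("key", "door"):     {"visual": 0.2, "associative": 0.9, "taxonomic": 0.1, "affective": 0.4},
    ("water", "pencil"): {"visual": 0.1, "associative": 0.1, "taxonomic": 0.0, "affective": 0.1},
}

WEIGHTS = {"visual": 1.0, "associative": 1.0, "taxonomic": 1.0, "affective": 1.0}

def coexpression_score(pair: tuple) -> float:
    """Weighted average of per-type similarities: higher = more likely to share a word."""
    sims = SIMILARITY[pair]
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[k] * sims[k] for k in KNOWLEDGE_TYPES) / total_weight

for pair in SIMILARITY:
    print(pair, round(coexpression_score(pair), 2))
# ('ball', 'balloon') and ('key', 'door') score well above ('water', 'pencil'),
# mirroring the behaviour the researchers describe.
```

The fixed weights above only show why "ball"/"balloon" and "key"/"door" come out ahead of "water"/"pencil"; how much each knowledge type actually matters is what the team's model estimates from children's and historical data.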
(Figure A: Researchers demonstrate examples of child overextension that are also found in language evolution. Figure B: The team developed a computational framework for investigating the possibility of a common foundation in lexical creativity. Images by Yang Xu)

Xu notes the team found that all four knowledge types contributed to word meaning extension, and that a model incorporating all of them tends to predict the data better than alternative models that rely on fewer or individual knowledge types.

"This finding tells us that word meaning extension relies on multifaceted and grounded knowledge based on people's perceptual, affective and common-sense knowledge," he says.

Built exclusively from children's word meaning extension data, the model can successfully predict word meaning extension patterns from language evolution; trained on language evolution data, it can also make predictions in the reverse direction on children's overextension.

"This cross-predictive analysis suggests that there are shared knowledge types between children's word meaning extension and the products of language evolution, despite the fact that they occur at very different timescales. These processes both rely on a common core knowledge foundation; together these findings help us understand word meaning extension in a unified way," Xu says.

Xu stresses that existing research on child overextension has typically been discussed in the context of developmental psychology, whereas word meaning extension in history is typically discussed in historical and computational linguistics, so this project aims to build a tighter connection between the two fields of research.

The researchers hope that additional computational modelling will shed light on other potential lines of inquiry, including the basic mechanisms at play in the historical evolution of word meanings and the emergence of word meanings in child development, as well as the origins of different semantic knowledge types and how they are acquired.


Researchers develop interactive 'Stargazer' camera robot that can help film tutorial videos

By Krystle Hewitt | May 19, 2023

(Research led by U of T computer science PhD candidate Jiannan Li explores how an interactive camera robot can assist instructors and others in making how-to videos. Photo by Matt Hintsa)
A group of computer scientists from the University of Toronto wants to make it easier to film how-to videos.

The team of researchers has developed Stargazer (http://www.dgp.toronto.edu/~jiannanli/stargazer/stargazer.html), an interactive camera robot that helps university instructors and other content creators produce engaging tutorial videos demonstrating physical skills.

For those without access to a cameraperson, Stargazer can capture dynamic instructional videos and address the constraints of working with static cameras.

"The robot is there to help humans, but not to replace humans," explains lead researcher Jiannan Li, a PhD candidate in U of T's department of computer science in the Faculty of Arts & Science.

"The instructors are here to teach. The robot's role is to help with filming: the heavy-lifting work."

The Stargazer work is outlined in a published paper (https://dl.acm.org/doi/abs/10.1145/3544548.3580896) presented this year at the Association for Computing Machinery Conference on Human Factors in Computing Systems, a leading international conference in human-computer interaction.

Li's co-authors include fellow members of U of T's Dynamic Graphics Project (dgp) lab: postdoctoral researcher Mauricio Sousa, PhD students Karthik Mahadevan and Bryan Wang, Professor Ravin Balakrishnan and Associate Professor Tovi Grossman; as well as Associate Professor Anthony Tang (cross-appointed with the Faculty of Information); recent U of T Faculty of Information graduates Paula Akemi Aoyagui and Nicole Yu; and third-year computer engineering student Angela Yang.

(A study participant uses the interactive camera robot Stargazer to record a how-to video on skateboard maintenance. Supplied photo)
Stargazer uses a single camera on a robot arm with seven independent motors that can move along with the video subject by autonomously tracking regions of interest. The system's camera behaviours can be adjusted based on subtle cues from instructors, such as body movements, gestures and speech, which are detected by the prototype's sensors.

The instructor's voice is recorded with a wireless microphone and sent to Microsoft Azure Speech-to-Text, a speech-recognition service. The transcribed text, along with a custom prompt, is then sent to GPT-3, a large language model that labels the instructor's intention for the camera, such as a standard versus high angle and normal versus tighter framing.

These camera control commands are cues naturally used by instructors to guide the attention of their audience and are not disruptive to instruction delivery, the researchers say.

For example, the instructor can have Stargazer adjust its view to look at each of the tools they will be using during a tutorial by pointing to each one, prompting the camera to pan around. The instructor can also say to viewers, "If you look at how I put 'A' into 'B' from the top," and Stargazer will respond by framing the action with a high angle to give the audience a better view.

In designing the interaction vocabulary, the team wanted to identify signals that are subtle and avoid the need for the instructor to communicate separately with the robot while speaking to their students or audience.

"The goal is to have the robot understand in real time what kind of shot the instructor wants," Li says. "The important part of this goal is that we want these vocabularies to be non-disruptive. It should feel like they fit into the tutorial."
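The article describes that voice-to-camera-command flow only at a high level; the sketch below is a heavily simplified stand-in for it. The keyword rules replace the speech-recognition service and the GPT-3 prompt the team actually used, so the function names, labels and matching logic are placeholders rather than Stargazer's implementation.

```python
# Simplified sketch of a speech-to-camera-command flow (placeholder logic, not
# Stargazer's implementation: the real system uses Azure Speech-to-Text and a
# GPT-3 prompt to label instructor intent).
from dataclasses import dataclass

@dataclass
class CameraCommand:
    angle: str    # "standard" or "high"
    framing: str  # "normal" or "tight"

def label_intent(transcript: str) -> CameraCommand:
    """Map an instructor utterance to a camera intention label."""
    text = transcript.lower()
    angle = "high" if "from the top" in text else "standard"
    framing = "tight" if "look closely" in text or "zoom in" in text else "normal"
    return CameraCommand(angle=angle, framing=framing)

def drive_camera(command: CameraCommand) -> None:
    """Stand-in for the robot-arm controller; just reports the requested shot."""
    print(f"Setting {command.angle} angle with {command.framing} framing")

# Example utterance from the article: the phrase cues a high-angle shot.
utterance = "If you look at how I put A into B from the top"
drive_camera(label_intent(utterance))
```

In the real system these labels come from the language model rather than keyword matching, which is what lets instructors keep speaking naturally to their audience instead of issuing explicit commands to the robot.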
Stargazer's abilities were put to the test in a study involving six instructors, each teaching a distinct skill to create dynamic tutorial videos.

Using the robot, they were able to produce videos demonstrating physical tasks on a diverse range of subjects, from skateboard maintenance to interactive sculpture-making and setting up virtual-reality headsets, while relying on the robot for subject tracking, camera framing and camera angle combinations.

The participants were each given a practice session and completed their tutorials within two takes. The researchers reported that all of the participants were able to create videos without needing any controls beyond those provided by the robotic camera and were satisfied with the quality of the videos produced.

While Stargazer's range of camera positions is sufficient for tabletop activities, the team is interested in exploring the potential of camera drones and robots on wheels to help with filming tasks in larger environments and from a wider variety of angles.

The researchers also found that some study participants attempted to trigger object shots by giving or showing objects to the camera, which were not among the cues Stargazer currently recognizes. Future research could investigate methods for detecting diverse and subtle intents by combining simultaneous signals from an instructor's gaze, posture and speech, which Li says is a long-term goal the team is making progress on.

(Video: https://www.youtube.com/embed/fQ9JeptOgZ0)

While the team presents Stargazer as an option for those who do not have access to professional film crews, the researchers acknowledge that the robotic camera prototype relies on an expensive robot arm and a suite of external sensors. Li notes, however, that the Stargazer concept is not necessarily limited by costly technology.

"I think there's a real market for robotic filming equipment, even at the consumer level. Stargazer is expanding that realm, but looking farther ahead with a bit more autonomy and a little bit more interaction. So realistically, it could be available to consumers," he says.

Li says the team is excited by the possibilities Stargazer presents for greater human-robot collaboration.

"For robots to work together with humans, the key is for robots to understand humans better. Here, we are looking at these vocabularies, these typically human communication behaviours," he explains.

"We hope to inspire others to look at understanding how humans communicate ... and how robots can pick that up and have the proper reaction, like assistive behaviours."