{"id":3838,"date":"2025-12-16T10:26:30","date_gmt":"2025-12-16T01:26:30","guid":{"rendered":"https:\/\/baiforum.jp\/en\/?p=3838"},"modified":"2025-12-16T10:26:30","modified_gmt":"2025-12-16T01:26:30","slug":"en098","status":"publish","type":"post","link":"https:\/\/baiforum.jp\/en\/events\/en098\/","title":{"rendered":"The 5th BAIRAL Research Meeting for Fiscal Year 2025 <br> \u201cHow We Judge Actions Toward Robots: Testing the Ethical Asymmetry Hypothesis\u201d"},"content":{"rendered":"<p><strong>\u25c7<\/strong><strong>BAIRAL<\/strong><strong>\uff08<\/strong><strong>B\u2019AI RA League<\/strong><strong>\uff09<br \/>\n<\/strong>BAIRAL is a study group run by young research assistants (RAs) of the B\u2019AI Global Forum of the Institute for AI and Beyond at the University of Tokyo. Aiming to achieve gender equality and guarantee the rights of minorities in the AI era, this study group examines relationships between digital information technology and society. BAIRAL organizes research meetings every other month with guest speakers in a variety of fields.<\/p>\n<p><strong>\u25c7Date &amp; Venue<br \/>\n<\/strong>\u30fbDate: January 8, 2026 (Thursday) 13:00-14:30 JST<br aria-hidden=\"true\" \/>\u30fbLanguage: English<br aria-hidden=\"true\" \/>\u30fbFormat: Online &#8211; Zoom meeting (No registration required)<br aria-hidden=\"true\" \/><a class=\"c-link c-link--underline\" href=\"https:\/\/u-tokyo-ac-jp.zoom.us\/j\/86444691494?pwd=NPY81piLgdjd63F6fLT37omHwVMXYE.1\" target=\"_blank\" rel=\"noopener\" data-stringify-link=\"https:\/\/u-tokyo-ac-jp.zoom.us\/j\/86444691494?pwd=NPY81piLgdjd63F6fLT37omHwVMXYE.1\" data-sk=\"tooltip_parent\">https:\/\/u-tokyo-ac-jp.zoom.us\/j\/86444691494?pwd=NPY81piLgdjd63F6fLT37omHwVMXYE.1<\/a><br aria-hidden=\"true\" \/>Meeting ID: 864 4469 1494\/Passcode: 904391<\/p>\n<p><strong>\u25c7Guest Speaker<\/strong><br aria-hidden=\"true\" \/>Minyi Wang (PhD student, Human Interface Technology Lab, University of 
Canterbury)<\/p>\n<p><strong>\u25c7Abstract<\/strong><br aria-hidden=\"true\" \/>The Ethical Asymmetry Hypothesis in human\u2013robot interaction proposes that observers judge harmful actions toward robots as indicative of vice, whereas benevolent actions do not proportionally elevate attributions of virtue. Empirical evidence for this asymmetry remains limited, particularly within a virtue-ethical framework. This study provides an initial assessment by examining how graded moral behaviours toward artificial agents influence attributions of the four cardinal virtues.<\/p>\n<p>Forty text-based scenarios were constructed through three rounds of pretesting to represent ten calibrated levels of moral valence for each virtue: prudence, justice, temperance, and courage. A sample of 146 native-English-speaking adults was recruited via Prolific. Participants rated each scenario\u2019s moral valence and attributed virtue on the QCV scale (1\u201310). For each virtue, the ten observations were plotted in a two-dimensional morality\u2013virtue space and modelled through polynomial curve fitting to characterise the functional mapping between moral behaviour and perceived virtue.<\/p>\n<p>Across all four virtues, the fitted polynomial functions exhibited similar shapes and showed central symmetry, indicating no detectable asymmetry between responses to morally negative and morally positive behaviours.<br aria-hidden=\"true\" \/><br aria-hidden=\"true\" \/><strong>\u25c7Organizer<\/strong><br aria-hidden=\"true\" \/>B\u2019AI Global Forum, Institute for AI and Beyond at the University of Tokyo<\/p>\n<p><strong>\u25c7Inquiry<\/strong><br aria-hidden=\"true\" \/>Mao Yunfan (Research assistant of the B\u2019AI Global Forum)<br aria-hidden=\"true\" \/>maoyunfan0254[at]<a class=\"c-link c-link--underline\" href=\"http:\/\/g.ecc.u-tokyo.ac.jp\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-stringify-link=\"http:\/\/g.ecc.u-tokyo.ac.jp\" 
data-sk=\"tooltip_parent\">g.ecc.u-tokyo.ac.jp<\/a>\u00a0(Please change [at] to @)<\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u25c7BAIRAL\uff08B\u2019AI RA League\uff09 BAIRAL is a study group by<\/p>\n<div class=\"continue-reading-wrapper\"><a href=\"https:\/\/baiforum.jp\/en\/events\/en098\/\" class=\"continue-reading\">Continue Reading<i class=\"ion-ios-arrow-right\"><\/i><\/a><\/div>\n","protected":false},"author":7,"featured_media":0,"parent":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[2],"tags":[62,383,385,384],"class_list":["post-3838","post","type-post","status-publish","format-standard","hentry","category-events","tag-bairal","tag-ethicalasymmetry","tag-robotethics","tag-virtueethics"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/3838","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/comments?post=3838"}],"version-history":[{"count":2,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/3838\/revisions"}],"predecessor-version":[{"id":3840,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/3838\/revisions\/3840"}],"wp:attachment":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/media?parent=3838"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/categories?post=3838"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/tags?post=3838"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}