{"id":3739,"date":"2025-08-07T17:08:11","date_gmt":"2025-08-07T08:08:11","guid":{"rendered":"https:\/\/baiforum.jp\/en\/?p=3739"},"modified":"2025-08-14T12:47:44","modified_gmt":"2025-08-14T03:47:44","slug":"en094","status":"publish","type":"post","link":"https:\/\/baiforum.jp\/en\/events\/en094\/","title":{"rendered":"The 3rd BAIRAL Research Meeting for Fiscal Year 2025 <br> \u201cReconsidering the Origin of AI in \u2018Ethics by Design\u2019\u201d"},"content":{"rendered":"<p><strong>\u25c7<\/strong><strong>BAIRAL<\/strong><strong>\uff08<\/strong><strong>B\u2019AI RA League<\/strong><strong>\uff09<br \/>\n<\/strong>BAIRAL is a study group by young research assistants (RA) of the B\u2019AI Global Forum of the Institute for AI and Beyond at the University of Tokyo. Aiming to achieve gender equality and a guarantee of rights for minorities in the AI era, this study group examines relationships between digital information technology and society. BAIRAL organizes research meetings every other month with guest speakers in a variety of fields.<\/p>\n<p><strong>\u25c7Date &amp; Venue<br \/>\n<\/strong><span style=\"font-weight: 400;\">\u30fbDate: Tuesday, September 2, 2025, 5:00\u20136:30 pm (Japan Time) \/ 9:00-10:30 am (Nigeria Time)<br \/>\n<\/span><span style=\"font-weight: 400;\">\u30fbVenue: Online via Zoom\uff08No registration required\uff09<br \/>\n<\/span><a href=\"https:\/\/u-tokyo-ac-jp.zoom.us\/j\/4062004262?omn=87365613579\"><span style=\"font-weight: 400;\">https:\/\/u-tokyo-ac-jp.zoom.us\/j\/4062004262?omn=87365613579<\/span><\/a><span style=\"font-weight: 400;\"><br \/>\n<\/span><span style=\"font-weight: 400;\">Meeting ID: 406 200 4262<br \/>\n<\/span><span style=\"font-weight: 400;\">\u30fbLanguage: English<\/span><\/p>\n<p><strong>\u25c7Guest Speaker<br \/>\n<\/strong><span style=\"font-weight: 400;\">Dr. H. 
Titilola OLOJEDE, Assistant Professor and Head of the Department of Philosophy at the National Open University of Nigeria<\/span><\/p>\n<p><strong>\u25c7Abstract<br \/>\n<\/strong><span style=\"font-weight: 400;\">Since the launch of ChatGPT, a form of generative artificial intelligence (GenAI), in November 2022, artificial intelligence (AI) has become a central topic, with its influence permeating many sectors, including education, healthcare, industry, and agriculture. There is hardly any endeavour in which its impact is not felt. Against this background, academics, inter-governmental entities, and private companies have put forward over 300 proposals on the ethics of AI, that is, on how its benefits might be harnessed for the common good. Among these numerous proposals, however, one hardly finds any that consider AI\u2019s historical roots and values from non-Western societies; most, if not all, are Western-centric, in line with the popular \u2018origin\u2019 of AI at the 1956 Dartmouth conference.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">However, are there ideas that are precursors to AI? How might these ideas shape the \u2018ethics by design\u2019 of AI? The speaker presents her thoughts on decolonising AI, discussing the early history of AI from Asian and African perspectives. Drawing on the perennial issues of bias, stereotypes, and exclusion, and using insights from Ubuntu philosophy, she examines the intersection of gender and technology, with attention to other historically marginalised groups and regions. 
The speaker posits that an adequate AI ethics must take into account indigenous perspectives, vulnerable populations, and the Global South.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>\u25c7Moderator<br \/>\n<\/strong>Ama\u00ebl COGNACQ (Research Assistant of the B\u2019AI Global Forum)<strong><br \/>\n<\/strong><\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>\u25c7Organizer<br \/>\n<\/strong>B\u2019AI Global Forum, Institute for AI and Beyond at the University of Tokyo<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>\u25c7Inquiry<br \/>\n<\/strong><\/span><span style=\"font-weight: 400;\">B&#8217;AI Global Forum Office<br \/>\nEmail: bai.global.forum@gmail.com<br \/>\n<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u25c7BAIRAL\uff08B\u2019AI RA League\uff09 BAIRAL is a study group by<\/p>\n<div class=\"continue-reading-wrapper\"><a href=\"https:\/\/baiforum.jp\/en\/events\/en094\/\" class=\"continue-reading\">Continue Reading<i 
class=\"ion-ios-arrow-right\"><\/i><\/a><\/div>\n","protected":false},"author":7,"featured_media":0,"parent":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[2],"tags":[7,33,62],"class_list":["post-3739","post","type-post","status-publish","format-standard","hentry","category-events","tag-ai","tag-ai-ethics","tag-bairal"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/3739","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/comments?post=3739"}],"version-history":[{"count":7,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/3739\/revisions"}],"predecessor-version":[{"id":3751,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/3739\/revisions\/3751"}],"wp:attachment":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/media?parent=3739"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/categories?post=3739"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/tags?post=3739"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}