{"id":3363,"date":"2025-01-28T14:46:05","date_gmt":"2025-01-28T05:46:05","guid":{"rendered":"https:\/\/baiforum.jp\/en\/?p=3363"},"modified":"2025-02-14T17:34:27","modified_gmt":"2025-02-14T08:34:27","slug":"en088","status":"publish","type":"post","link":"https:\/\/baiforum.jp\/en\/events\/en088\/","title":{"rendered":"The 4th BAIRAL Research Meeting for Fiscal Year 2024 <br> \u201cAdvancing Mental Health Monitoring: A Deep Learning Framework for Multimodal Emotion Recognition\u201d"},"content":{"rendered":"<p><strong>\u25c7BAIRAL\uff08B\u2019AI RA League\uff09<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">BAIRAL is a study group by young research assistants (RA) of the B\u2019AI Global Forum of the Institute for AI and Beyond at the University of Tokyo. Aiming to achieve gender equality and a guarantee of rights for minorities in the AI era, this study group examines relationships between digital information technology and society. BAIRAL organizes research meetings every other month with guest speakers in a variety of fields.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><strong>\u25c7Date &amp; Venue<br \/>\n<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">\u30fb<\/span><span style=\"font-weight: 400;\">Date: Thursday, Feb 20, 2025, <\/span><span style=\"font-weight: 400;\">17:00 to 18:30<\/span><span style=\"font-weight: 400;\"> (JST)<br \/>\n<\/span><span style=\"font-weight: 400;\">\u30fb<\/span><span style=\"font-weight: 400;\">Language: English<br \/>\n<\/span><span style=\"font-weight: 400;\">\u30fbFormat: Online &#8211; Zoom meeting (No registration required)<br \/>\n<\/span><a style=\"font-size: 1rem;\" href=\"https:\/\/u-tokyo-ac-jp.zoom.us\/j\/81781587440?pwd=VG49FYhndanKOXPIxNkEUmelAz5qAp.1\" target=\"_blank\" rel=\"noopener\">https:\/\/u-tokyo-ac-jp.zoom.us\/j\/81781587440?pwd=VG49FYhndanKOXPIxNkEUmelAz5qAp.1<\/a><\/p>\n<p><span style=\"font-weight: 400;\">Meeting ID: 817 8158 7440\/Passcode: 250220<\/span><\/p>\n<p><strong>\u25c7Guest Speaker<\/strong><\/p>\n<p><span style=\"font-weight: 400;\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-3368\" src=\"https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2025\/01\/Meishu-Song-221x300.jpg\" alt=\"\" width=\"221\" height=\"300\" srcset=\"https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2025\/01\/Meishu-Song-221x300.jpg 221w, https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2025\/01\/Meishu-Song.jpg 508w\" sizes=\"auto, (max-width: 221px) 100vw, 221px\" \/><\/span><\/p>\n<p><span style=\"font-weight: 400;\">Dr. Meishu Song (Graduate School of Education, University of Tokyo)<\/span><\/p>\n<p><strong>\u25c7Speaker Bio\u00a0<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">Meishu Song is a Research Scientist specializing in multimodal AI systems and mental health informatics. Her research lies at the intersection of deep learning, multimodal understanding, and healthcare applications, focusing on developing innovative solutions for mental health monitoring. She pioneered the development of personalized macro-micro frameworks for emotion recognition, achieving significant improvements in real-world applications.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Song&#8217;s work has made substantial contributions to multitask learning and multimodal fusion techniques, notably by developing the Dynamic Restrained Uncertainty Weighting methodology. Her research has been published in prestigious venues including ICASSP and JMIR Mental Health. 
She has successfully translated her academic research into practical applications, leading to the development of a mental healthcare AI product.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As a Co-founder and Research Scientist at SemoAI, she leads efforts to bridge the gap between advanced AI technologies and accessible mental healthcare solutions.<\/span><\/p>\n<p><strong>\u25c7Abstract<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">The lecture will present an innovative approach to daily mental health monitoring through multimodal deep learning analysis of speech and physiological signals. The speaker will introduce two comprehensive datasets: the Japanese Daily Speech Dataset (JDSD), which comprises 20,827 speech samples from 342 participants, and the Japanese Daily Multimodal Dataset (JDMD), which contains 6,200 records of Zero Crossing Mode (ZCM) and Proportional Integration Mode (PIM) signals from 298 participants. Both datasets were collected in naturalistic settings using non-intrusive wearable devices and smartphones.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The core innovation to be discussed is a macro-micro framework that synthesizes global emotional patterns with individual-specific characteristics through a personalized crossmodal transformer mechanism. The architecture also incorporates a novel Dynamic Restrained Uncertainty Weighting technique for multimodal fusion and loss balancing. The framework achieves a substantial improvement in emotion recognition accuracy, reaching a Concordance Correlation Coefficient (CCC) of 0.503 and significantly outperforming the baseline of 0.281.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By leveraging advanced deep learning techniques and multimodal data integration, the proposed system aims to provide continuous, personalized emotional state assessment while maintaining ecological validity. 
This work addresses critical challenges in mental health monitoring by offering a scalable, data-driven approach that bridges the gap between laboratory-based assessment and real-world applications, potentially transforming how mental healthcare is delivered.<\/span><\/p>\n<p><strong>\u25c7Organizer<br \/>\n<\/strong>B\u2019AI Global Forum, Institute for AI and Beyond at the University of Tokyo<\/p>\n<p><strong>\u25c7Inquiry<br \/>\n<\/strong>Priya Mu (Research assistant of the B\u2019AI Global Forum)<br \/>\npriya-mu[at]g.ecc.u-tokyo.ac.jp (Please change [at] to @)<\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u25c7BAIRAL\uff08B\u2019AI RA League\uff09 BAIRAL is a study group by<\/p>\n<div class=\"continue-reading-wrapper\"><a href=\"https:\/\/baiforum.jp\/en\/events\/en088\/\" class=\"continue-reading\">Continue Reading<i class=\"ion-ios-arrow-right\"><\/i><\/a><\/div>\n","protected":false},"author":7,"featured_media":0,"parent":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[2],"tags":[62,341,340,299,339],"class_list":["post-3363","post","type-post","status-publish","format-standard","hentry","category-events","tag-bairal","tag-emotion-recognition","tag-healthcare-apps","tag-mental-health","tag-multimodal-framework"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/3363","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/comments?post=3363"}],"version-history":[{"count":7,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/3363\/revisions"}],"predecessor-version":[{"id":3386,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/3363\/revisions\/3386"}],"wp:attachment":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/media?parent=3363"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/categories?post=3363"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/tags?post=3363"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}