Report on the lecture by Phebe Vayanos “Towards Robust, Interpretable, and Fair Social and Public Health Interventions”

Jenna Stallard (1st year, HS III (PEAK), University of Tokyo); Xiao Zhang (1st year, NS II (PEAK), University of Tokyo)

・Date and Venue: August 27, 2021, 18:00-19:30 (JST) @Zoom
・Language: English

On the 27th of August, Dr. Phebe Vayanos joined the B’AI Forum to give a talk and Q&A session titled “Towards Robust, Interpretable, and Fair Social and Public Health Interventions”, the second in a three-part series on AI and social justice. The session started with an introduction of Dr. Vayanos by Dr. Yuko Itatsu, Associate Director of the B’AI Forum and organizer of the Forum’s AI and Social Justice summer program, who also moderated the event. This was followed by opening remarks from Dr. Kaori Hayashi, Director of the B’AI Forum and Executive Vice President of the University of Tokyo, who spoke about the Forum’s vision.

Dr. Vayanos began the talk by outlining the issue of homelessness in Los Angeles, where an average of 66,436 people sleep on the street each night. She then introduced the racial dimension of the issue: while only 9% of people in LA are Black, they make up 40% of the city’s homeless population. To decide how to prioritise housing resources for vulnerable individuals, a tool called the VI-SPDAT (Vulnerability Index – Service Prioritization Decision Assistance Tool) has been used to assign each person a score; depending on how high the score is, they are recommended for permanent supportive housing, rapid (temporary) rehousing, or assistance services only. However, this system is not tied to outcomes: even though some groups of people may have a higher chance of returning to homelessness, this information is not utilised in the resource allocation process, and we do not know whether those who score higher (indicating greater vulnerability) actually benefit most from permanent supportive housing. Additionally, the system involves queuing, so even those with very high VI-SPDAT scores may wait a substantial time before receiving any housing resources, and this waiting time on the street raises the chances of exposure to violence or drug use.
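
To make the triage logic concrete, here is a minimal sketch in Python of the kind of score-threshold rule the VI-SPDAT implements. The cut-off values are illustrative assumptions on our part, not the official ones:

```python
def recommend_intervention(vi_spdat_score: int) -> str:
    """Map a VI-SPDAT vulnerability score to a recommended intervention.

    The thresholds below are illustrative placeholders, not the official
    cut-offs. Note that the rule depends only on the score itself, not on
    any prediction of how the person would fare under each intervention.
    """
    if vi_spdat_score >= 8:  # highest vulnerability
        return "permanent supportive housing"
    if vi_spdat_score >= 4:  # moderate vulnerability
        return "rapid (temporary) rehousing"
    return "assistance services only"
```

Written this way, the critique raised in the lecture is easy to see: the rule never consults outcome data.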

To provide a better solution to this allocation problem, Dr. Vayanos and her team created an algorithm that can weigh a number of factors. Policymakers choose which factors the algorithm considers when allocating resources, tailoring it to be fair in the context in which it is used. The algorithm also allows policymakers to understand how it arrived at a given allocation decision, a property termed ‘interpretability’. The result is an efficient algorithm that prioritises outcomes when allocating resources and reduces waiting times compared with the current model.
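
As a minimal sketch of this outcome-driven idea, the toy example below treats allocation as an assignment problem that maximises predicted housing-stability outcomes rather than ranking people by a single vulnerability score. This is not Dr. Vayanos’s actual formulation, and the probabilities are invented for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical predicted probabilities that individual i remains stably
# housed if assigned resource j. Columns: permanent supportive housing,
# rapid rehousing, assistance services only. In a real system these
# estimates would come from a model trained on historical outcome data.
predicted_success = np.array([
    [0.90, 0.60, 0.20],  # individual 0
    [0.70, 0.65, 0.40],  # individual 1
    [0.50, 0.55, 0.45],  # individual 2
])

# Pick the one-resource-per-individual assignment that maximises the
# total predicted probability of success.
rows, cols = linear_sum_assignment(predicted_success, maximize=True)
for i, j in zip(rows, cols):
    print(f"individual {i} -> resource {j} "
          f"(predicted success: {predicted_success[i, j]:.2f})")
```

Dr. Vayanos’s actual system goes further, encoding policymaker-chosen fairness requirements as constraints and restricting policies to interpretable forms; the sketch only captures the shift from score-based ranking to outcome-based optimisation.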

In tackling social issues with AI, we learned that computer scientists never work in isolation. Without experts from the humanities and social sciences, it would not be possible for AI researchers to fully understand data bias and its ethical implications. In fact, Dr. Vayanos mentioned that she had never worked on a program without consulting closely with domain experts. Such interdisciplinary collaboration needs to become an industry standard so that the fairness of the systems being designed can be ensured.

As society continues to change, our definition of fairness will also evolve, so no system can be guaranteed to remain fair indefinitely. When deploying AI, we should therefore be prepared for change and ready to make adjustments accordingly.

We also discussed the reliability of data: when the data contain inaccuracies, an AI may reach decisions that are not ‘fair’ in reality. Ideally, developers should be transparent about the possibility of their AI making mistakes. This is crucial to overcoming automation bias, the tendency to trust the word of the machine over our own reasoning because we believe the machine cannot be wrong.

We were impressed by the great potential of AI to address many of the challenges we are confronted with. For example, during the COVID-19 pandemic AI has been used to prioritise the allocation of scarce medical resources such as ventilators, hospital beds, and ICU capacity, and organ donors and recipients are matched for transplantation with the help of AI algorithms. As long as AI does not deprive us of the right to make our own decisions, more of its applications are sure to be explored in the future.