REPORTS

Report on the 21st B’AI Book Club
Journalism in Collaboration with ChatGPT, with a focus on the Global South

Priya MU (Master's Program, ITASIA Course, Graduate School of Interdisciplinary Information Studies, The University of Tokyo)


・Date: Tuesday, July 25, 2023, 1:00-2:30 pm (JST)
・Venue: On-site (B’AI Office) & Zoom Meeting
・Language: English
・Reviewer: Priya MU (Master's Program, ITASIA Course, Graduate School of Interdisciplinary Information Studies, The University of Tokyo)
・Readings:
① Fluid AI Artificial Intelligence, Abhinav Aggarwal, and Raghav Aggarwal (2022) Bridging the AI Gap: Why Some Leaders Create Immense Value Through AI While Others Don't. Independently published.
② John V. Pavlik (March 2023) Collaborating with ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education, Journalism & Mass Communication Educator, 78(1), 84-93.
③ Gregory Gondwe (June 2023) CHATGPT and the Global South: How are journalists in sub-Saharan Africa engaging with generative AI?, Online Media and Global Communication, 2(2), 228-249.

On July 25, 2023, the 21st meeting of the B’AI Book Club, a book review session organized by project members of the B’AI Global Forum, took place. During this meeting, I reviewed the three readings listed above and led a discussion on authorship in collaboration with ChatGPT, particularly in the field of journalism, with a specific focus on the Global South.

Fluid AI is a Mumbai-based company specializing in AI and computer vision technologies. The book ‘Bridging the AI Gap: Why Some Leaders Create Immense Value Through AI While Others Don’t’ was written by the company’s in-house AI algorithm, which advises businesses on optimization and data-driven decision-making. However, the book lacks any discussion of ethical concerns such as data bias and transparency, and this omission became the pivotal point of our discussion. Developmental disparities, resource constraints, historical factors, and social values may lead different regions to prioritize ethical concerns differently, and which issues get prioritized is significantly shaped by local needs and values.

Dr. John V. Pavlik is a professor of journalism and media studies at Rutgers, The State University of New Jersey, and has written extensively on the impact of new technology on journalism, media, and society. In his 2023 paper, co-authored with OpenAI’s ChatGPT, Pavlik discusses the effects of generative AI on journalism and media education. The paper highlights the use of generative AI by the Associated Press (AP), one of the first news organizations to employ AI for news gathering, production, and distribution. While acknowledging the assistance AI could provide to human journalists, Pavlik also points out its limitations in the range and depth of its knowledge.

Dr. Gregory Gondwe, Assistant Professor of Journalism at California State University and a visiting scholar at the Institute for Rebooting Social Media (RSM) at Harvard, has conducted extensive research on emerging media trends in Africa and the impact of new media technologies on African journalism. Gondwe’s 2023 study explores the use of generative AI by journalists in sub-Saharan Africa, focusing on issues such as misinformation, plagiarism, stereotypes, and the unrepresentative nature of online databases. The study situates this investigation within larger discussions concerning the effective utilization of AI tools in the Global South. It also raises questions about plagiarism and the models’ ability to distinguish accurate from inaccurate information. Because the underlying databases have yet to adequately represent the socio-cultural environments of the Global South, the probability of generating false and biased content is high. Gondwe remains optimistic, however, suggesting that this lack of representation could foster a cautious reliance on AI models and, in turn, more effective journalism practices.

After the papers were reviewed, several important questions were raised during the discussion session. While discussions concerning the Global South often center on labor exploitation and the extraction of natural resources, examining the impacts of generative AI on journalistic practices sheds light on the inherent bias of training databases toward Western values. Researchers and developers must closely investigate how people engage with generative AI tools to create content, and we need to understand the intersection between technology and humans in specific contexts, since software developed in the Western world may inadvertently impose Western ideologies on these societies. While generative AI and pre-trained models can enhance content creation efficiency, they may also amplify stereotypes and biases if the underlying databases do not address these issues. We concluded that, while AI can be valuable for computational tasks such as data analysis, content creation, particularly in journalism, should ultimately remain in human hands if it is to provide true value.