Virginia Tech study reveals geographic bias in ChatGPT's provision of local environmental justice information.
A newly published study from Virginia Tech highlights a significant geographic bias in the artificial intelligence (AI) tool ChatGPT, especially concerning environmental justice issues. The research emphasizes the tool’s challenges in providing specific information tailored to local environments, with a noticeable disparity between urban and rural areas.
Exploring the Disparity in Information Accessibility
ChatGPT, a widely used AI chatbot, shows a concerning gap in its ability to deliver detailed, local information on environmental justice. Researchers observed that densely populated states, such as Delaware and California, have better access to this localized data. In contrast, sparsely populated states, like Idaho and New Hampshire, face a notable deficiency, with over 90 percent of their populations unable to receive area-specific information.
The implications of these findings are critical, especially in the context of environmental justice, where local nuances play a vital role. The study points out the necessity for more in-depth research to understand and address these geographic biases. It underscores the importance of ensuring that AI tools like ChatGPT provide equitable and accurate information to all regions, regardless of their population density.
More on the Study
The study, published in “Telematics and Informatics,” involved Assistant Professors Junghwan Kim and Ismini Lourentzou from Virginia Tech, examining ChatGPT’s ability to provide localized information on environmental justice across 3,108 U.S. counties.
Kim, a geospatial data scientist, emphasized the need to understand AI limitations and biases, particularly the disparity between urban and rural information access. The study revealed that ChatGPT could provide specific information for only 515 of the 3,108 counties, highlighting a significant gap in the tool's ability to serve areas with diverse sociodemographic characteristics such as population density and income.
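To put those county counts in perspective, a minimal sketch of the arithmetic follows; the figures (515 counties with localized information out of 3,108 examined) come from the study as reported above, while the helper function name is illustrative and not part of the researchers' methodology.

```python
def coverage_rate(counties_with_info: int, total_counties: int) -> float:
    """Share of examined counties for which localized info was available."""
    return counties_with_info / total_counties

# Figures as reported in the Virginia Tech study.
rate = coverage_rate(515, 3108)
print(f"{rate:.1%}")  # prints 16.6%
```

In other words, ChatGPT returned area-specific environmental justice information for roughly one in six U.S. counties examined.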
Lourentzou, from the College of Engineering, identified three key areas for future research in large-language models: refining localized knowledge, safeguarding against ambiguous user instructions, and enhancing user awareness of AI strengths and weaknesses. Their work underlines the importance of addressing geographic biases in AI development, especially in sensitive topics like environmental justice.
This revelation about ChatGPT’s geographic biases is part of a larger conversation about potential prejudices inherent in AI technologies. Recently, there have been discussions around political biases in AI, with a separate study by UK and Brazilian researchers highlighting errors and biases in large language models like ChatGPT. These biases could potentially mislead readers, emphasizing the need for ongoing scrutiny and improvement of AI systems.