A message appears online during heavy flooding: “This rain no be small o, everywhere don red.” Someone unfamiliar with the phrasing might hesitate. But for people in Nigeria, this message is immediate and clear: the flooding is severe and worsening.
Moments like this happen all the time on digital platforms. People don’t write in perfect, standard English sentences. They share warnings and reactions on social media in the language of everyday life, sometimes mixing English with local expressions, slang and expressive language shaped by their communities.
Artificial intelligence systems can help make sense of these conversations at scale. Governments and organisations are increasingly using AI to scan social media, summarise public conversations, and monitor emerging risks.
But many of these tools struggle to make sense of the way people actually communicate. Local expressions and slang can confuse AI, so important messages are misread or missed altogether.
When people talk about language barriers, they often mean translation between different languages. But the problem is more subtle. Around the world, people mix languages and local expressions online, a phenomenon that linguists call “code-switching”.
Climate journalism has increasingly moved online, but there are fewer climate reporters in the developing world. This limits the depth and availability of information for a huge proportion of the global population, and shapes how climate issues are discussed and understood across different regions.
For instance, a UK social media post might raise an environmental concern using expressions like: “Are roads flooding already? Chuffed to know the council taking the p*ss.” Most AI tools can pick up the sarcasm and frustration aimed at local authorities.
In a country such as Nigeria, people may describe unfolding concerns differently: “Abeg is it October wey rain dey fall like this, but you say the climate no change?” or “River don near our house o! Abeg help, e fit spoil everything!”
Here, slang and Pidgin expressions convey immediate danger and an urgent call for help. Yet AI models often misread such messages, entirely missing the urgency and emotion being conveyed.
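To see why literal pattern matching fails here, consider a deliberately simplified sketch: a hypothetical urgency detector that only knows standard-English emergency keywords (the keyword list and function below are illustrative inventions, not any real moderation system, and the Pidgin post is lightly paraphrased from the example above).

```python
# Toy illustration: a keyword-based "urgency" detector whose vocabulary
# comes only from standard English. Real systems are statistical, but the
# underlying gap is the same: phrasing absent from the training data is missed.
URGENT_KEYWORDS = {"flood", "flooding", "emergency", "evacuate", "danger", "help"}

def flags_as_urgent(post: str) -> bool:
    # Normalise: lowercase each word and strip surrounding punctuation.
    words = {w.strip(".,!?*'\"").lower() for w in post.split()}
    return bool(words & URGENT_KEYWORDS)

standard = "Emergency! The river is flooding and we need help now."
pidgin = "River don near our house o! Abeg, e fit spoil everything!"

print(flags_as_urgent(standard))  # True: "emergency", "flooding", "help" all match
print(flags_as_urgent(pidgin))    # False: the same distress, but no keyword matches
```

Both posts describe the same unfolding flood, yet only the standard-English one trips the detector; the Pidgin message, equally urgent, passes through unflagged.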
This matters because most AI systems are trained on data drawn mainly from North America and Europe. ChatGPT, for example, is trained on huge amounts of internet text. It doesn’t have beliefs, feelings or awareness. Instead, it generates responses based on patterns it has seen online.
AI reflects the dominant culture in its training data, so it carries a cultural bias. It imitates the normal ways of expressing ideas in the societies that produced the texts it has learned from. AI models trained on predominantly English-language texts show a hidden bias that favours western cultural values, particularly when prompted in English.
One major reason AI can produce biased outcomes is that it reflects societal inequalities, including differences in race, gender and region, that show up in the data it learns from. So underrepresented voices from communities in developing countries are often overlooked.
This bias has real consequences. In climate crises like floods, heatwaves or other extreme weather, misinterpreted messages could put property and lives at risk.
