Singapore study finds gender, geographical, socio-economic biases in AI models

A Singapore study that examined AI models for linguistic and cultural sensitivities in nine Asian countries found bias stereotypes in their responses.

For instance, words such as "caregiving", "teacher" and "daycare" were frequently associated with women, while words such as "business" and "company" were commonly associated with men.

The study, co-authored by the Infocomm Media Development Authority (IMDA), evaluated four AI-powered large language models and uncovered these biases.

A total of 3,222 "exploits" (responses from the models that were assessed to be biased) were identified from 5,313 flagged submissions, according to a report on the study released on Tuesday (Feb 11).

The AI models were tested across five bias categories:

  • Gender
  • Geographical/national identity
  • Race/religion/ethnicity
  • Socio-economic
  • Open/unique category (for example: caste, physical appearance)

The research focused on bias stereotypes across different cultures, specifically testing the extent to which cultural biases manifested themselves in the AI models' responses, in both English and regional languages: Mandarin, Hindi, Bahasa Indonesia, Bahasa Melayu, Korean, Thai, Japanese, Vietnamese and Tamil.

Conducted in November and December 2024, the study brought together over 300 participants from Singapore, Malaysia, Indonesia, Thailand, Vietnam, China, India, Japan and South Korea for an in-person workshop in Singapore, as well as a virtual one.

Participants included 54 experts in fields such as linguistics, anthropology and social studies. They worked with the AI models and then identified and explained the biases they found.

The AI models tested were AI Singapore's Sea-Lion, Anthropic's Claude, Cohere's Aya and Meta's Llama.

OpenAI's ChatGPT and Google's Gemini were not part of the study.