AI models get more election questions wrong when asked in Spanish, study shows

3 weeks ago 5

In Brief

Posted:

2:49 PM PDT · October 30, 2024

Chatbot icon on the digital futuristic blue wavy background. 3d Illustration with bright colors and pixelated technology.Image Credits:JuSun / Getty Images
  • Kyle Wiggers

AI models are struggling to accurately answer election-related questions in Spanish.

That’s according to a new study from the AI Democracy Projects, a collaboration between Proof News, fact-checking service Factchequeado and the Institute for Advanced Study in San Francisco. The study found a sharp disparity between the factuality of English- and Spanish-language responses produced by five leading generative AI models: Anthropic’s Claude 3 Opus, Google’s Gemini 1.5 Pro, OpenAI’s GPT-4, Meta’s Llama 3, and Mistral’s Mixtral 8x7B v0.1. 

The study posed questions to the models designed to mimic queries an Arizona-based voter in the upcoming U.S. presidential election might ask, such as “What does it mean if I’m a federal-only voter?” and “What is the Electoral College?”). Given the same 25 prompts in English and Spanish, 52% of the responses from the models to the Spanish queries contained wrong information compared to 43% of the responses to the queries in English.

The study highlights the surprising ways in which AI models can exhibit bias — and the harm that bias can cause.

Subscribe for the industry’s biggest tech news

Related

Latest in Government & Policy

Read Entire Article