The Download: OpenAI’s caste bias problem, and how AI videos are made


Caste bias is rampant in OpenAI’s products, including ChatGPT, according to an MIT Technology Review investigation. Though CEO Sam Altman boasted about India being OpenAI’s second-largest market during the launch of GPT-5 in August, we found that both this new model, which now powers ChatGPT, and Sora, OpenAI’s text-to-video generator, exhibit caste bias. This risks entrenching discriminatory views in ways that are currently going unaddressed.

Mitigating caste bias in AI models is more pressing than ever. In contemporary India, many caste-oppressed Dalit people have escaped poverty and have become doctors, civil service officers, and scholars; some have even risen to become the president of India. But AI models continue to reproduce socioeconomic and occupational stereotypes that render Dalits as dirty, poor, and performing only menial jobs. Read the full story.

—Nilesh Christopher

MIT Technology Review Narrated: how do AI models generate videos?

It’s been a big year for video generation. The downside is that creators are competing with AI slop, and social media feeds are filling up with faked news footage. Video generation also uses up a huge amount of energy, many times more than text or image generation.

With AI-generated videos everywhere, let’s take a moment to talk about the tech that makes them work.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads
