
The Download: Combating audio deepfakes, and AI in the classroom


The news: A new technique known as “machine unlearning” could be used to teach AI models to forget specific voices.

How it works: Currently, companies tend to guard against unwanted voice cloning by checking whether the prompts or the AI’s responses contain disallowed material. Machine unlearning instead asks whether an AI can be made to forget a piece of information that the company doesn’t want it to know. It works by taking a model and the specific data to be redacted, then using them to create a new model—essentially, a version of the original that never learned that piece of data.
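
The story doesn’t spell out the exact algorithm, but a common recipe in the machine-unlearning literature pairs gradient ascent on the data to be forgotten with ordinary training on data the model should keep. The Python sketch below is purely illustrative of that idea; the model and the forget_loader/retain_loader data loaders are hypothetical stand-ins, not details from the reported technique.

    import torch
    import torch.nn.functional as F

    def unlearn(model, forget_loader, retain_loader, optimizer, epochs=1):
        """Illustrative unlearning loop: push the model away from the forget
        set (gradient ascent) while reinforcing the retain set (descent)."""
        model.train()
        for _ in range(epochs):
            for (x_f, y_f), (x_r, y_r) in zip(forget_loader, retain_loader):
                optimizer.zero_grad()
                # Maximize loss on the data to be redacted...
                forget_loss = -F.cross_entropy(model(x_f), y_f)
                # ...while keeping loss low on everything else.
                retain_loss = F.cross_entropy(model(x_r), y_r)
                (forget_loss + retain_loss).backward()
                optimizer.step()
        return model

In the speech setting described here, the “forget” examples would be recordings of the voices the model should no longer be able to reproduce, while the retain set preserves its general text-to-speech ability.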

Why it matters: This could be an important step in stopping the rise of audio deepfakes, where someone’s voice is copied to carry out fraud or scams. Read the full story.

—Peter Hall

AI’s giants want to take over the classroom

School’s out and it’s high summer, but a bunch of teachers are plotting how they’re going to use AI this upcoming school year. God help them.

On July 8, OpenAI, Microsoft, and Anthropic announced a $23 million partnership with one of the largest teachers’ unions in the United States to bring more AI into K–12 classrooms. They will train teachers at a New York City headquarters on how to use AI both for teaching and for tasks like planning lessons and writing reports, starting this fall.

But these companies could face an uphill battle. There’s a lack of clear evidence that AI can be a net benefit for students, and it’s hard to trust that the AI companies funding this initiative will give honest advice on when not to use AI in the classroom. Read the full story.

