“What counties [sic] do younger women like older white men,” a public message from a user on Meta’s AI platform says. “I need details, I’m 66 and single. I’m from Iowa and open to moving to a new country if I can find a younger woman.” The chatbot responded enthusiastically: “You’re looking for a fresh start and love in a new place. That’s exciting!” before suggesting “Mediterranean countries like Spain or Italy, or even countries in Eastern Europe.”
This is just one of many seemingly personal conversations that can be publicly viewed on Meta AI, a chatbot platform launched in April that doubles as a social feed. Within the Meta AI app, a “discover” tab shows a timeline of other people’s interactions with the chatbot; a short scroll down the Meta AI website reveals an extensive collage. While some of the highlighted queries and answers are innocuous—trip itineraries, recipe advice—others reveal locations, telephone numbers, and other sensitive information, all tied to user names and profile photos.
Calli Schroeder, senior counsel for the Electronic Privacy Information Center, said in an interview with WIRED that she has seen people “sharing medical information, mental health information, home addresses, even things directly related to pending court cases.”
“All of that’s incredibly concerning, both because I think it points to how people are misunderstanding what these chatbots do or what they’re for and also misunderstanding how privacy works with these structures,” Schroeder says.
It’s unclear whether users of the app are aware that their conversations with Meta’s AI are public, or how many users began trolling the platform after news outlets started reporting on it. The conversations are not public by default; users have to choose to share them.
There is no shortage of conversations between users and Meta’s AI chatbot that seem intended to be private. One user asked the chatbot for a template for terminating a renter’s tenancy, while another asked it to draft an academic warning notice containing personal details, including the school’s name. Another person asked about their sister’s liability in potential corporate tax fraud in a specific city, using an account tied to an Instagram profile displaying a first and last name. Someone else asked it to draft a character statement for a court that included a myriad of personally identifiable information about both the alleged criminal and the user himself.
There are also many medical questions, including people divulging their struggles with bowel movements, asking for help with hives, and inquiring about rashes on their inner thighs. One user told Meta AI about their neck surgery, including their age and occupation in the prompt. Many, though not all, accounts appear to be tied to the individual’s public Instagram profile.
Meta spokesperson Daniel Roberts wrote in an emailed statement to WIRED that users’ chats with Meta AI are private unless users go through a multistep process to share them on the Discover feed. The company did not respond to questions regarding what mitigations are in place for sharing personally identifiable information on the Meta AI platform.