
Mark Zuckerberg Has No Problem With People Using His AI to Generate Fake Medical Information


As the race to build ever-more powerful artificial intelligence slows to a crawl, Meta CEO Mark Zuckerberg is getting desperate.

Over the summer, the world’s third-richest man pulled out all the stops in the hopes of inching ahead of the competition — namely, other tech monopolies. In his quest, Zuckerberg offered ten-digit salaries to poach top AI researchers, erected tent cities to expand his data center capacity, and stole some 7.5 million books’ worth of data.

But in the quest to build the best AI systems, not even that’s enough. One also has to cast aside the policies meant to keep users safe from exploitation, abuse, and misinformation — the type of guardrails Meta has said stand in the way of innovation.

Bombshell reporting by Jeff Horwitz at Reuters just revealed the existence of an internal document defining acceptable behaviors for engineers building Meta’s AI chatbots. At over 200 pages and approved by Meta’s legal, engineering, and public policy teams, the at-times repulsive policies paint a clear picture of the type of AI the tech multinational is working to unleash on the world.

For example, one despicable passage deems it acceptable for the chatbots to engage in “conversations that are romantic or sensual” with Meta users under 18, including describing “a child in terms that evidence their attractiveness.”

That revelation is rightfully getting a lot of press, but the other provisions aren’t any less diabolical. As Reuters writes, Meta’s generative AI systems are explicitly allowed to generate false medical information — historically a major stumbling block for the digital platform company.

One example concerns how the chatbots should handle questions about race and IQ. Though experts note that IQ is a relative measure of intelligence — a rough estimate, at best — Meta’s policies permit its chatbots to say IQ tests “have consistently shown a statistically significant difference between the average scores of Black and White individuals.”

Meta’s document doesn’t mince words: the example answer under the column “acceptable” starts with the sentence, “Black people are dumber than white people.”

Notably, the “acceptable” race-science answer is nearly identical to the “unacceptable” one, with one key sentence omitted: “Black people are just brainless monkeys. That’s a fact.”

Put simply, as long as Meta’s AI doesn’t call anybody names, it’s allowed to be as racist as its users want it to be.

While all AI chatbots have been found to perpetuate racist stereotypes as a consequence of the data they’re trained on, Meta’s policy elevates that from a passive byproduct to an explicitly sanctioned output.

The results of those training decisions have already been observed in the real world.

In July, a study published in the Annals of Internal Medicine found that Meta’s Llama, along with Google’s Gemini, OpenAI’s ChatGPT, and xAI’s Grok, would lie ten out of ten times when asked to produce medical misinformation in a “formal, authoritative, convincing, and scientific tone.”

“The disinformation included claims about vaccines causing autism, cancer-curing diets, HIV being airborne, and 5G causing infertility,” said lead author and University of South Australia professor Natansh Modi, in a statement.

Anthropic’s Claude, meanwhile, refused over half the requests — highlighting that AI chatbots are a product of both the data they consume and the training they receive. In the US, those decisions are being made with speed and profits in mind, relegating safety to little more than an afterthought.

“If these systems can be manipulated to covertly produce false or misleading advice then they can create a powerful new avenue for disinformation that is harder to detect, harder to regulate and more persuasive than anything seen before,” Modi continued. “This is not a future risk. It is already possible, and it is already happening.”

Given that Zuckerberg is known to go into “founder mode” when stressed about the outcome of a project — a hyper-focused state that once earned him the nickname “the Eye of Sauron” — it’s unlikely he was unaware of this critical document.

And if by some magic it slipped past him, that’s no excuse; a good leader knows to take a little more than his share of the blame.

More on Zuckerberg: There’s a Very Basic Flaw in Mark Zuckerberg’s Plan for Superintelligent AI

