Meta announced in January it would end some content moderation efforts, loosen its rules, and put more emphasis on supporting “free expression.” The shifts resulted in fewer posts being removed from Facebook and Instagram, the company disclosed Thursday in its quarterly Community Standards Enforcement Report. Meta said that its new policies had helped reduce erroneous content removals in the US by half without broadly exposing users to more offensive content than before the changes.
The new report, which was referenced in an update to a January blog post by Meta global affairs chief Joel Kaplan, shows that Meta removed nearly one-third less content globally on Facebook and Instagram for violating its rules from January to March of this year than it did in the previous quarter—about 1.6 billion items, down from just under 2.4 billion, according to an analysis by WIRED. Over the past several quarters, the tech giant’s total quarterly removals had risen or stayed flat.
Across Instagram and Facebook, Meta reported removing about 50 percent fewer posts for violating its spam rules, nearly 36 percent fewer for child endangerment, and almost 29 percent fewer for hateful conduct. Removals increased in only one major rules category—suicide and self-harm content—out of the 11 that Meta lists.
The amount of content Meta removes fluctuates regularly from quarter to quarter, and a number of factors could have contributed to the dip in takedowns. But the company itself acknowledged that “changes made to reduce enforcement mistakes” were one reason for the large drop.
“Across a range of policy areas we saw a decrease in the amount of content actioned and a decrease in the percent of content we took action on before a user reported it,” the company wrote. “This was in part because of the changes we made to ensure we are making fewer mistakes. We also saw a corresponding decrease in the amount of content appealed and eventually restored.”
At the start of the year, Meta relaxed some content rules that CEO Mark Zuckerberg described as “just out of touch with mainstream discourse.” The changes allow Instagram and Facebook users to employ some language that human rights activists view as hateful toward immigrants or people who identify as transgender. For example, Meta now permits “allegations of mental illness or abnormality when based on gender or sexual orientation.”
As part of the sweeping changes, which were announced just as Donald Trump was set to begin his second term as US president, Meta also scaled back its reliance on automated tools to identify and remove posts suspected of less severe rule violations, saying the tools had high error rates that prompted frustration among users.
During the first quarter of this year, Meta’s automated systems accounted for 97.4 percent of content removed from Instagram under the company’s hate speech policies, down by just 1 percentage point from the end of last year. (User reports to Meta triggered the remaining percentage.) But automated removals for bullying and harassment on Facebook dropped nearly 12 percentage points. In some categories, such as nudity, Meta’s systems were slightly more proactive compared to the previous quarter.