
Meta wants X-style community annotations to replace fact-checkers


Chris Vallance

Senior technology reporter

Meta owner Mark Zuckerberg (Getty Images)

As flames ripped through large parts of Los Angeles this month, so did fake news.

Social media posts promoted wild conspiracies about the fire, with users sharing hoax videos and misidentifying innocent people as looters.

It brought into sharp focus a question that has plagued the social media age: What’s the best way to contain and correct potentially igniting sparks of misinformation?

It’s a debate centered on Mark Zuckerberg, Meta’s chief executive.

Shortly after the January 6 riots at the Capitol in 2021, which were fueled by false claims of a rigged US presidential election, Zuckerberg testified to Congress. The billionaire boasted about Meta’s “industry-leading fact-checking program.”

It brought in, he noted, 80 “independent third-party fact-checkers” to curb misinformation on Facebook and Instagram.

Four years later, that system is no longer something to brag about.

“Fact checkers have simply been too politically biased and have destroyed more trust than they have created, especially in the US,” Mr. Zuckerberg said earlier in January.

Their replacement, he said, would be something completely different: a system inspired by X’s “community notes”, where users, not experts, judge accuracy.

Many pundits and fact-checkers questioned Zuckerberg’s motives.

“Mark Zuckerberg was clearly coddling the incoming administration and Elon Musk,” Alexios Mantzarlis, director of the Security, Trust and Safety Initiative at Cornell Tech, told the BBC.

Mr. Mantzarlis is also deeply critical of the decision to do away with fact-checkers.

But, like many experts, he makes another point that has perhaps been lost in the storm of criticism facing Meta: that, in principle, community-style annotation systems could be part of the solution to disinformation.

Bird watching

Adopting a fact-checking system inspired by an Elon Musk-owned platform was always going to raise concerns. The world’s richest man is regularly accused of using his X account to amplify misinformation and conspiracy theories.

But the system predates his ownership.

Birdwatch, as it was then known, was launched in 2021 and was inspired by Wikipedia, which is written and edited by volunteers.

Mark Zuckerberg announced the changes in an online video (Meta)

Like Wikipedia, community notes rely on unpaid contributors to correct misinformation.

Contributors rate corrective notes placed under false or misleading posts and, over time, some users earn the ability to write them. According to the platform, this group of contributors is now almost a million strong.

Mantzarlis — who himself once ran a “crowd-sourced” fact-checking project — argues that this kind of system potentially allows platforms to “get more fact-checking, more input, faster.”

One of the main attractions of community-style annotation systems is their ability to scale: as a platform’s user base grows, so does the pool of volunteer contributors (if you can convince them to participate).

According to X, community notes produce hundreds of fact checks per day.

By contrast, Facebook’s expert fact-checkers can manage fewer than 10 a day, suggests an article by Jonathan Stray of the UC Berkeley Center for Human-Compatible AI and journalist Eve Sneider.

And one study suggests community notes can provide good-quality fact-checking: an analysis of 205 notes about Covid found that 98% were accurate.

A note attached to a hoax post can also organically reduce its viral spread by more than half, says X, and research suggests they also increase the chance that the original poster will delete the tweet by 80%.

Keith Coleman, who oversees community notes for X, argues that Meta is moving to a more capable fact-checking program.

“Community notes already cover a much wider range of content than previous systems,” he told me.

“That’s rarely mentioned. I see stories that say, ‘Meta ends fact-checking program,’” he said.

“But I think the real story is, ‘Meta replaces the existing fact-checking program with an approach that can scale to cover more content, respond faster and be trusted across the political spectrum.’”

Checking the fact checkers

But of course, Zuckerberg didn’t just say that community notes were a better system — he actively criticized fact-checkers, accusing them of “bias.”

In doing so, he was echoing a long-held belief among American conservatives that Big Tech is censoring their views.

Others argue that fact-checking will inevitably censor controversial views.

Silkie Carlo, director of UK civil liberties group Big Brother Watch – which campaigned against YouTube’s alleged censorship of David Davis MP – told the BBC that claims of Big Tech bias have come from across the political spectrum.

Centralized fact-checking by platforms risks “stifling valid reporting on controversial content”, she told the BBC, and also leads users to mistakenly believe that all the posts they are reading are “verified truth”.

But Baybars Orsek, managing director of Logically Facts, which provides fact-checking services for Meta in the UK, argues that professional fact-checkers can target the most dangerous misinformation and identify emerging “harmful narratives”.

Community-driven systems alone lack the “consistency, objectivity and expertise” to address the most damaging misinformation, he wrote.

Professional fact-checkers, and many experts and researchers, strongly dispute claims of bias. Some argue that fact-checkers have simply lost the trust of many conservatives.

That trust, Mr. Mantzarlis claims, was deliberately undermined.

“Fact checkers started to become arbiters of truth in a fundamental way that upset politically motivated partisans and people in power, and suddenly the attacks on them were weaponised,” he said.

Trust in the algorithm

The solution X uses in an effort to keep community notes reliable across the political spectrum is to take a key part of the process out of human hands by relying on an algorithm.

Algorithms are used to select which notes are displayed, and to ensure that they are useful to a variety of users.

In very simple terms, according to X, this “bridging” algorithm selects proposed notes that are rated as useful by volunteers who would normally disagree with each other.

The result, X argues, is that the notes are viewed positively across the political spectrum. This is confirmed, according to X, by regular internal testing. Some independent research also supports this view.

Meta says its community notes rating system will require agreement between people with a range of viewpoints to help prevent biased assessments, “as they do on X”.

But that breadth of agreement is a high bar to clear.

Research shows that more than 90% of proposed community notes are never used.

This means that accurate notes may never be displayed.

But according to X, displaying more notes would undermine the goal of showing only those that most users find useful, and would reduce trust in the system.

‘More bad things’

Even with the fact-checkers gone, Meta will still employ thousands of moderators who remove millions of pieces of content every day, such as graphic violence and child sexual exploitation material, that violate the platform’s rules.

But Meta is relaxing its rules around some divisive political topics like gender and immigration.

Mark Zuckerberg acknowledged that the changes, designed to reduce the risk of censorship, meant the platform would “catch less bad stuff”.

This, some experts argue, was the most disturbing aspect of Meta’s announcement.

A co-chair of Meta’s Oversight Board told the BBC she had “big problems” with what Mr. Zuckerberg had announced.

So what happens from here?

Details of Meta’s new plans to combat disinformation are scarce. In principle, some experts believe that community annotation systems can be useful – but many also feel that they should not be a substitute for fact-checkers.

Community notes are a “fundamentally legitimate approach”, writes Professor Tom Stafford of the University of Sheffield, but platforms still need professional fact-checkers, he believes.

“Crowd-sourcing can be a useful component of [an] information moderation system, but it should not be the only component.”


