Jan 14, 2025
The end of fact-checking at Meta is raising fresh concerns that its platforms will become a hotbed of disinformation as the network hands over the policing of content to users.

The move, coupled with the loosening of some hate speech rules, was slammed by some tech policy experts as a 180-degree shift for Meta that could undo nearly a decade of efforts to prevent disinformation from spreading on its platforms: Facebook, Instagram and Threads.

While Meta CEO Mark Zuckerberg framed the decision as a return to the company's "roots" and an embrace of free speech, some observers worry today's political and digital climate leaves too much room for false information to spread online.

"You get rid of the fact-checkers and people that are sort of policing the content ... that [could] be a turbo-charged engine for disinformation," Ari Lightman, a digital media professor at Carnegie Mellon University, told The Hill.

"Disinformation is very sensational in terms of its orientation. It's designed to draw people in, it's designed to be confrontational," he said.

The fact-checking program was created in 2016 amid mounting scrutiny after it was revealed Russia had attempted to use Facebook to influence that year's U.S. election. In the years that followed, the platform repeatedly expanded the program, which grew to include more than 80 independent fact-checkers.

The social media giant also launched various other disinformation initiatives, ranging from the use of artificial intelligence (AI) to spot COVID-19 and vaccine disinformation to suspending the account of President-elect Trump over his remarks around the Jan. 6 Capitol insurrection. The fact-checkers, however, did not monitor or judge the speech of elected officials on its platforms.

Now, Zuckerberg appears to be changing his tune on Trump, a longtime critic of social media companies for what he sees as censorship of his views.
"Four years ago this week, Facebook banned Donald Trump for inciting a violent insurrection that resulted in the deaths of 5 people and disrupted our democracy," Nicole Gill, the co-founder and executive director of tech advocacy group Accountable Tech, said of the announcement last week.

"Now, Zuckerberg is reopening the floodgates to the exact same surge of hate, disinformation, and conspiracy theories that caused January 6th — and that continue to spur real-world violence," she added.

The fact-checking system will be replaced by user-generated "community notes," reminiscent of the feature used on Elon Musk's X platform. Under this approach, Meta's platforms will rely on users to submit notes or corrections to posts that are potentially false, misleading or in need of more context.

Joel Kaplan, Meta's new global policy chief, said the community notes system "work[s] on X" and gives users the power to decide what context other users should see.

Musk, for his part, has faced criticism for X's reliance on community notes, with some tech advocacy groups accusing the platform of becoming a hub for disinformation and of bias toward certain views. Some of those same groups are concerned Meta will face the same fate.

While acknowledging crowdsourced fact-checking can be a good tool, Alex Mahadevan, the director of Poynter's digital media program, MediaWise, cautioned it works only as part of a larger, robust trust and safety program, which often still includes third-party fact-checkers.

"The thing that troubles me the most is that Meta seems to be looking to X's community notes, which, in my opinion, because it is essentially the stand-alone trust and safety measure checking this information on X, is a complete failure," he told The Hill.
"It does not take a rocket scientist to tell you that anyone's X feed right now is full of a lot more misinformation than it was four years ago."

Mahadevan, who has analyzed X's community notes for three years, found that many of the proposed and public notes contain false information themselves, and he emphasized the feature is still in an experimental phase.

"A crowdsourced fact-checking solution is only as effective as the platform, owners and developers behind it," he said in a Poynter opinion piece. "And it appears Meta is more interested in 'more speech' than it is in tackling misinformation."

These concerns were further fueled by Meta's changes to its policies on divisive issues like immigration and gender, along with its loosening of hate speech and abuse rules, including the removal of LGBTQ protections from its community standards.

"We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality," Meta added to its community standards.

Suppression of these conversations had made the rules "too restrictive and too prone to over-enforcement," Zuckerberg said, adding that users want to see more political content, contrary to the company's previous findings.

Gill pushed back on this argument, telling The Hill there is "nothing more inherently more free about having more speech."

"That doesn't make it any more free. What it's doing is offering people the opportunity to seek out information that supports their existing point of view and validates it, whether it's based on truth or not," she said.

In turn, some observers are concerned certain groups or people could be discouraged from using or posting on the platform.
"Research also shows that having productive political conversations requires people to feel like they are in a space where they are able to share their views," Sarah Shugars, an assistant professor of communication at Rutgers University, said. "So, if people are questioning somebody's right to exist, that does not create a space where people are able to speak freely."

The personalized approach toward political content signals another drastic tone shift for Meta, which had actively tried to distance its platforms from political content, especially following the Capitol insurrection.

Less than 12 months ago, Instagram and Threads announced the platforms would stop recommending political content unless users manually changed their settings. A few months before that, Instagram head Adam Mosseri said Threads, a conversation-based app, would not "do anything to encourage" politics and "hard news" on the platform.

Yet last week, Mosseri announced Threads will now add political content to its recommendations page as part of the broader changes. He acknowledged the move goes against his previous comments but said it is clear people "want this content" and that it proved "impractical to draw a red line around what is and is not political content."

The dramatic week for Meta was capped off with an internal move to slash the company's diversity, equity and inclusion (DEI) team and roll back several related programs.

At the helm of these decisions is Zuckerberg, whose personal shift appears to be aligning with his leadership of the company. Zuckerberg, like his company, mostly stayed out of politics in recent years, even as Trump hammered him and Meta for banning him after the Capitol insurrection. He declined to endorse a candidate in the 2024 presidential race, though he applauded Trump's reaction to the assassination attempt on his life last summer in Pennsylvania.
While out of the political spotlight, Zuckerberg underwent a physical transformation, trading his typical jeans-and-hoodie outfit for designer shirts, gold chains and jiu-jitsu training. The transformation also brought a newfound confidence, with the tech billionaire becoming more willing to speak publicly on topics related to government and politics.

He began hinting at his frustrations in August, when he told the House Judiciary Committee he regrets not being more outspoken about "government pressure" to remove content related to COVID-19. Zuckerberg said Biden administration officials "repeatedly pressured" Meta to "censor" content in 2021, and he vowed to push back should something similar happen again.

He echoed those comments last week on an episode of "The Joe Rogan Experience," claiming Biden administration officials would "scream" and "curse" at his employees when they disagreed with the government's takedown requests for pandemic-related content.

While he was not publicly involved with the 2024 election, Zuckerberg was one of many tech executives to meet with Trump following his reelection, and his company shelled out $1 million for the president-elect's inaugural fund.

These moves, combined with the leadership and DEI changes, were quickly lambasted by Democrats and other tech observers as capitulations to Trump just weeks before his return to the Oval Office.