
Meta Congratulates Itself for Trying to Moderate Misinformation in Australia

The big tech behemoth is desperate for you to know that it has done the absolute bare minimum on purging its platforms of COVID-19 misinformation.
Mark Zuckerberg, chief executive at Meta, waits to begin a joint hearing of the Senate Judiciary and Commerce Committees in 2018. Photo by Andrew Harrer / Bloomberg via Getty Images.

On Monday, Meta told the world it had done a pretty good job delivering on promises to better moderate misinformation in Australia, just weeks after it was revealed the tech behemoth deliberately sought to unleash havoc on the nation’s information ecosystem at the height of the pandemic.

The claim was made as part of a recent transparency report published by DIGI—an Australian not-for-profit tech association founded by a gaggle of Silicon Valley exports—which said Meta purged more than 180,000 pieces of health-related misinformation from its platforms in the 2021 calendar year, up from the 110,000 instances it says it caught the year before.


According to Meta, more than 11 million pieces of health-related misinformation were purged from its platforms worldwide, and more than 3,000 accounts, pages and groups were removed, between the beginning of the pandemic and June last year.

In the report, the leadership team at Meta took the opportunity to congratulate themselves for establishing accurate COVID-19 resources, which the report claims were visited by more than 350 million people in the final quarter of 2021. About 10 percent of them were Australian.

The report arrives less than a month after it was revealed that Facebook deliberately tried to rain chaos on the Australian information ecosystem at the height of the COVID-19 pandemic, shutting down Facebook pages belonging to hospitals, emergency services and charities as part of its blanket “news ban” in a bid to influence new laws.

According to the Wall Street Journal, “Facebook documents and testimony filed to US and Australian authorities by whistleblowers” claimed that the social-media giant knowingly pulled the plug on Facebook pages belonging to essential health services, just as the federal government was gearing up to launch its COVID-19 vaccine program. 

The aim was to put pressure on federal legislators, who at the time were set to vote on a new policy framework that would require both Google and Facebook to pay news outlets for use of their content. 


Just five days after Facebook’s news ban, the federal government weakened Australia’s media bargaining code so dramatically that some of its toughest provisions no longer really apply to Meta.

The revelation was just the latest in a series of damning reports published by the Journal over the last seven months, adding to a tranche of other reporting and analysis from around the world.

The stories detail Meta’s long-standing refusal to come to grips with better protecting its users from nefarious actors, misinformation, and disinformation.

A study released as recently as February, for instance, found that Facebook had failed to direct users from posts promoting climate change denial to reliable climate information, despite promising to do so in May last year.

Only a few months earlier, it was reported that Meta had knowingly gone soft on drug cartels using Facebook to recruit, train and pay hitmen, and had knowingly allowed Instagram to become a platform that harms teen girls.

Looking ahead, though, Meta said it will continue its transparency efforts in Australia. In the report, the company said it would offer more transparency on how its content-ranking algorithms work, give users more control over the content they see, and offer more clarity around social and political advertising.


The company might not have a choice for much longer, though. 

In March, Australia joined efforts made by the EU and even the US to curb the effects of misinformation online, passing new laws that would allow the Australian media regulator to force Meta to share data about how it handles misinformation and disinformation.

The new laws came in response to an investigation by the media regulator, which found that 80 percent of Australian adults had been exposed to COVID-19 misinformation of some kind, and that 76 percent thought it was up to the platforms to do something about it.

Follow John on Twitter.
