Twitter has outlined its processes for addressing disinformation and misinformation in Australia, but has stopped publishing granular data on how often it removed content or handed customer data to authorities.
The platform defended a suite of “new products and policies” for “protecting user rights and safety” in a report, published last week, on its compliance with a voluntary industry code for removing false and harmful content in Australia.
Twitter’s report was unlike those published by other signatories to the Australian Code of Practice on Misinformation and Disinformation: its descriptions of policies were not accompanied by statistics on enforcement actions.
Meta reported removing "91,000 pieces of content across Facebook and Instagram in Australia for violating our Harmful Health Misinformation policies,” while Microsoft’s Bing made “defensive search interventions” in response to 45,100 Ukraine-related queries during the 2022 calendar year.
Twitter’s omission of enforcement actions from the industry code report follows its April announcement that it would stop publishing biannual transparency reports with this data.
Twitter published global takedown numbers for last year but stopped breaking them down by country.
In contrast, Google, Meta, and TikTok released transparency reports last month revealing they complied with thousands of Australian government requests to disclose their customers’ data and remove content.
“In Q4 2022, the company embarked on transformational change…Since the technical reporting period also covers a substantial window prior, some examples and data may be unavailable at this time or less relevant given the new foundation forward for the Twitter service," it wrote.
While the Communications Minister and eSafety Commissioner have said Twitter 2.0 has failed to cooperate with Australia’s regulations for detecting and removing harmful content, the report listed “new approaches” aimed at “moving beyond a leave-up or take-down binary.”
“These include but are not limited to community notes, the Twitter Blue subscription, and our open-sourced algorithms,” the report read.
Appending users’ corrections to misleading material, or disclosing the processes that lead to the amplification of misinformation and disinformation, is not sufficient under Australia’s content moderation regime; legislation such as the Online Safety Act 2021 and the Telecommunications Act 1997 instead compels companies to block or remove content that agencies deem harmful.
Twitter, which gutted its local content moderation and complaint response teams late last year, said Australian users were volunteering to provide annotations to misleading tweets.
“Contributors in Australia can leave notes on any tweet and if enough contributors from different points of view rate that as helpful, the note will be publicly shown on a tweet," it wrote.
“Community notes were made visible around the world, including in Australia, and Twitter started admitting contributors from Australia in January 2023.”
The report includes no case studies, but community notes have made a number of high-profile corrections in Australia; Energy Minister Chris Bowen has been called out for posting inaccurate depictions of nuclear waste disposal, and Minister for Housing Colin Brooks for misrepresenting other parties’ policies.
“For community notes, Twitter created the ability for tweet authors to request an additional review if they disagree that a community note is ‘helpful’ or provides important context to their tweet," the report stated.
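The rule the report describes is simple to caricature in code. The snippet below is a minimal, hypothetical sketch of the "shown only if raters from different points of view agree it is helpful" idea; the function names, thresholds and viewpoint labels are invented for illustration and are not Twitter's actual Community Notes scoring code, which Twitter's public documentation describes as inferring viewpoints from raters' past agreement rather than from explicit labels.

```python
# Minimal sketch (not Twitter's actual Community Notes code) of the rule the
# report describes: a note becomes public only when enough raters find it
# helpful AND those raters span more than one viewpoint cluster.

from dataclasses import dataclass


@dataclass
class Rating:
    rater_viewpoint: str   # crude stand-in for the "different points of view" signal
    helpful: bool


def note_should_be_shown(ratings: list[Rating],
                         min_helpful: int = 5,
                         min_viewpoints: int = 2) -> bool:
    """Show the note only if enough helpful ratings come from enough distinct viewpoints."""
    helpful = [r for r in ratings if r.helpful]
    viewpoints = {r.rater_viewpoint for r in helpful}
    return len(helpful) >= min_helpful and len(viewpoints) >= min_viewpoints


# Example: five helpful ratings spanning two viewpoint clusters clears both default thresholds.
ratings = [Rating("cluster_a", True)] * 4 + [Rating("cluster_b", True), Rating("cluster_b", False)]
print(note_should_be_shown(ratings))  # True
```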
The report also stated that Twitter’s $13-a-month ‘Twitter Blue’, which amplifies subscribers’ tweets, helped fight disinformation.
“In December 2022 Twitter Blue subscriptions became available in Australia. Twitter Blue is one of a range of scalable measures to elevate quality conversations," it said.
“Tweets from verified users will be prioritised in places — helping to fight scams and spam.”
Crikey’s associate editor Cam Wilson cast doubt on the initiative, saying it allowed influence over political discourse to be purchased rather than applying any legitimate checks and balances to it.
He noted that previously banned accounts that had returned to the platform had purchased blue ticks, including the account for Australia’s National Socialist Network.
Twitter also said that by open-sourcing sections of its content ranking algorithm, it had fulfilled the industry code’s obligation to allow users to access information about its “recommender systems and have options relating to content suggested by recommender systems.”
“On GitHub, users can find two new repositories containing the source code for many parts of Twitter, including our recommendations algorithm, which controls the tweets users see on the 'For You' timeline," the report stated.
“We shared more information on our recommendation algorithm in a post on our engineering blog."
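For readers unfamiliar with the term, a “recommender system” of the kind those repositories cover is typically a retrieve-then-rank pipeline: pull a large pool of candidate tweets, then order them with scoring models. The sketch below is a deliberately simplified, hypothetical illustration of that general shape; it is not code from, and does not reflect the specifics of, the repositories Twitter published.

```python
# Hypothetical, heavily simplified sketch of the retrieve-then-rank shape a
# "For You"-style recommender takes. Field names, limits and scores are
# invented for illustration only.

from dataclasses import dataclass


@dataclass
class Tweet:
    tweet_id: int
    author_followed: bool          # "in-network" vs "out-of-network" candidate
    predicted_engagement: float    # stand-in for model-predicted scores


def retrieve_candidates(all_tweets: list[Tweet], limit: int = 1500) -> list[Tweet]:
    """Gather a pool of candidate tweets from in-network and out-of-network sources."""
    return all_tweets[:limit]


def rank(candidates: list[Tweet]) -> list[Tweet]:
    """Order candidates by a scoring function; real systems use ML models here."""
    return sorted(candidates, key=lambda t: t.predicted_engagement, reverse=True)


def for_you_timeline(all_tweets: list[Tweet], page_size: int = 10) -> list[Tweet]:
    """Return the top-scoring candidates as a timeline page."""
    return rank(retrieve_candidates(all_tweets))[:page_size]


# Example usage with toy data: the ten highest-scoring candidates come back first.
pool = [Tweet(i, i % 2 == 0, predicted_engagement=1.0 / (i + 1)) for i in range(50)]
print([t.tweet_id for t in for_you_timeline(pool)])
```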