Gov-issued takedowns spiked after new eSafety laws came into force

De-indexed sites appear to make up the bulk of removed content.

A spike in the amount of online content Australian authorities removed last year coincided with the chief content moderator gaining new takedown powers. 

Google and TikTok reported a sizeable increase in content items and accounts removed in response to Australian agencies’ directions between 2021 and 2022, with URLs removed from Google Search taking up the lion’s share. 

The impact of the Online Safety Act coming into force on January 23, 2022 is not yet as clear for other digital services providers.

Twitter has stopped publishing country-specific takedown data; data on takedowns issued to Meta has not yet been released for the second half of 2022; and Microsoft’s transparency reports are not comparable to the other services’ because it defines takedown requests and notices differently.

As well as expanding the scope of content that the eSafety Commissioner can remove, the new laws brought search engines into the crosshairs of the online content scheme.

Previously, Google and Microsoft Bing had ignored some ‘informal’ requests from Commissioner Julie Inman Grant to voluntarily de-index sites she said were harmful; at the time, such requests could not be followed up with mandatory notices.

In 2019, for example, both search engines refused Grant’s calls to delist a forum, telling her that the site was harmful but not illegal.

In contrast, Australian authorities’ directions resulted in the delisting of 3840 URLs from Google Search last year, according to the company’s latest transparency report; this was 347 percent more than the 859 URLs removed in 2021.

Google web search removals

Six of the eight examples of agency-issued takedowns that Google published in its transparency report across 2022 were eSafety requests.

The other two case studies were a copyright-related court order and a request from the Australian Securities and Investments Commission (ASIC), which has recently ramped up its efforts to remove illegal financial content such as crypto scams and unlicensed financial services providers.

The examples included eSafety getting six sites that contained videos of the Buffalo terrorist attack delisted, as well as 1278 URLs that violated its image-based abuse scheme for non-consensually shared intimate images. 

According to its 2021-22 annual report [pdf], eSafety had image-based abuse content delisted from search engines when notices to remove it from hosting services providers or the websites themselves weren't complied with. 

eSafety said that during the reporting period, 88 percent of attempts to remove image-based abuse content from “246 different platforms and services” were successful, and that “[if] unable to effect removal of the material, we [took] steps to limit the discoverability of the material, typically by having links to it removed from search engine results.”

Google’s transparency report also included examples of eSafety issuing takedowns made under the new adult cyber abuse scheme that the Online Safety Act created. 

The new scheme enables Grant to order the removal of “menacing, harassing or offensive” content directed at adults; previously, only people under 18 could ask her to take down such content by complaining through the cyberbullying scheme.

Although Google’s transparency report did not name the agencies behind the 583 takedown requests and notices issued in 2022, it broke down the number sent by different categories of regulatory and enforcement agencies.

The number sent from an ‘Information Communications Authority’ rose from two in 2021 to 165 in 2022. It is possible this category refers to the Australian Communications and Media Authority (ACMA); however, ACMA typically blocks URLs rather than de-indexing them from search engines or removing other individual content items.

Other categories listed included police (10 in 2022, up from two in 2021); court orders directed at third parties (33 in 2022, down from 68 in 2021); and government officials (21 in 2022, up from eight in 2021).

While requests to remove URLs from Google Search jumped, the number of other types of content items removed from Google in 2022 decreased compared to 2021; the number of Gmail accounts removed, for example, fell from 747 to 227, and removed YouTube videos dropped from 371 to 204.

Another notable change between Google’s 2021 and 2022 transparency reports is the reasons cited for removals.

Between 2021 and 2022, the number of content items that authorities removed for reasons related to “obscenity/nudity” jumped from 68 to 2654, while the number related to “privacy/security” increased from 231 to 1484. 

TikTok 

TikTok’s takedown requests soared by 488 percent, from 94 in 2021 to 553 in 2022.

The 2022 requests resulted in the removal of 728 accounts and 994 other content items.

While the expansion of content that the eSafety Commissioner could remove from the platform during 2022 likely contributed to the spike in the number of government-issued takedowns, TikTok’s user growth during the period may have also played a role. 

It is also difficult to quantify the Online Safety Act’s impact on TikTok’s increase in removals because, unlike Google, TikTok does not publish data or case studies identifying the agencies that issue removal requests.

While the eSafety Commissioner publishes the number of removals issued under each regulatory scheme, she does not report which companies they are issued to.

Moreover, the eSafety Commissioner is not the only agency that requests content removal from TikTok. 

There is no central register of which agencies request or compel digital services providers to remove content, but some agencies have disclosed details on online removals they have facilitated. 

For example, the Australian Electoral Commission removed unauthorised political advertising from Google, TikTok and Meta [pdf] during last year’s federal election. 

Unprompted removals

The majority of content is proactively taken down by the platforms and search engines themselves rather than as a result of government directions.

In 2021, TikTok removed 12,582 videos posted by Australian users that it said included false medical information [pdf] and Google took down more than 90,000 YouTube videos posted by Australians [pdf] that violated its policies.

The stats are from reports published by the platforms’ industry association, the Digital Industry Group Inc (DIGI).

They were published in support of DIGI’s voluntary industry codes for dealing with misinformation and disinformation and to campaign against the government registering its own codes, which would be enforced by ACMA. 

Similarly, the eSafety Commissioner intends to register industry codes later this year for how platforms deal with illegal content, and is currently jostling with the platforms over what minimum requirements will be set for detecting, taking down and preventing the discoverability and amplification of the material.
