AFP, Monash Uni crowdsource images to train AI to detect child abuse

Looks to ethical sourcing after Clearview AI controversy.

The Australian Federal Police and Monash University want to create an “ethically-sourced” database of images that can train artificial intelligence algorithms to detect child exploitation.

The project will see the AiLECS (AI for Law Enforcement and Community Safety) Lab try to collect at least 100,000 images from the community over the next six months.

The AiLECS Lab – which conducts ethical research into AI in policing – is calling for willing adult contributors to populate the “image bank” with photos of themselves from their childhood.

The photos will be used to “recognise the presence of children in ‘safe’ situations, to help identify ‘unsafe’ situations and potentially flag child exploitation material”, the AFP and Monash University said.

To maintain the privacy of contributors, the email addresses used to submit the images – the only identifying information collected alongside them – will be stored separately from the photos.

AiLECS Lab co-director associate professor Campbell Wilson said that the project was seeking to “build technologies that are ethically accountable and transparent”.

“To develop AI that can identify exploitative images, we need a very large number of children’s photographs in everyday ‘safe’ contexts that can train and evaluate the AI models,” he said.

“But sourcing these images from the internet is problematic when there is no way of knowing if the children in those pictures have actually consented for their photographs to be uploaded or used.”

Wilson said that machine learning models are often fed images scraped from the internet without documented consent for their use – something the AFP found out first-hand last year.

In 2020, the AFP admitted to having briefly trialled Clearview AI, a controversial facial recognition tool that allows users to search a database of images scraped from the internet.

It was one of four policing agencies in Australia – along with forces in Victoria, Queensland and South Australia – and some 2200 organisations globally reported to have used the platform.

The “limited pilot” was conducted by the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) to determine whether it could be used in child exploitation investigations.

Clearview AI was found to have breached Australia’s privacy rules last year following an investigation by the Office of the Australian Information Commissioner (OAIC).

The OAIC later found the AFP had separately failed to comply with its privacy obligations by using Clearview AI.

Last month, the UK’s Information Commissioner’s Office fined Clearview AI more than $13.3 million and ordered it to delete the data of UK residents from its systems.
