
The Australian Federal Police (AFP) and Monash University are asking people to share their childhood photos to help teach an artificial intelligence system to detect abuse in images.
Monash University experts and the AFP are calling for people to contribute to a world first ethically-sourced and managed image bank for research to combat child exploitation. https://t.co/VKBbGXLeEV
— AFP (@AusFedPolice) June 3, 2022
The researchers are collecting images of people under the age of 17 in safe situations. The photographs should not contain nudity, they say, even in relatively innocuous scenarios such as a baby taking a bath.
The images will be assembled into a dataset for training an AI model to distinguish between a minor in a normal environment and one in an exploited, unsafe situation. The researchers believe the software will help law enforcement officers quickly identify child sexual abuse material among the thousands of photographs under examination, sparing them manual review of every file.
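The article does not describe the model itself, but the task as stated (sorting images into "safe" and "unsafe" classes) is a standard binary image classification problem. The sketch below shows the general approach in PyTorch, fine-tuning a pretrained backbone; the photos/safe and photos/unsafe directory layout and all training settings are hypothetical, and this is not the AiLECS team's actual system.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: photos/safe/... and photos/unsafe/...
dataset = ImageFolder("photos", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Fine-tune a pretrained ResNet-18 for two classes: safe vs. unsafe.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice a production system would also need threshold tuning and human review of the model's flags; the point of the dataset drive is precisely to supply the "safe" class with consent-based examples.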
According to AFP Leading Senior Constable Janis Dalins, artificial intelligence has the potential to identify victims and uncover illegal material not previously known to officers.
“In 2021, the Australian Centre to Counter Child Exploitation received over 33,000 reports of online child exploitation, and each report can contain large volumes of images and videos,” he said.
Dalins added that reviewing such material is time-consuming, and that manual analysis can cause psychological stress for investigators.
Crowdsourcing the photo collection will allow the researchers to build an unbiased, consent-based dataset, they say.
“Obtaining images from the Internet is problematic because there is no way to know whether the children in these photographs have actually consented to the uploading of their images or their use for research,” said Campbell Wilson, AiLECS co-director and associate professor at Monash University.
The My Pictures Matter crowdsourcing campaign is open to adults who consent to the use of their photographs. Contributors also need to provide an email address.
Project lead and lab researcher Nina Lewis emphasized that no other information identifying contributors would be collected, and that email addresses are stored in a database separate from the images.
“The images used by the researchers cannot reveal any personal information about the people who are depicted,” she said.
Contributors will be provided with updates at each stage of the project and can ask to have their images removed from the dataset if they wish.
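The article does not detail how this separation is implemented. The sketch below shows one plausible design, with a random opaque token as the only link between a contacts database and an image-metadata database, so the image store alone reveals nothing about who submitted a photo; all names, file paths, and the schema here are hypothetical, not the project's actual system.

```python
import sqlite3
import uuid

# Two separate stores: contact details never sit next to image records.
contacts = sqlite3.connect("contacts.db")  # email addresses only
images = sqlite3.connect("images.db")      # image metadata only

contacts.execute(
    "CREATE TABLE IF NOT EXISTS contributors (token TEXT PRIMARY KEY, email TEXT)")
images.execute(
    "CREATE TABLE IF NOT EXISTS submissions (token TEXT, image_path TEXT)")

def register_submission(email: str, image_path: str) -> str:
    # A random token is the only link between the two databases.
    token = uuid.uuid4().hex
    contacts.execute("INSERT INTO contributors VALUES (?, ?)", (token, email))
    images.execute("INSERT INTO submissions VALUES (?, ?)", (token, image_path))
    contacts.commit()
    images.commit()
    return token

def forget_contributor(token: str) -> None:
    # Honour a removal request: delete both the contact record
    # and the associated image entries.
    contacts.execute("DELETE FROM contributors WHERE token = ?", (token,))
    images.execute("DELETE FROM submissions WHERE token = ?", (token,))
    contacts.commit()
    images.commit()
```

A design like this also makes the promised removal requests straightforward: looking up a contributor's token is enough to purge both records.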
In November 2021, Australian authorities ordered Clearview AI to stop collecting citizens’ data.
In August 2021, Apple announced plans to roll out a tool for scanning user photos for child abuse material in iOS, iPadOS, and macOS. The company later delayed the feature indefinitely.