
PornHub has launched a trial of an automated tool on its UK site that aims to “dissuade” users from searching for child sexual abuse material (CSAM), Wired reports.
Since March 2022, whenever a user enters a search query containing words or phrases associated with illegal content, a special chatbot appears on the platform. It interrupts the attempted CSAM search and urges the user to seek help with their online behavior. During the first 30 days of testing, the tool was triggered 173,904 times.
Susie Hargreaves, chief executive of the Internet Watch Foundation (IWF), says the aim is to prevent the problem before it happens.
“People searching for child sexual abuse material need to stop and check their behavior,” she said.
The chatbot was developed jointly by the Lucy Faithfull Foundation and the IWF. The system was trained to recognize any of 28,000 terms associated with illegal content, as well as millions of combinations of them; a rough sketch of how such matching might work is shown below.
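Neither organization has published implementation details, so the following is purely an illustrative sketch of how a query could be matched against a blocklist of single terms and multi-word combinations. The term lists, function names, and matching strategy are all assumptions, not the actual IWF / Lucy Faithfull implementation.

```python
# Hypothetical sketch of query-triggered intervention matching.
# The real system is not public; the term lists, names, and matching
# strategy here are illustrative assumptions only.
import re

BLOCKED_TERMS = {"example term", "another term"}   # stand-ins for ~28,000 terms
BLOCKED_COMBINATIONS = [{"word1", "word2"}]        # stand-ins for term combinations

def normalize(query: str) -> str:
    """Lowercase and collapse punctuation/whitespace to defeat trivial evasion."""
    return re.sub(r"[^a-z0-9 ]+", " ", query.lower()).strip()

def should_trigger_chatbot(query: str) -> bool:
    q = normalize(query)
    words = set(q.split())
    # Direct substring match against single blocked terms.
    if any(term in q for term in BLOCKED_TERMS):
        return True
    # Match multi-word combinations appearing anywhere in the query.
    return any(combo <= words for combo in BLOCKED_COMBINATIONS)

if should_trigger_chatbot("some user search"):
    print("Interrupt the search and launch the deterrence chatbot")
```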
When triggered, the tool asks users a series of questions and notifies them that the content they are searching for may be illegal. The chatbot also offers “confidential and unbiased” support. Those who click the “help” prompt receive details about the work of the Lucy Faithfull Foundation, including the organization’s phone number and email address.
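Again as an illustration only, the interstitial flow described above could be modeled as a short scripted sequence. The exact dialogue is not publicly documented, so every string and step below is an assumption based on the article’s description.

```python
# Hypothetical sketch of the chatbot's interstitial flow, based only on the
# behavior described in the article; all strings and structure are assumptions.
WARNING = "The material you are searching for may be illegal."
QUESTIONS = [
    "Are you aware that searching for this content may be a criminal offence?",
    "Would you like confidential support with your online behavior?",
]
SUPPORT_INFO = (
    "Lucy Faithfull Foundation / Stop It Now helpline: "
    "see stopitnow.org.uk for phone and email contacts."
)

def run_intervention(user_wants_help: bool) -> None:
    print(WARNING)
    for question in QUESTIONS:
        print(question)
    if user_wants_help:
        # The user clicked the "help" prompt: show contact details.
        print(SUPPORT_INFO)

run_intervention(user_wants_help=True)
```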
“The goal is to interrupt the person or deter their desire to search for CSAM material, and to do it within just a few clicks,” explained IWF Chief Technology Officer Dan Sexton.
According to Donald Findlater, who runs the Stop It Now helpline, interacting with the chatbot is straightforward and perhaps more engaging. During the March test, 158 people clicked through to the support site. He noted that while the number is “modest,” those people have taken an important step.
Representatives of the organizations said that PornHub “volunteered” to participate in the project and was not paid to test the system. The chatbot trial on PornHub’s UK site is expected to run for another year before it is evaluated by outside researchers.
“The IWF tool is another layer of protection, informing users that there is no illegal content on the platform and directing them to Stop It Now to change their behavior,” said a PornHub spokesperson.
As a reminder, in June Australian authorities asked citizens to share their childhood photos to help train an AI to identify abuse in images.
In July, the UK government backed the idea of scanning users’ smartphones for CSAM.
In August 2021, Apple announced plans to use AI to scan iPhone photos for child abuse material.