
Researchers from Stanford University, Oregon State University, and Google have found that manipulating photos at the pixel level does not protect against facial recognition systems.
The researchers tested two data poisoning programs, Fawkes and LowKey, which subtly alter images at the pixel level. Such modifications are invisible to the eye but can confuse face recognition systems.
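As a rough illustration of the idea (a minimal sketch, not code from Fawkes or LowKey, which optimize their perturbations against a face-recognition feature extractor), a cloaking tool adds a tiny, bounded change to every pixel; the change is too small to see but alters what a model reads from the image:

```python
import numpy as np

def perturb(image, epsilon=8 / 255):
    # Add a perturbation bounded by epsilon in the L-infinity norm. Real
    # cloaking tools optimize this delta against a recognizer; a random
    # delta here only illustrates the magnitude involved.
    delta = np.random.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image + delta, 0.0, 1.0)

img = np.random.rand(112, 112, 3)      # pixel values in [0, 1]
cloaked = perturb(img)
print(np.max(np.abs(cloaked - img)))   # at most ~0.031 per pixel: invisible to the eye
```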
Both tools are freely available, and that, the authors say, is their main weakness: developers of face recognition systems can use them to teach their models to ignore the "poisoning".
"Adaptive model training with black-box access [to the photo modification tools] can immediately produce a reliable model that is resistant to poisoning," the scientists said.
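A minimal sketch of that adaptive defense, under the assumption that the trainer can query the public cloaking tool as a black box (the `fake_cloak` and `train_step` stand-ins below are hypothetical, not the paper's or the tools' actual code): the trainer runs every training image through the tool and learns from both the clean and the cloaked version, so the perturbation stops mattering.

```python
import numpy as np

def adaptive_train(train_step, images, labels, cloak_tool, epochs=5):
    # Train on each clean image and on its black-box cloaked twin, so the
    # recognizer learns features the perturbation does not disturb.
    for _ in range(epochs):
        for img, label in zip(images, labels):
            train_step(img, label)
            train_step(cloak_tool(img), label)

# Demo stand-ins: a random epsilon-bounded "cloak" and a train step that
# merely records the labels it saw.
def fake_cloak(img, eps=8 / 255):
    return np.clip(img + np.random.uniform(-eps, eps, img.shape), 0.0, 1.0)

seen = []
images = [np.random.rand(112, 112, 3) for _ in range(4)]
labels = list(range(4))
adaptive_train(lambda x, y: seen.append(y), images, labels, fake_cloak, epochs=1)
print(len(seen))  # 8: each image contributed a clean and a cloaked example
```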
According to the authors, both tools performed poorly against versions of facial recognition software released within a year of the tools' publication online.
The researchers also warn that improved identification algorithms could be developed that ignore such changes to photographs from the outset.
"There is an even simpler defensive strategy: model makers can simply wait for better facial recognition systems that are no longer vulnerable to these specific attacks," the paper says.
The authors of the study emphasized that modifying data to prevent biometric identification not only fails to provide security but also creates a false sense of it. This can harm users who would otherwise not post their photos online.
The researchers concluded that the only way to protect user privacy on the Internet is to pass legislation restricting the use of facial recognition systems.
Recall that in May 2021, engineers introduced the free Fawkes and LowKey tools to protect against biometric identification algorithms.
In April, the startup DoNotPay launched Photo Ninja, a service that protects images from facial recognition systems.
