
Interactive and compositional deepfakes are two growing classes of information security threats, according to Microsoft Chief Scientific Officer Eric Horvitz.
According to Horvitz, the advancing capabilities of discriminative and generative AI methods are approaching a critical point.
“Advances provide unprecedented tools that state and non-state actors can use to create and disseminate compelling disinformation,” Horvitz wrote.
He traces the problem to the methodology of generative adversarial networks (GANs), which pit two competing components against each other: a generator that creates content and a discriminator that evaluates how convincing it is.
Horvitz added that, over time, the generator learns to fool the discriminator.
“With this process behind deepfakes, neither pattern recognition methods nor humans will be able to reliably recognize fakes,” he wrote.
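The adversarial loop described above can be caricatured in a few lines of Python. This is a toy hill-climbing sketch, not a real GAN (which trains two neural networks against each other with gradient descent); all names and values here are illustrative:

```python
import random

random.seed(42)  # deterministic for illustration

REAL_MEAN = 5.0  # "real" content clusters around this value

def discriminator(sample: float) -> bool:
    """Accept a sample as 'real' if it lies close to real data."""
    return abs(sample - REAL_MEAN) < 0.5

def train_generator(steps: int = 1000) -> float:
    """Nudge the generator's output, keeping only changes that
    bring it closer to fooling the discriminator."""
    gen = 0.0  # the generator starts far from the real distribution
    for _ in range(steps):
        candidate = gen + random.uniform(-0.1, 0.1)
        # keep the nudge only if it improves against the discriminator
        if abs(candidate - REAL_MEAN) < abs(gen - REAL_MEAN):
            gen = candidate
    return gen

fake = train_generator()
print(discriminator(fake))  # True: the fake now passes as real
```

The point of the sketch is the feedback loop: every accepted step makes the fake harder to distinguish from the real thing, which is exactly the dynamic Horvitz warns will eventually defeat both automated detectors and human judgment.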
He emphasized that, until now, AI fakes have been created and distributed as one-off, stand-alone artifacts.
“However, we can now expect new forms of compelling deepfakes to emerge that go beyond fixed singleton productions,” he said.
Horvitz suggested several ways to prepare for and protect against the expected rise in deepfakes. Among them:
- raising standards for journalism and reporting;
- improving the public's media literacy;
- introducing new authentication protocols to confirm identity;
- developing standards for verifying the provenance of content;
- continuous monitoring.
“It’s important to be vigilant about interactive and compositional deepfakes,” Horvitz concluded.
As a reminder, in September scientists presented a method for detecting audio deepfakes that measures the differences between samples of organic and synthetic speech.
In August, hackers used deepfakes in a fake-listing scam targeting Binance. The attackers impersonated the exchange's CCO, Patrick Hillmann, in a series of video calls with representatives of cryptocurrency projects.
In June, the FBI warned of a growing use of deepfakes in online job interviews.
