
American scientists have warned of growing efforts by “enemy states” to create deepfakes, The Register reports.
According to researchers at Northwestern University and the Brookings Institution, many tools for creating AI fakes are available today.
“The ease with which deepfakes can be developed for specific people and purposes, and their rapid dissemination, […] point to a world in which all states and non-state actors can use deepfakes in security and intelligence operations,” the authors of the report said.
According to the scientists, AI generators like Stable Diffusion are already being adapted to create fake videos. With each iteration of the technology, deepfakes become more realistic and convincing, they added.
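To illustrate how low the barrier to entry is, here is a minimal sketch of driving such a generator programmatically. It assumes the open-source Hugging Face diffusers library and a publicly hosted Stable Diffusion checkpoint; the model id and prompt are illustrative, not taken from the report:

```python
# Minimal sketch: text-to-image with an off-the-shelf Stable Diffusion
# checkpoint via the Hugging Face diffusers library (assumed installed).
import torch
from diffusers import StableDiffusionPipeline

# Illustrative model id; any compatible public checkpoint would do.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A single text prompt is enough to produce a photorealistic image.
image = pipe("a press photo of a politician shaking hands").images[0]
image.save("generated.png")
```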
According to the report, foreign “adversaries” will use such systems to launch disinformation campaigns and spread fake news. In this way, they could sow confusion, spread propaganda, and undermine trust online, the researchers say.
The researchers also expect “enemy states” to use AI in military and intelligence operations as the technology improves.
The researchers called on governments around the world to introduce policies regulating the use of AI-generated fakes.
“In the long term, we need a global agreement on the use of deepfakes by defense and intelligence agencies,” said one of the co-authors of the study.
However, this measure alone does not guarantee security. The experts believe that developing international rules may be difficult because of “nation-states with veto power.”
“Even if such an agreement is reached, some countries are likely to violate it. Therefore, such an agreement should include a mechanism of sanctions to deter and punish offenders,” the researchers say.
Developing deepfake detection technologies is also not enough, the scientists said. According to them, it will become a cat-and-mouse game like the one already seen with malware.
“When cybersecurity firms discover a new type of malware and develop signatures to detect it, malware developers ‘tweak’ it to bypass the detector,” the report says.
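As a toy illustration of that cycle, consider a signature-based scanner: a single-byte change by the attacker is enough to evade it, forcing defenders to ship a new signature. The byte patterns and family names below are invented purely for the example:

```python
# Toy signature scanner illustrating the detection-evasion cycle.
# Byte patterns and family names are invented for this example.
KNOWN_SIGNATURES = {
    b"\xde\xad\xbe\xef": "example-family-a",
    b"\xca\xfe\xba\xbe": "example-family-b",
}

def scan(payload: bytes) -> str | None:
    """Return the name of the first matching signature, or None."""
    for signature, family in KNOWN_SIGNATURES.items():
        if signature in payload:
            return family
    return None

original = b"header \xde\xad\xbe\xef payload"
tweaked = b"header \xde\xad\xbe\xee payload"  # one-byte "tuning"

print(scan(original))  # example-family-a -> detected
print(scan(tweaked))   # None -> evades; a new signature is needed
```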
The scientists are sure that sooner or later the detection-evasion cycle will reach a point where deepfake detectors can no longer keep up with the flow of fakes:
“We may find ourselves in a situation where detection becomes impossible or requires too much computing power to perform quickly and at scale.”
As a reminder, in January researchers from OpenAI warned of the growing threat of language models being used to spread disinformation.
In December 2022, the Chinese regulator banned the creation and distribution of deepfakes that “threaten national security.” The rules came into force on January 10, 2023.