
The main thing
- Deepfake is a method of creating photos and videos using deep learning algorithms. The technology makes it possible to replace a person’s face in existing media with someone else’s.
- At least 85,000 fakes created using AI have been found on the Internet. According to experts, their number doubles every six months.
- In addition to misinformation, intimidation, and harassment, deepfakes are used in entertainment, data synthesis, and voice restoration.
What is a deepfake?
Deepfake is a technology for synthesizing media in which the face of a person in an existing photo or video is replaced with the face of another person. Fakes are produced using artificial intelligence methods, machine learning, and neural networks.
The name of the technology is a combination of the English terms deep learning and fake.
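The classic face-swap technique (popularized by the original deepfakes software) trains a single shared encoder with a separate decoder per identity: a face of person A is compressed into a latent code, then reconstructed with person B’s decoder. The sketch below is a deliberately minimal linear illustration of that architecture in Python with NumPy; the dimensions and random weights are toy values, not a real trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 64, 16          # flattened face size and latent size (toy values)

# One shared encoder, one decoder per identity -- the core deepfake idea.
W_enc = rng.normal(scale=0.1, size=(H, D))
W_dec_a = rng.normal(scale=0.1, size=(D, H))   # would be trained on faces of person A
W_dec_b = rng.normal(scale=0.1, size=(D, H))   # would be trained on faces of person B

def encode(face):
    # Compress the face to a latent code capturing pose and expression.
    return W_enc @ face

def swap(face_a):
    """Encode a face of A, then reconstruct it with B's decoder."""
    return W_dec_b @ encode(face_a)

face_a = rng.random(D)   # stand-in for a flattened face crop
fake = swap(face_a)      # same pose/expression, rendered as identity B
print(fake.shape)
```

In a real system both decoders are deep convolutional networks trained jointly, so the shared latent space learns pose and expression while each decoder learns one identity’s appearance.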
Why are deepfakes created?
Many of the deepfakes created are pornographic. By the end of 2020, Sensity had found 85,000 fakes on the Internet created using AI techniques. 93% of the materials were pornographic, the vast majority of which depicted the faces of famous women.
New methods allow unskilled people to make deepfakes from just a few photos. According to experts, the amount of such content doubles every six months. Fake videos are likely to spread beyond the celebrity world to fuel revenge porn.
Deepfakes are also used for informational attacks, creating parodies and satire.
In 2018, American director Jordan Peele and BuzzFeed published an alleged address by former US President Barack Obama, in which he called Donald Trump an “asshole”. The video was created using the FaceApp application and the Adobe After Effects graphics editor. The director and journalists wanted to demonstrate what fake news might look like in the future.
In 2022, after Russia’s full-scale invasion of Ukraine, a fake video of President Volodymyr Zelensky circulated on social media, in which the head of state “calls on the people to surrender.” Users quickly identified the fake, and Zelensky himself recorded a rebuttal.
In May of the same year, scammers spread Elon Musk’s deepfake, in which he “calls for investment” in an obvious scam. The YouTube channel that posted the video had over 100,000 subscribers, and before the account was deleted, the video had been viewed over 90,000 times. How many people fell for the scam is unknown.
There are many applications that create deepfakes for entertainment. In early 2020, the Reface app gained wide popularity; it uses deepfake technology to superimpose virtually any face onto a wide range of short videos and GIF animations.
What can be faked?
Deepfake technology can create not only convincing videos but also entirely fictional photos from scratch. In 2019, a certain Maisie Kinsley set up profiles on LinkedIn and Twitter in which she identified herself as a Bloomberg journalist. The “journalist” contacted Tesla employees and elicited various information.
It later turned out to be a deepfake: the profiles contained no convincing evidence linking her to the publication, and the profile photo was clearly generated by artificial intelligence.
In 2021, Cryplogger HUB ran an experiment by creating a virtual character, N. G. Adamchuk, who “wrote” for the cryptocurrency “market talk” platform. The texts were generated with the GPT-2 large language model, and the avatar was created with the service This Person Doesn’t Exist.
Audio can also be spoofed to create “voice clones” of public figures. In January 2020, scammers in the UAE faked the voice of the head of a large company and convinced a bank employee to transfer $35 million to their accounts.
A similar incident occurred in 2019 at a British energy company, where scammers managed to steal about $243,000 by impersonating the firm’s director with a fake voice.
Who creates deepfakes?
Deepfakes can be created by academic and commercial researchers, machine learning engineers, amateur enthusiasts, visual effects studios, and filmmakers. Also, thanks to popular applications like Reface or FaceApp, any smartphone owner can take a fake photo or video.
Governments can also use the technology, for example, as part of online strategies to discredit and undermine extremist groups or to reach out to targeted individuals.
In 2019, journalists discovered the LinkedIn profile of a certain Katie Jones, which turned out to be a deepfake. The US National Counterintelligence and Security Center then reported that foreign spies regularly use fake social media profiles to spy on US targets; in particular, the agency accused China of conducting “massive” espionage through LinkedIn.
In 2022, South Korean engineers created a deepfake of presidential candidate Yoon Seok-yeol to attract young voters ahead of the March 9 elections.
What technologies are used in the manufacture of deepfakes?
Most fakes are created on high-end desktops with powerful graphics cards or in the cloud. Experience is also required, not least for retouching finished videos and correcting visual defects.
However, many tools are now available to help people make deepfakes both in the cloud and directly on their smartphone. To the aforementioned Reface and FaceApp, you can add Zao, which superimposes users’ faces on movie and TV characters.
How to recognize a deepfake?
In most cases, fake photos and videos are of poor quality. A deepfake can be recognized by unblinking eyes, poor lip syncing, and patchy skin tone. Flickering and pixelation may appear along the edges of transposed faces, and small details like hair are especially difficult to render convincingly.
Badly transferred jewelry and teeth can also indicate a fake, as can inconsistent lighting and reflections on the iris of the eyes.
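One of the earliest automated detection heuristics exploited the unblinking-eyes artifact: the eye aspect ratio (EAR), computed from six facial landmarks around each eye, drops sharply during a blink, so a video in which it never drops is suspicious. A simplified sketch of that check (the landmark coordinates below are made up for illustration; a real pipeline would extract them frame by frame with a face-landmark detector):

```python
import math

def eye_aspect_ratio(pts):
    """pts: six (x, y) eye landmarks ordered p1..p6 (corners, then lids)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Vertical lid distances over the horizontal eye width.
    return (dist(pts[1], pts[5]) + dist(pts[2], pts[4])) / (2.0 * dist(pts[0], pts[3]))

# Made-up landmarks: an open eye and a nearly closed one.
open_eye   = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]

ears = [eye_aspect_ratio(e) for e in (open_eye, closed_eye)]
blinks = sum(ear < 0.2 for ear in ears)   # 0.2 is a commonly used blink threshold
print(ears, blinks)
```

Modern deepfake generators have learned to blink, so detectors now combine many such cues (lighting, skin texture, head-pose consistency) rather than relying on any single artifact.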
Big tech corporations are fighting against deepfakes. In April 2022, Adobe, Microsoft, Intel, Twitter, Sony, Nikon, BBC and ARM joined the C2PA alliance to detect fake photos and videos online.
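The C2PA approach is based on content provenance rather than artifact hunting: media carries a cryptographically signed manifest describing its origin, and any later tampering with the bytes breaks the verification. The sketch below is a greatly simplified illustration of that idea, using an HMAC in place of C2PA’s real certificate-based signatures; the manifest format here is invented for the example:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"   # stand-in for a publisher's private signing key

def make_manifest(media: bytes, author: str) -> dict:
    """Bind the media bytes to an authorship claim and sign the claim."""
    claim = {"author": author, "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify(media: bytes, manifest: dict) -> bool:
    """Check both the signature and that the media bytes still match the claim."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    return ok_sig and claim["sha256"] == hashlib.sha256(media).hexdigest()

photo = b"\x89PNG...original pixels"
manifest = make_manifest(photo, "BBC")
print(verify(photo, manifest))                      # untouched original verifies
print(verify(b"\x89PNG...swapped face", manifest))  # altered bytes fail
```

The real C2PA specification uses X.509 certificates and embeds the signed manifest inside the media file itself, but the principle is the same: authenticity is proven by the publisher, not guessed by a detector.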
In 2020, ahead of the US elections, Facebook banned deepfake videos that could mislead users.
In May 2022, Google restricted the ability to train deepfake models in its Colab cloud environment.
What is the danger of deepfakes?
In addition to disinformation, harassment, intimidation and humiliation, deepfakes can undermine public confidence in specific events.
According to Newcastle University professor and Internet law expert Lilian Edwards, the problem lies not so much in the fakes themselves as in the denial of real facts.
As technology spreads, deepfakes can pose a threat to justice, where faked events can be passed off as real.
They also pose a threat to personal safety. Deepfakes are already able to imitate biometric data and deceive face, voice or gait recognition systems.
How can deepfakes be useful?
In addition to threats, deepfakes can also be useful. The technology is actively used for entertainment. For example, London-based startup Flawless developed artificial intelligence for synchronizing actors’ lip movements with the audio track when dubbing films into different languages.
In July 2021, a documentary about celebrity chef Anthony Bourdain, who died in 2018, used a deepfake to voice his quotes.
In addition, the technology can help people regain a voice lost due to illness.
Deepfakes are also used to create synthetic datasets. This eliminates the need for engineers to collect photographs of real people and obtain permission to use them.