
A group of Chinese and American researchers has developed CihaNet, an algorithm for creating deepfakes that eliminates the problem of mask edges when swapping faces. This makes the fakes look more realistic.
According to the developers, CihaNet does not require large and exhaustive datasets, and training takes days instead of weeks.
For example, in their experiment the researchers used two popular celebrity image datasets and a single NVIDIA Tesla P40 GPU. This configuration was enough to train the neural network to realistically swap faces in just three days.

The new approach eliminates the need to crudely “paste” the transplanted identity onto the target video, which often produces characteristic artifacts at the boundary between the two faces. According to the paper, this removes the need for further post-processing of the video, saving significant time and resources.

To achieve these results, the researchers used a “hallucination map”. According to them, it allows the algorithm to capture context far more efficiently and blend faces at a deeper level.
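The paper does not spell out the blending step in the article above, but the general idea of a soft per-pixel map, as opposed to a hard binary mask, can be illustrated with a minimal sketch. The function name, tensor shapes, and the assumption that the map acts as a per-pixel weighting between identity and context features are illustrative, not taken from the CihaNet paper.

```python
import torch


def blend_with_soft_map(identity_feat, context_feat, soft_map):
    """Blend identity and context feature maps with a per-pixel soft map.

    identity_feat, context_feat: tensors of shape (B, C, H, W)
    soft_map: tensor of shape (B, 1, H, W) with values in [0, 1]
    """
    # A value near 1 keeps the swapped-in identity, a value near 0 keeps the
    # target's original context; intermediate values produce a smooth
    # transition instead of a hard mask edge with visible artifacts.
    return soft_map * identity_feat + (1.0 - soft_map) * context_feat


# Toy usage with random tensors; in practice the map would be predicted
# by the network rather than sampled here.
B, C, H, W = 1, 64, 32, 32
identity_feat = torch.randn(B, C, H, W)
context_feat = torch.randn(B, C, H, W)
soft_map = torch.sigmoid(torch.randn(B, 1, H, W))
blended = blend_with_soft_map(identity_feat, context_feat, soft_map)
```

A hard mask would instead force every pixel to come entirely from one face or the other, which is what produces the boundary artifacts described above.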

Unlike methods that hard-overlay masks, the presented model can swap faces between two photographs taken from different angles.

Researchers have not announced plans to release the tool to the public.
Recall that in October, the software developer Adobe introduced tools for animating photos and creating deepfakes.
In September, scientists explained how to distinguish fake photos from real ones.
In April, the CEO of Pinscreen said the number of deepfakes on the Internet was doubling every six months.