- Countries should adopt regulation as soon as possible
- Developers should take responsibility for deepfakes and security issues
Microsoft President Brad Smith urged governments to accelerate the regulation of artificial intelligence.
He considers deepfakes to be the biggest problem posed by neural networks.
“We will have to address the problems related to deepfakes. In particular, we are concerned about the growth of foreign cyber influence operations. These types of activities are increasingly being carried out by the governments of Russia, China, and Iran.”
Deepfakes have indeed become very convincing lately. One such video was shown yesterday by Binance’s director of security: the neural network generated footage of the exchange’s head, Changpeng Zhao, that is almost impossible to distinguish from the real, living person.
But back to Microsoft’s plan. Brad Smith called for regulation not only of AI itself, but also of the companies that produce such software. Developers must commit to protecting physical security, cybersecurity, and national security.
Yesterday Microsoft proposed a five-point plan for AI governance. It outlines rough rules for regulating neural networks in the public and private sectors, as well as possible problems and ways to solve them.
Note that Microsoft is also developing its own AI chip, codenamed “Athena”.