
Microsoft President Brad Smith has expressed concern about the widespread dissemination of realistic-looking but false content, The Guardian reports.
During a speech in Washington on AI regulation, Smith called for measures to identify deepfakes.
“We will have to solve the problems associated with deepfakes. We will have to pay special attention to the fact that most foreign cyber-influence operations are already carried out by the Russian government, the Chinese, the Iranians,” he said.
Smith believes steps need to be taken now to protect legitimate content from being altered by AI to deceive people. He also called for the licensing of the most important forms of artificial intelligence, with “obligations to protect security.”
“We will need a new generation of export controls […] to ensure that our models are not stolen or misused,” Smith said.
In step with OpenAI
Last week, OpenAI CEO Sam Altman appeared before the US Congress, where he called for the regulation of AI. He also raised the possibility of licensing the most important technologies so that developers would have to meet certain criteria.
Altman then traveled to Europe, where he met with leaders of Great Britain, France, Germany and other countries. There, he said OpenAI would have to leave the EU market if the company could not comply with the “hard rules” of the bloc’s AI Act.
Many experts and opinion leaders in the field of artificial intelligence support the push to regulate the technology. In March, a group of them signed an open letter calling for a pause on experiments with large language models.
The initiative has faced criticism, however. Microsoft co-founder Bill Gates believes a pause will not help “pacify” AI.
According to the billionaire, it is becoming increasingly difficult to slow down artificial intelligence on a global scale. Instead, the world should focus on using technology for the good of society.
Altman endorsed only some of the open letter’s points.
Microsoft’s efforts to fight deepfakes
At the Build 2023 conference, Microsoft introduced several services to combat deepfakes. In the coming months, the company will roll out provenance tools based on the C2PA specification to certify the origin of images and videos produced by Bing Image Creator and Microsoft Designer.
Microsoft also introduced Azure AI Content Safety, a content moderation tool. The service is trained to detect toxic images and text generated by both humans and artificial intelligence.
The models work in several languages; they assign a risk level to content and notify a moderator to take action.
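The flow described above — score content by risk category, then route anything above a threshold to a human moderator — can be sketched as a toy filter. This is a minimal illustration only, not the actual Azure AI Content Safety API; the category names, the 0–7 severity scale, and the threshold are assumptions for the example.

```python
# Toy content-moderation flow: assign per-category risk levels,
# flag content for human review when any category is too severe.
# NOT the Azure AI Content Safety API -- categories, the 0-7
# severity scale, and the threshold are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    category: str   # e.g. "hate", "violence" (assumed labels)
    severity: int   # 0 (safe) .. 7 (severe) -- assumed scale

def moderate(findings, threshold=4):
    """Return ("allow" | "review", flagged_findings)."""
    flagged = [f for f in findings if f.severity >= threshold]
    action = "review" if flagged else "allow"
    return action, flagged

# Low-severity content passes; a severe category triggers review.
action, flagged = moderate([Finding("hate", 2), Finding("violence", 6)])
print(action)  # review
```

In a real deployment the severity scores would come from the moderation service itself; only the routing decision (allow vs. escalate to a moderator) is modeled here.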
Recall that in May, former Google CEO Eric Schmidt called AI an “existential threat” that could cause many people to “suffer or die.”
That same month, billionaire investor Warren Buffett likened artificial intelligence to the building of the atomic bomb and expressed dismay at the technology’s rapid advancement.