European Union officials have discussed additional measures that would make artificial intelligence (AI) tools like OpenAI's ChatGPT more transparent to the public.
On June 5, European Commission Vice-President Věra Jourová told reporters that companies deploying generative AI tools with "the potential to create misinformation" should label their content in an effort to combat "fake news."
“Subscribing parties that have services capable of spreading AI-generated misinformation should, in turn, implement technology to recognize such content and clearly label it to users.”
Jourová also pointed to companies integrating generative AI into their services, such as Microsoft's Bing Chat and Google's Bard, saying they need to build in safeguards so that malicious actors cannot exploit them for disinformation purposes.
In 2018, the EU introduced its Code of Practice on Disinformation, which serves as both an agreement and a set of self-regulatory standards for tech industry players to combat disinformation.
Related: OpenAI receives warning from Japanese regulators about data collection
Major tech companies, including Google, Microsoft, and Meta Platforms, have already signed up to the EU Code of Practice. Jourová said these and other companies should report on new AI-related safeguards in July this year.
She also stressed that Twitter, which had withdrawn from the Code a week earlier, should expect greater regulatory scrutiny.
"By exiting the Code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be subject to thorough and urgent scrutiny."
The Vice-President's remarks come as the EU prepares its forthcoming AI Act, a comprehensive set of rules governing the public use of artificial intelligence and the companies deploying it.
With the formal rules not expected to take effect for another two to three years, European officials have called for a voluntary code of conduct for generative AI developers in the meantime.