An adviser to the UK Prime Minister’s AI Task Force said people have roughly two years to control and regulate artificial intelligence (AI) before it becomes too powerful.
In an interview with local British media, Matt Clifford, who is also chairman of the government’s Advanced Research and Invention Agency (ARIA), stressed that modern systems are becoming “more and more capable at an ever-increasing rate”.
He went on to say that if officials don’t start thinking about safety and regulations now, the systems will be “very powerful” in two years.
“We have two years to put in place a framework that will make the control and regulation of these very large models much more possible than it is today.”
Clifford warned that when it comes to AI, there are “many different types of risks,” both short-term and long-term, which he called “pretty scary.”
The interview follows a letter released by the Center for AI Safety (CAIS) last week, signed by 350 AI experts, including the CEO of OpenAI, arguing that AI should be treated as an existential threat on par with nuclear weapons and pandemics.
“They talk about what happens when we effectively create a new species, a kind of intelligence that is superior to the human.”
Clifford said these AI threats could be “very dangerous” and could “kill a lot of people, not all people, just from where we expect the models to be in two years”.
According to Clifford, the main focus of regulators and developers should be on understanding how to manage the models and then implementing the rules on a global scale.
At the moment, he says, his biggest fear is that no one understands why AI models behave the way they do.
“People who create the most advanced systems openly admit that they do not understand exactly how [AI systems] demonstrate the behavior they do.”
Clifford emphasized that many AI leaders also agree that powerful AI models should go through some sort of audit and evaluation process before being deployed.
Currently, regulators around the world are struggling to understand the technology and its ramifications as they try to craft rules that protect users without stifling innovation.
On June 5, European Union officials went so far as to propose requiring all AI-generated content to be labeled as such to prevent misinformation.
In the UK, an opposition minister echoed the sentiment mentioned in the CAIS letter, saying that technology should be regulated in the same way as medicine and nuclear power.