Demis Hassabis, CEO of Google DeepMind, recently predicted that artificial intelligence systems will reach human levels of cognition sometime between “the next few years” and “maybe within a decade.”
Hassabis, who began his career in the video game industry, co-founded DeepMind Technologies, now Google DeepMind, the company known for developing AlphaGo, the artificial intelligence system that beat the world’s best Go players.
In a recent interview conducted during The Wall Street Journal’s Future of Everything Festival, Hassabis told interviewer Chris Mims that he believes machines with human-level intelligence are inevitable:
“The progress over the past few years has been incredible. I see no reason why this progress will slow down. I think it might even speed up. So I think we could be just a few years, maybe within a decade, away.”
These comments come just two weeks after Google announced an internal restructuring that merged “Google Brain” and “DeepMind” into the aptly named “Google DeepMind.”
When asked to define “AGI” – artificial general intelligence – Hassabis answered: “cognition at the human level.”
There is currently no standardized definition, test, or benchmark for AGI that is widely accepted by the STEM community, nor is there scientific consensus on whether AGI is possible at all.
Some well-known figures, such as Roger Penrose (a longtime research partner of Stephen Hawking), believe AGI is impossible to create, while others think it will take scientists and engineers decades or even centuries to achieve it.
Among those optimistic about AGI in the near term, or some similar form of human-level artificial intelligence, are Elon Musk and OpenAI CEO Sam Altman.
Don’t Look Up … but AGI instead of comet
— Elon Musk (@elonmusk) April 1, 2023
AGI has become a hot topic since the launch of ChatGPT and a host of similar AI products and services over the past few months. Experts predict that human-level artificial intelligence, often referred to as the “holy grail” of technology, will disrupt every aspect of life on Earth.
If human-level artificial intelligence is ever created, it could disrupt various aspects of the cryptocurrency industry. In the world of cryptocurrencies, for example, we could see fully autonomous machines capable of acting as entrepreneurs, CEOs, advisors, and traders, with human-level intelligence and the ability to retain information and execute code.
Whether AGI agents will serve us as AI-powered tools or compete with us for resources remains to be seen.
For his part, Hassabis made no such predictions, but told The Wall Street Journal that he “would advocate developing these types of AGI technologies with care, using the scientific method, where you’re trying to do very rigorous controlled experiments to understand what the underlying system is doing.”
This may conflict with the current state of affairs, in which products such as his own employer’s Google Bard and OpenAI’s ChatGPT have recently been made available for general use.
Related: ‘Godfather of AI’ Quits Google, Warning of Dangers of AI
Industry insiders such as OpenAI CEO Sam Altman and DeepMind’s Nando de Freitas have said they believe AGI could emerge on its own if developers continue to scale current models. And one Google engineer recently parted ways with the company after claiming that a model called LaMDA had already become sentient.
Solving these scaling challenges is what will deliver AGI. Research focused on these problems, eg S4 for greater memory, is needed. Philosophy about symbols isn’t. Symbols are tools in the world and big nets have no issue creating them and manipulating them 2/n
— Nando de Freitas (@NandoDF) May 14, 2022
Due to the uncertainty surrounding the development of these technologies and their potential impact on humanity, thousands of people, including Elon Musk and Apple co-founder Steve Wozniak, recently signed an open letter asking the companies and individuals building such systems to pause development for six months so that scientists can assess the potential harms.