What is Artificial Intelligence?
Artificial intelligence (AI) is a broad branch of computer science focused on building smart machines capable of performing tasks that typically require human intelligence.
There are many approaches to creating such algorithms, and advances in machine learning and deep learning over the past few years have significantly changed the technology industry.
What are the definitions of artificial intelligence?
The fundamental purpose and vision of artificial intelligence were laid out by the English mathematician Alan Turing in the article “Computing Machinery and Intelligence”, published in the journal Mind in 1950. He asked a simple question: “Can machines think?” In the same paper, the scientist proposed the famous test that now bears his name.
At its core, AI is a branch of computer science that seeks to answer Turing’s question in the affirmative. It is an attempt to reproduce or simulate human intelligence in machines.
The global goal of artificial intelligence still raises many questions and disputes. The main limitation of defining AI as simply “intelligent machines” is that neither scientists nor philosophers can agree on what intelligence is or what exactly makes a machine smart.
Stuart Russell and Peter Norvig, the authors of the textbook “Artificial Intelligence: A Modern Approach”, organized their work around the topic of intelligent agents in machines and defined AI as “the study of agents that receive percepts from the environment and perform actions.”
Speaking at the Japan AI Experience in 2017, DataRobot CEO Jeremy Achin opened with the following definition of how artificial intelligence is used today:
“AI is a computer system capable of performing tasks that require human intelligence … Many of these systems are machine learning-based, others are deep learning-based, and some of them are based on very boring things like rules.”
While these definitions may seem abstract, they help define the main strands of theoretical research in computer science and provide concrete ways to implement AI programs to solve applied problems.
What was Turing’s contribution to the development of AI?
In the middle of the last century, Alan Turing laid a theoretical foundation that was ahead of its time and formed the basis of modern computer science, for which he was nicknamed the “father of computer science”.
In 1936, the scientist created an abstract computer – the so-called Turing machine – an important component of the theory of algorithms, which formed the basis of modern computers. In theory, such a machine can solve any algorithmic problem.
In turn, a programming language is said to have “Turing completeness” if any algorithm that can run on a Turing machine can also be written in it. For example, C# has this property, but HTML does not.
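To make the idea concrete, here is a minimal sketch of a Turing machine simulator in Python (the state table and the bit-flipping example are invented for illustration):

```python
# A minimal Turing machine simulator: a tape, a head position, a state,
# and a rule table mapping (state, symbol) -> (new symbol, move, new state).
def run_turing_machine(tape, rules, state="start", pos=0, max_steps=1000):
    tape = dict(enumerate(tape))               # sparse tape; "_" marks a blank cell
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: flip every bit, then halt at the first blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("1011", flip_rules))  # -> 0100
```

Any language that can express this loop — a state table, a tape, and a read-write-move cycle — is Turing complete.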
The mathematician’s name is also attached to a thought experiment that has nothing to do with the machine itself but is directly related to artificial intelligence: the Turing test. In the scientific community, it is widely believed that once a machine passes this test, it will be possible to speak in earnest about the emergence of intelligent machines.
The essence of the game is that a person, through text correspondence, interacts simultaneously with a machine and with another person. The computer’s task is to mislead the test participant and convincingly pass itself off as a human.
What kind of AI is there?
Artificial intelligence is usually divided into two broad categories:
- Weak AI: This kind of artificial intelligence, sometimes referred to as “Narrow AI,” operates in a limited context and imitates human intelligence. Weak AI is usually focused on performing a single task very well. And while these machines may seem smart, they operate under significant limitations.
- General Artificial Intelligence (AGI): AGI, sometimes referred to as “Strong AI,” is the kind of artificial intelligence we see in movies, like the robots from Westworld or Joi’s hologram from Blade Runner 2049. AGI is a machine with general intelligence that, like a human, can apply it to solve any problem.
What is weak artificial intelligence?
Weak AI is all around us, and to date it is the most successful implementation of artificial intelligence.
Task-oriented as it is, weak AI has produced many breakthroughs over the past decade that have brought “significant societal benefits and contributed to the economic vitality of the nation,” according to the report “Preparing for the Future of Artificial Intelligence”, published by the Obama administration in 2016.
Here are a few examples of weak AI:
- Google Search;
- image recognition software;
- Siri, Alexa, and other voice assistants;
- self-driving cars;
- Netflix and Spotify recommendation systems;
- IBM Watson.
How does weak AI work?
Much of today’s weak AI is based on advances in machine learning and deep learning. The similarity of these concepts can be confusing, but they should be distinguished. Venture capitalist Frank Chen proposed the following definition:
“Artificial intelligence is a set of algorithms that try to mimic human intelligence. Machine learning is one of them, and deep learning is one of the methods of machine learning.”
In other words, machine learning feeds a computer data and uses statistical techniques to help it learn how to perform tasks without being explicitly programmed for them, eliminating the need for millions of lines of hand-written code. Popular types of machine learning are supervised learning (using labeled datasets), unsupervised learning (using unlabeled datasets), and reinforcement learning.
Deep learning is a type of machine learning in which input data is processed through a neural network architecture based on biological principles.
Neural networks contain a series of hidden layers through which data is processed, allowing the machine to “delve deeper” into its training, make connections, and weight its inputs for the best results.
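As a sketch of data flowing through hidden layers (not tied to any particular library; the weights are hand-picked for illustration), each neuron computes a weighted sum of the previous layer’s outputs and applies an activation:

```python
# A minimal forward pass: each layer is a list of per-neuron weight vectors.
def relu(x):
    return max(0.0, x)

def layer(inputs, weights):
    # one output per neuron: weighted sum of all inputs, then an activation
    return [relu(sum(w * x for w, x in zip(neuron_weights, inputs)))
            for neuron_weights in weights]

def forward(inputs, layers):
    for weights in layers:
        inputs = layer(inputs, weights)
    return inputs

# Two hand-picked "hidden" layers, just to show the mechanics.
hidden = [[[0.5, -0.2], [0.3, 0.8]],   # layer 1: 2 inputs -> 2 neurons
          [[1.0, 1.0]]]                # layer 2: 2 inputs -> 1 neuron
print(forward([1.0, 2.0], hidden))
```

Real frameworks express the same idea as matrix multiplications, but the data flow is identical: each layer’s outputs become the next layer’s inputs.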
What is machine learning?
Artificial intelligence and machine learning are not the same thing. Machine learning is just one subsection of AI.
The most common types of machine learning are supervised, unsupervised, and reinforcement learning.
Supervised learning is used when developers have a labeled dataset and know exactly which features the algorithm should look for.
As a rule, it is divided into two categories: classification and regression.
Classification is used when objects must be assigned to classes that are known in advance. This type of training is used in spam filters, language detection, or the detection of suspicious transactions.
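A toy illustration of classification (a hypothetical nearest-centroid classifier; the two-feature "messages" and labels are made up):

```python
# Assign a point to whichever class's centroid (mean of its training
# points) is closest -- the simplest form of assigning objects to known classes.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, classes):
    # classes: {"label": [training points]}; pick the label with the nearest centroid
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(classes, key=lambda label: dist2(x, centroid(classes[label])))

training = {"spam": [(9, 8), (8, 9)], "ham": [(1, 2), (2, 1)]}
print(classify((8, 8), training))  # -> spam
```

A real spam filter would use many more features and a stronger model, but the principle — known classes, labeled examples, assign the new object — is the same.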
Regression is used when an object must be mapped to a continuous numerical value, for example, to predict the price of securities, the demand for goods, or the course of a medical condition.
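A minimal regression example (a least-squares line fit in pure Python; the data points are invented):

```python
# Fit y = slope * x + intercept by ordinary least squares.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical demand data (lies exactly on y = 2x + 1), then predict a new point.
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]
slope, intercept = fit_line(xs, ys)
print(slope * 5 + intercept)  # -> 11.0
```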
Unsupervised learning is a less popular type of ML because of its unpredictability. Its algorithms are trained on unlabeled data and must find features and patterns on their own. It is often used for clustering, dimensionality reduction, and association rule mining.
Clustering is like classification, but without known classes. The algorithm itself must find similarities between objects and group them into clusters. It is used to analyze and label new data, compress images, or merge markers on a map.
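A minimal k-means sketch in one dimension (the points and initial centers are made up) shows how an algorithm can group data without any labels:

```python
# Alternate two steps: assign each point to its nearest center,
# then move each center to the mean of the points assigned to it.
def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        centers = [sum(pts) / len(pts) if pts else c
                   for c, pts in clusters.items()]
    return sorted(centers)

# Two obvious groups around 1 and 10; the algorithm finds them on its own.
print(kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.4, 9.6], centers=[0.0, 5.0]))
```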
Dimensionality reduction combines specific features into higher-level abstractions. It is often used to determine the topic of texts or to build recommendation systems.
Association rule mining has found its application in marketing, for example, in planning promotions and sales or in analyzing user behavior on a website. It can also be used to build a recommendation system.
Reinforcement learning is the training of an agent to survive in the environment in which it exists. That environment can be anything from a video game to the real world.
For example, there are algorithms that play Super Mario no worse than people, and in the real world, the Tesla autopilot and robot vacuum cleaners do everything they can to maneuver around the obstacles in their path.
Reinforcement learning rewards the agent for correct actions and punishes it for mistakes. The algorithm does not need to memorize all of its previous experience or calculate every possible scenario; it must learn to act according to the situation.
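A toy Q-learning sketch (the corridor environment, reward, and hyperparameters are all invented for illustration): the agent is rewarded only for reaching the goal cell, and from that single signal it gradually learns which action looks better in every state.

```python
import random

# A 5-cell corridor; states 0..4, actions -1 (left) and +1 (right).
# Reaching cell 4 gives reward 1.0; everything else gives 0.
def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (-1, +1)}
    for _ in range(episodes):
        state = 0
        while state != 4:
            # explore sometimes, otherwise take the currently best-valued action
            action = (random.choice((-1, +1)) if random.random() < epsilon
                      else max((-1, +1), key=lambda a: q[(state, a)]))
            nxt = min(max(state + action, 0), 4)
            reward = 1.0 if nxt == 4 else 0.0
            best_next = max(q[(nxt, a)] for a in (-1, +1)) if nxt != 4 else 0.0
            # the standard Q-learning update rule
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# After training, moving right should look better than moving left everywhere.
print(all(q[(s, +1)] > q[(s, -1)] for s in range(4)))
```

No state of the corridor is ever labeled "good" in advance; the values propagate backwards from the single reward at the goal.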
Remember when a machine beat a human at Go? Long before that, scientists had established that there are more possible positions in this game than atoms in the universe. No existing computer program could calculate every way the game might unfold. Yet AlphaGo, Google’s algorithm, coped with the task: instead of calculating all the moves in advance, it acted according to the circumstances, and did so with incredibly high accuracy.
What are Neural Networks and Deep Learning?
The concept of artificial neural networks is not new. It was first formulated by the American scientists Warren McCulloch and Walter Pitts in 1943.
Any neural network consists of neurons and the connections between them. A neuron is a function that has many inputs and one output. They exchange information among themselves through communication channels, each of which has a certain weight.
Weight is a parameter that determines the strength of the connection between neurons. A neuron itself does not understand what it transmits, so weights are needed to regulate which inputs it should respond to and which it should ignore.
For example, if a neuron sends the number 50 and the weight of the connection is 0.1, the receiving neuron gets 5.
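In code, this weighted-connection arithmetic is just a multiply-and-sum (a sketch; the numbers come from the example above):

```python
# One neuron: each incoming signal is scaled by its connection's weight, then summed.
def neuron(inputs, weights):
    return sum(signal * weight for signal, weight in zip(inputs, weights))

print(neuron([50], [0.1]))           # the example from the text: 50 * 0.1 -> 5.0
print(neuron([50, 50], [0.1, 0.0]))  # a zero weight silences the second input -> 5.0
```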
As neural network architectures became more complex, neurons were organized into layers rather than connected arbitrarily. Within a layer, neurons do not interact with one another; they receive information from the previous layer and pass it on to the next.
As a rule, the more layers a neural network has, the more complex and accurate the model. But back then, 50 years ago, researchers ran into the limits of computing power. As a result, the technology proved a disappointment and was forgotten for many years.
It was remembered in 2012, when University of Toronto students Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the ImageNet computer vision competition. The convolutional neural network they used to classify images had an error rate of 15.3%, more than 10 percentage points lower than that of the second-place team. The deep learning revolution has been driven largely by the development of graphics cards.
Deep learning differs from plain neural networks only in the methods used to train networks of large sizes. In practice, developers rarely worry about which networks count as deep and which do not: today, even networks of just five layers are built with “deep” libraries such as Keras, TensorFlow, or PyTorch.
The most popular networks today are convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
CNNs are often used for facial recognition, finding objects in photos and videos, improving image quality, and other tasks. Recurrent networks have found application in machine translation and speech synthesis. For example, since 2016 Google Translate has been running on an RNN-based architecture.
Generative adversarial networks (GANs) have also gained popularity. A GAN is based on two neural networks: one generates data, such as images, and the other tries to distinguish genuine samples from generated ones. Because the two networks compete with each other, an adversarial game arises between them.
GANs are often used to create photorealistic images. For example, the This Person Does Not Exist image repository consists of portrait photos of “people” created by a generative neural network.
What is General Artificial Intelligence?
Creating a machine with human-level intelligence that can be applied to any task is the Holy Grail for many AI researchers, but the quest for AGI comes with its own difficulties.
General AI has long been the muse of dystopian science fiction, in which superintelligent robots overrun humanity, but experts agree that this is not something we need to worry about anytime soon.
The American inventor and futurist Ray Kurzweil has predicted that AGI will appear by 2029. His colleague Rodney Brooks is not so optimistic and believes that the turning point in the development of machine intelligence will not come until 2300.
Stuart Russell, one of the authors of the textbook “Artificial Intelligence: A Modern Approach”, suggests that the invention of AGI will be accidental, much like the discovery of nuclear energy in 1933. The scientist believes this is a vivid example of how pointless it is to make forecasts about such an unpredictable technology, one that has yet to be fully understood.