What is artificial intelligence made of? What should every engineer know about artificial intelligence? Why is artificial intelligence useful?
Since the invention of computers, their ability to perform various tasks has grown exponentially: systems keep getting faster while the machines themselves keep getting smaller. The main goal of researchers in the field of artificial intelligence is to create computers or machines as intelligent as a person.
The term "artificial intelligence" was coined by John McCarthy, the inventor of the Lisp language, a founder of functional programming, and a winner of the Turing Award for his great contribution to the field of artificial intelligence research.
Artificial intelligence is a way to make a computer, a computer-controlled robot, or a program capable of thinking intelligently, much as a human does.
Research in the field of AI is carried out by studying the mental abilities of a person, and then the results of this research are used as the basis for the development of intelligent programs and systems.
Philosophy of AI
As powerful computer systems came into use, people began asking: "Can a machine think and behave the way a person does?"
Thus, the development of AI began with the intention of creating in machines an intelligence similar to the human one.
Main goals of AI
- Creation of expert systems - systems that demonstrate intelligent behavior: they learn, demonstrate, explain, and give advice;
- Realization of human intelligence in machines - the creation of a machine capable of understanding, thinking, learning, and behaving like a human.
What contributes to the development of AI?
Artificial intelligence is a science and technology based on such disciplines as computer science, biology, psychology, linguistics, mathematics, mechanical engineering. One of the main areas of artificial intelligence is the development of computer functions related to human intelligence, such as: reasoning, learning and problem solving.
Programs with and without AI
Programs with and without AI differ in a number of key properties.
Applications with AI
AI has become dominant in various fields such as:
Games - AI plays a crucial role in strategy games such as chess, poker, and tic-tac-toe, where the computer can evaluate a large number of possible moves using heuristic knowledge.
Natural language processing is the ability to communicate with a computer that understands the natural language spoken by humans.
Speech recognition - some intelligent systems are able to hear and understand the language in which a person communicates with them. They can handle different accents, slang, and so on.
Handwriting recognition - the software reads text written on paper with a pen or on a screen with a stylus. It can recognize letter shapes and convert them into editable text.
Smart robots are robots capable of performing tasks assigned by humans. They have sensors to detect physical data from the real world, such as light, heat, motion, sound, shock, and pressure. They have high performance processors, multiple sensors and huge memory. In addition, they are able to learn from their own mistakes and adapt to the new environment.
History of AI development
Here is the history of AI development during the 20th century:
- Karel Capek's play "R.U.R." ("Rossum's Universal Robots") is staged in London - the first use of the word "robot" in English.
- Isaac Asimov, a graduate of Columbia University, coins the term "robotics".
- Alan Turing proposes the Turing test as a way to measure machine intelligence. Claude Shannon publishes a detailed analysis of chess as an intellectual game.
- John McCarthy coins the term "artificial intelligence". The first running AI program is demonstrated at Carnegie Mellon University.
- John McCarthy invents the Lisp programming language for AI.
- Danny Bobrow's dissertation at MIT shows that computers can understand natural language reasonably well.
- Joseph Weizenbaum at MIT develops ELIZA, an interactive program that converses in English.
- Scientists at the Stanford Research Institute develop Shakey, a motorized robot able to perceive its surroundings and solve some problems.
- A team of researchers at the University of Edinburgh builds Freddy, the famous Scottish robot that can use vision to locate and assemble models.
- The first computer-controlled autonomous vehicle, the Stanford Cart, is built.
- Harold Cohen develops and demonstrates AARON, a drawing program.
- IBM's Deep Blue chess program beats world chess champion Garry Kasparov.
- Interactive robotic pets become commercially available. MIT displays Kismet, a robot with a face that expresses emotions. The robot Nomad explores remote areas of Antarctica and finds meteorites.
Work on artificial intelligence began in the middle of the twentieth century, yet many people still picture the conquest of galaxies, the rise of the machines, and other science-fiction scenarios when they hear the term. Meanwhile, artificial intelligence technologies are already used in everyday life. Thanks to them, machines can solve more and more tasks, faster and better - especially when large amounts of data have to be processed, a kind of problem artificial intelligence handles far more efficiently than a person. Some believe this trend threatens many jobs: according to research by the Oxford Martin School, by 2033 technology will fully automate 47% of jobs. This article explains what artificial intelligence is, how it works, and what the prospects for its application are.
What is artificial intelligence
Artificial intelligence (AI) is the science and technology of creating computer algorithms and programs that function like intelligent systems: learn and retain information based on experience, evaluate and apply abstract concepts, and use acquired knowledge to influence the environment.
Artificial intelligence is divided into two types: weak and strong. Weak AI is also called narrow AI because it can perform tasks only within certain limits; all developments based on AI technology that exist today belong to this type. Strong artificial intelligence would be able to solve any problem across an unlimited range of areas. To picture strong AI, think of Jarvis, Tony Stark's sidekick in Iron Man. Today such AI is impossible to implement, and the very idea of creating it is widely regarded as pure utopia.
Dina Lee, special for the site
Artificial Intelligence Today: Neural Networks and Machine Learning
AI technology can be implemented in different ways. One way is neural networks. A neural network is built on the same principle as the nerve networks of a living organism - hence the name. In the body, nerve cells (neurons) are connected into a network and form the nervous system. An artificial neural network instead uses simple processors - computing elements that are connected and interact in the same way.
Unlike conventional algorithms, neural networks are able to learn from experience. Neural networks analyze and identify connections between input and output data, generalize data and form solutions to problems. In order for neural networks to function in this way, machine learning methods are used. Moreover, in the case of neural networks, such training requires a lot of computing resources.
What a neural network can be taught depends on the input data: the more data, the better the training. You can teach a neural network to distinguish one object from another, to compare, and to predict. Training a neural network resembles teaching a child who is shown a picture and told, "This is a cat." A neural network receives many such pictures with explanatory labels and learns to recognize individual elements, which it can then combine. The input image passes through a system of filters that differ in the size and complexity of the elements they can recognize - each filter has its own set of features. The image is filtered repeatedly in this system, and when enough elements have been recognized, the network makes a prediction: with such-and-such probability, this object is a person.
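The learn-from-labeled-examples loop described above can be sketched in a few lines of code. This is a deliberately tiny, hypothetical example: the "network" is a single artificial neuron, and the feature values and labels are invented for illustration. The network sees labeled examples and gradually adjusts its weights to reduce its prediction error.

```python
import numpy as np

# Toy data: each row is an "image" reduced to two features;
# label 1 = cat, 0 = not a cat (values are invented for illustration).
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1, 1, 0, 0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights, randomly initialized
b = 0.0                  # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient-descent training: repeatedly adjust the weights so that
# the predictions move toward the correct labels.
for _ in range(1000):
    p = sigmoid(X @ w + b)            # current predictions
    grad = p - y                      # error signal
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

# After training, a cat-like input is classified as "cat".
print(sigmoid(np.array([0.85, 0.85]) @ w + b) > 0.5)  # True
```

Real networks stack thousands of such units into layers (the "filters" described above), but the weight-adjustment idea is the same.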
This is how neural networks appeared that predict stock prices for tomorrow, recognize handwritten index numbers on a mail envelope, and identify a diseased organ in a picture. For their training, numerical data on exchange rates and images of written numbers, diseased and healthy organs were used.
The problem was that neural networks were often wrong, because it was difficult to collect really large data samples for training. In 2010, the ImageNet image database appeared: 15 million images in 22,000 categories. Access was open: any researcher could use the data. As a result, it became possible to train AI with high quality. Neural networks have become more developed, accessible and firmly integrated into everyday life.
Artificial intelligence that we encounter in everyday life
Voice assistants such as Siri, Google Assistant, and Alice; recommendation algorithms, such as the one YouTube uses to suggest videos (developed by Google Brain) or Amazon's recommended-products block; chatbots - all of these are powered by AI.
The PayPal payment system uses machine learning to help neural networks find suspicious transactions. This allows the company to reduce the incidence of fraud. The Russian app Prisma uses neural networks to process photos.
NVIDIA engineer Robert Bond developed an algorithm that turned on his garden sprinklers when the neighbor's cats wandered over and ruined his garden. To determine that a cat was present, he used a system based on the Caffe framework: a neural network identified cats in video footage from cameras. When a camera recorded a change in the scene, it took 7 photos, which the neural network analyzed; if there was a cat in the pictures, the system turned on the sprinklers.
In addition, neural networks have written two music albums, which you can listen to on Yandex.Music. One was trained on songs by the band Civil Defense (released under the name "Neural Defense"), and the other on Nirvana (released under the name "Neurona").
In what areas can you still use neural networks
Neural networks are used in medicine, finance and commerce, industry, and law enforcement and security - wherever large amounts of data need to be processed, systematized, and used for prediction.
In medicine, neural networks are trained to recognize tumors and damage to tissues and organs after injuries, and to predict possible complications and the course of a disease. This is not easy: no sufficiently large medical database exists, yet high accuracy is required. After all, if a neural network confuses a cat with a dog, that is not so bad; but if it confuses a healthy organ with a diseased one, the consequences are serious.
At HighLoad++, a professional conference for developers of high-load systems, Natalia Efremova described a non-standard use of neural networks: predicting poverty levels. The poverty rate in Africa is so high that it is simply not feasible to collect and analyze the data directly; the latest data was collected in 2005. Scientists from Stanford University first trained a neural network on the ImageNet image database so that it could recognize settlements. They then collected many daytime and nighttime satellite images of Africa and fed them to the network. The network assessed whether the population had enough money to light their homes at night and predicted the poverty level. The forecast was then compared with the real 2005 data, and it proved fairly accurate.
Why neural networks are waiting for a new round of development
There is more computing power, and more image collections and other databases for training neural networks. In addition, neural networks have turned out to be capable of more than expected. When the Stanford scientists were training a neural network to predict poverty in Africa, they loaded data on the rooftops of settlements - but the network independently learned to recognize water, forests, roads, and other objects, without preloaded databases or a teacher's intervention.
In May 2017, developers from Google Brain presented the AutoML project, which designs machine learning models on its own. Simply put, this is an AI that analyzed existing neural networks, identified their effective features, and created another neural network without human intervention - NASNet. On a validation set of images, NASNet showed a prediction accuracy of 82.7%, higher than that of all earlier image-recognition neural networks.
Will AI take jobs away from people?
The development of AI will inevitably affect the labor market. But this should not be surprising, because in fact it is the same as modernization and automation. Some professions will disappear and new ones will appear, because the development of AI will affect the development of other areas.
There is already a list of professions that artificial intelligence, neural networks, and chatbots may presumably take over from humans. For example, Google is investing in robots that write news without human intervention. Some types of programmers may also be left without work in the future - primarily "coders" who assemble ready-made blocks, i.e. whose work can be reduced to an algorithm. The same goes for HR specialists: neural networks can cover far more sources of information when searching for candidates, sort them according to given criteria, and send out notifications. Call-center operators are also under threat: much of their workload consists of standard tasks that can be automated.
At the same time, the development of AI raises concerns. Elon Musk, one of the leading inventors of our time and the founder of SpaceX and Tesla, has called artificial intelligence "the biggest risk that humanity faces as a civilization." As companies race for ever more advanced technology, he says, they may forget the dangers that artificial intelligence brings. Stephen Hawking also assesses artificial intelligence ambivalently: the scientist fears it could lead to human degradation, leaving people helpless in the face of nature.
At the moment, it is difficult to predict the exact horizons that AI will be able to reach. But today we know two important things: some work cannot be done without human intervention, and a perfect AI that controls everything is still a fantasy.
Artificial intelligence
Artificial intelligence is a branch of computer science that studies the possibility of providing reasonable reasoning and actions with the help of computer systems and other artificial devices. In most cases, the algorithm for solving the problem is not known in advance.
The exact definition of this science does not exist, since the question of the nature and status of the human intellect has not been resolved in philosophy. There is no exact criterion for achieving “intelligence” by computers, although at the dawn of artificial intelligence a number of hypotheses were proposed, for example, the Turing test or the Newell-Simon hypothesis. At the moment, there are many approaches to both understanding the task of AI and creating intelligent systems.
So, one of the classifications distinguishes two approaches to the development of AI:
top-down, semiotic - the creation of symbolic systems that model high-level mental processes: thinking, reasoning, speech, emotions, creativity, etc.;
bottom-up, biological - the study of neural networks and evolutionary calculations that model intelligent behavior based on smaller "non-intelligent" elements.
This science is connected with psychology, neurophysiology, transhumanism, and other fields. Like all computer sciences, it uses a mathematical apparatus. Philosophy and robotics are of particular importance to it.
Artificial intelligence is a very young field of research that was launched in 1956. Its historical path resembles a sinusoid, each "rise" of which was initiated by some new idea. At the moment, its development is on the decline, giving way to the application of already achieved results in other areas of science, industry, business, and even everyday life.
Study Approaches
There are various approaches to building AI systems. At the moment there are four quite different ones:
1. Logical approach. The basis of the logical approach is Boolean algebra, familiar to every programmer from the moment they first mastered the IF statement. Boolean algebra was further developed into the predicate calculus, which extends it with subject symbols, relations between them, and the existential and universal quantifiers. Virtually every AI system built on the logical principle is a theorem-proving machine: the initial data is stored in a database in the form of axioms, and inference rules express the relationships between them. Each such machine also has a goal-generation block, and the inference system tries to prove the given goal as a theorem. If the goal is proved, tracing the applied rules yields the chain of actions needed to achieve it (such systems are known as expert systems). The power of such a system is determined by the capabilities of its goal generator and its theorem-proving machine. Greater expressiveness within the logical approach is offered by the relatively new field of fuzzy logic. Its main difference is that the truth value of a statement can take, besides yes/no (1/0), intermediate values: "I don't know" (0.5), "the patient is more likely alive than dead" (0.75), "the patient is more likely dead than alive" (0.25). This approach is closer to human thinking, which rarely answers questions with a simple yes or no.
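The intermediate truth values just described are easy to illustrate in code. This is a minimal sketch using Zadeh's classic min/max operators for fuzzy AND/OR; the example degrees of truth are invented.

```python
def fuzzy_and(a, b):
    # Zadeh's t-norm: a conjunction is only as true as its weakest part.
    return min(a, b)

def fuzzy_or(a, b):
    # Zadeh's s-norm: a disjunction is as true as its strongest part.
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

# "The patient is more likely alive than dead" = 0.75
alive = 0.75
feverish = 0.4   # "the patient is somewhat feverish"

print(fuzzy_and(alive, feverish))  # 0.4
print(fuzzy_or(alive, feverish))   # 0.75
print(fuzzy_not(alive))            # 0.25
```

Note that with crisp values 0 and 1 these operators reduce exactly to ordinary Boolean AND, OR, and NOT, which is why fuzzy logic is a strict generalization of Boolean algebra.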
2. Structural approach. By this we mean attempts to build AI by modeling the structure of the human brain. One of the first such attempts was Frank Rosenblatt's perceptron. The main modeled structural unit in perceptrons (as in most other brain-modeling schemes) is the neuron. Later models became known collectively as neural networks (NNs); they differ in the structure of individual neurons, in the topology of the connections between them, and in their learning algorithms. Among the best-known NN variants today are backpropagation networks, Hopfield networks, and stochastic neural networks. In a broader sense, this approach is known as connectionism.
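Rosenblatt's original learning rule is simple enough to show directly. The sketch below trains a single perceptron to compute logical AND; real neural networks combine many such units, but the weight-update idea is the same. The learning rate and epoch count are arbitrary choices for this example.

```python
# A single Rosenblatt perceptron learning the logical AND function.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Rosenblatt's rule: nudge weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table of AND as training data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    assert (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
print("perceptron learned AND")
```

A single perceptron can only learn linearly separable functions (famously, it cannot learn XOR), which is precisely why later models stacked neurons into multi-layer networks.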
3. Evolutionary approach. When building AI systems under this approach, attention focuses on constructing an initial model and the rules by which it can change (evolve). The model can be built by a variety of methods - a neural network, a set of logical rules, or any other model. We then switch on the computer: it evaluates the candidate models, selects the best of them, and generates new models from these according to various rules. Among evolutionary algorithms, the genetic algorithm is considered the classic.
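As a minimal illustration of this evaluate-select-regenerate cycle, here is a genetic algorithm for the classic toy "OneMax" problem: the model is a bit string, and the evolution rules are selection, crossover, and mutation. All parameters here are arbitrary choices for the example.

```python
import random

# Minimal genetic algorithm: evolve a bit string toward all ones.
random.seed(42)
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    # Fitness = number of ones; the maximum possible is LENGTH.
    return sum(bits)

def crossover(a, b):
    # Child inherits a prefix from one parent and a suffix from the other.
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    # Occasionally flip a gene.
    return [1 - g if random.random() < rate else g for g in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    # Selection: keep the better half of the population as parents.
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]
    # Reproduction: children combine genes of two random parents.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))  # typically reaches, or comes very close to, 20
```

The same loop works unchanged if the "genome" encodes neural-network weights or rule sets instead of raw bits; only the fitness function changes.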
4. Simulation approach. This approach is classical for cybernetics, one of whose basic concepts is the black box. The object whose behavior is being simulated is treated precisely as a "black box": it does not matter what the object and its model contain or how they function internally - what matters is that the model behaves the same way in similar situations. Thus another human faculty is modeled here: the ability to copy what others do without asking why, which often saves a person a great deal of time, especially early in life.
Within the framework of hybrid intelligent systems, they are trying to combine these areas. Expert inference rules can be generated by neural networks, and generative rules are obtained using statistical learning.
A promising new approach, called intelligence amplification, sees the achievement of AI through evolutionary development as a side effect of technology amplifying human intelligence.
Research directions
Analyzing the history of AI, one can single out such an extensive area as reasoning modeling. For many years, the development of this science has moved along this path, and now it is one of the most developed areas in modern AI. Reasoning modeling involves the creation of symbolic systems, at the input of which a certain task is set, and at the output it is required to solve it. As a rule, the proposed problem has already been formalized, i.e. translated into a mathematical form, but either does not have a solution algorithm, or it is too complicated, time-consuming, etc. This area includes: theorem proving, decision making and game theory, planning and dispatching, forecasting.
An important area is natural language processing, which analyzes the possibilities of understanding, processing and generating texts in a "human" language. In particular, the problem of machine translation of texts from one language to another has not been solved yet. In the modern world, the development of information retrieval methods plays an important role. By its nature, the original Turing test is related to this direction.
According to many scientists, an important property of intelligence is the ability to learn. Knowledge engineering thus comes to the fore, combining the tasks of extracting knowledge from simple information, systematizing it, and using it. Advances in this area affect almost every other area of AI research. Two important subdomains should be noted here. The first, machine learning, concerns the process by which an intelligent system acquires knowledge on its own in the course of its operation. The second is connected with the creation of expert systems - programs that use specialized knowledge bases to draw reliable conclusions about some problem.
There are great and interesting achievements in the field of modeling biological systems. Strictly speaking, several independent directions can be included here. Neural networks are used to solve fuzzy and complex problems such as geometric shape recognition or object clustering. The genetic approach is based on the idea that an algorithm can become more efficient if it borrows better characteristics from other algorithms (“parents”). A relatively new approach, where the task is to create an autonomous program - an agent that interacts with the external environment, is called the agent approach. And if you properly force a lot of “not very intelligent” agents to interact together, then you can get “ant-like” intelligence.
The tasks of pattern recognition are already partially solved within the framework of other areas. This includes character recognition, handwriting, speech, text analysis. Special mention should be made of computer vision, which is related to machine learning and robotics.
In general, robotics and artificial intelligence are often associated with each other. The integration of these two sciences, the creation of intelligent robots, can be considered another direction of AI.
Machine creativity stands somewhat apart, because the nature of human creativity is even less studied than the nature of intelligence. Nevertheless, this area exists, and it poses the problems of writing music, literary works (often poems or fairy tales), and visual art.
Finally, there are many applications of artificial intelligence, each of which forms an almost independent direction. Examples include programming intelligence in computer games, non-linear control, intelligent security systems.
It can be seen that many areas of research overlap. This is true for any science. But in artificial intelligence, the relationship between seemingly different directions is especially strong, and this is due to the philosophical debate about strong and weak AI.
At the beginning of the 17th century, Rene Descartes suggested that an animal is a kind of complex mechanism, thereby formulating the mechanistic theory. In 1623, Wilhelm Schickard built the first mechanical digital calculating machine, followed by the machines of Blaise Pascal (1643) and Leibniz (1671). Leibniz was also the first to describe the modern binary number system, although many great scientists before him had periodically taken an interest in it. In the 19th century, Charles Babbage and Ada Lovelace worked on a programmable mechanical computer.
In 1910-1913. Bertrand Russell and A. N. Whitehead published Principia Mathematica, which revolutionized formal logic. In 1941, Konrad Zuse built the first working program-controlled computer. Warren McCulloch and Walter Pitts published A Logical Calculus of the Ideas Immanent in Nervous Activity in 1943, which laid the foundation for neural networks.
The current state of affairs
At the moment (2008), the creation of artificial intelligence (in the original sense of the word - expert systems and chess programs do not belong here) suffers from a shortage of ideas. Almost all approaches have been tried, but no research group has yet come close to producing artificial intelligence.
Some of the most impressive civilian AI systems are:
Deep Blue - defeated the world chess champion. (The match between Kasparov and the supercomputer satisfied neither computer scientists nor chess players, and Kasparov did not acknowledge the system, although compact chess programs have since become an integral element of chess practice. The IBM supercomputer line later showed itself in the brute-force Blue Gene project (molecular modeling) and in the modeling of the pyramidal-cell system at the Swiss Blue Brain Center. This story is an example of the intricate and secretive relationship between AI, business, and national strategic goals.)
Mycin was one of the early expert systems that could diagnose a small subset of diseases, often as accurately as doctors.
20q is an AI project inspired by the classic game of 20 Questions. It became very popular after appearing online at 20q.net.
Speech recognition. Systems such as ViaVoice are capable of serving consumers.
Robots in the annual RoboCup tournament compete in a simplified form of football.
Application of AI
Banks apply artificial intelligence systems in insurance (actuarial mathematics), in stock trading, and in property management. In August 2001, robots beat humans in an impromptu trading competition (BBC News, 2001). Pattern recognition methods (including more complex specialized methods as well as neural networks) are widely used in optical and acoustic recognition (including text and speech), medical diagnostics, spam filters, and air-defense systems (target identification), as well as in a number of other national-security tasks.
Computer game developers are forced to use AI of varying degrees of sophistication. Standard AI tasks in games include finding a path in 2D or 3D space, simulating the behavior of a combat unit, calculating the right economic strategy, and so on.
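As an illustration of the path-finding task mentioned above, here is a breadth-first search on a small 2D grid (the map is invented for the example). Games more often use A* with a heuristic for speed, but the overall structure is the same.

```python
from collections import deque

# '#' is a wall, 'S' the start, 'G' the goal.
grid = [
    "S..#.",
    ".#.#.",
    ".#...",
    "...#G",
]

def find(ch):
    # Locate a character on the grid, returning (row, col).
    for r, row in enumerate(grid):
        if ch in row:
            return r, row.index(ch)

def shortest_path_length(start, goal):
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        # Expand the four neighbors (no diagonal movement).
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # no path exists

print(shortest_path_length(find("S"), find("G")))  # 7
```

Because BFS explores positions in order of distance, the first time it reaches the goal is guaranteed to be via a shortest path; A* keeps this guarantee while visiting far fewer cells.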
Perspectives on AI
There are two directions of AI development:
the first is to solve the problems of bringing specialized AI systems closer to human capabilities and of integrating them, as realized in human nature;
the second is to create an Artificial Intelligence that integrates already-created AI systems into a single system capable of solving humanity's problems.
Relationship with other sciences
Artificial intelligence is closely related to transhumanism, and together with neurophysiology and cognitive psychology it forms a more general discipline called cognitive science. Philosophy plays a separate role in artificial intelligence.
Philosophical questions
The science of "creating artificial intelligence" could not fail to attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions about man and knowledge - and partly about the world order - were raised. On the one hand, these questions are inextricably linked with this science; on the other, they introduce a certain chaos into it. Among AI researchers there is still no dominant view on the criteria of intelligence, no agreed systematization of goals and tasks, and not even a strict definition of the science.
Can a machine think?
The most heated debate in the philosophy of artificial intelligence concerns whether creations of human hands can think. The question "Can a machine think?", which prompted researchers to create the science of modeling the human mind, was posed by Alan Turing in 1950. The two main points of view on this issue are called the hypotheses of strong and weak artificial intelligence.
The term "strong artificial intelligence" was introduced by John Searle, and his approach is characterized by his own words:
"Moreover, such a program would not just be a model of the mind; in the literal sense of the word it would itself be a mind, in the same sense in which the human mind is a mind."
In contrast, weak AI advocates prefer to view software as merely a tool for solving certain tasks that do not require the full range of human cognitive abilities.
In his "Chinese Room" thought experiment, John Searle shows that passing the Turing test is not a criterion for a machine to have a genuine thought process.
Thinking is the process of processing information stored in memory: analysis, synthesis and self-programming.
A similar position is taken by Roger Penrose, who argues in his book The Emperor's New Mind that a thought process cannot be obtained on the basis of formal systems.
There are different points of view on this issue. The analytical approach involves breaking down a person's higher nervous activity to the lowest, indivisible level (a function of higher nervous activity, an elementary reaction to external stimuli, the firing of synapses in a set of functionally connected neurons) and then reproducing these functions.
Some experts equate intelligence with the ability to make a rational, motivated choice under a lack of information. That is, a program of activity (not necessarily one implemented on modern computers) is considered intelligent simply if it can choose from a certain set of alternatives - for example, where to go in the case of "if you go left...", "if you go right...", "if you go straight...".
Science of knowledge
Epistemology - the science of knowledge within the framework of philosophy - is also closely related to the problems of artificial intelligence. Philosophers dealing with this problem address questions similar to those faced by AI engineers: how best to represent and use knowledge and information.
Attitude towards AI in society
AI and religion
Among the followers of the Abrahamic religions, there are several points of view on the possibility of creating AI based on a structural approach.
According to one of them, the brain - whose work such systems try to imitate - does not participate in the process of thinking and is not a source of consciousness or any other mental activity; on this view, creating AI based on a structural approach is impossible.
In accordance with another point of view, the brain participates in the process of thinking, but in the form of a "transmitter" of information from the soul. The brain is responsible for such "simple" functions as unconditioned reflexes, reaction to pain, etc. The creation of AI based on a structural approach is possible if the system being designed can perform "transfer" functions.
Neither position corresponds to the data of modern science, because modern science does not regard the concept of the soul as a scientific category.
According to many Buddhists, AI is possible. Thus the spiritual leader, the 14th Dalai Lama, does not exclude the possibility of consciousness existing on a computer basis.
Raelites actively support developments in the field of artificial intelligence.
AI and science fiction
In science fiction literature, AI is most often portrayed either as a force trying to overthrow human rule (Omnius, HAL 9000, Skynet, Colossus, the Matrix, the replicants) or as a servant of humanity (C-3PO, Data, KITT and KARR, the Bicentennial Man). The inevitability of AI escaping control and dominating the world is disputed by science fiction writers such as Isaac Asimov and Kevin Warwick.
A curious vision of the future is presented in The Turing Option by science fiction writer Harry Harrison and scientist Marvin Minsky. The authors describe a person losing his humanity after a computer is implanted in his brain, and a machine with AI acquiring humanity after information from a human brain is copied into its memory.
Some science fiction writers, such as Vernor Vinge, have also speculated about the consequences of the advent of AI, which is likely to bring dramatic changes to society. This period is called the technological singularity.
This year, Yandex launched the Alice voice assistant. The new service lets the user listen to news and weather reports, get answers to questions, and simply chat with the bot. Alice is sometimes cheeky and at times seems almost sentient and humanly sarcastic, but she often cannot work out what she is being asked and falls flat on her face.
All this has given rise not only to a wave of jokes but also to a new round of discussion about the development of artificial intelligence. News of what smart algorithms have achieved arrives almost daily, and machine learning is called one of the most promising fields to devote oneself to.
To clarify the main questions about artificial intelligence, we talked with Sergey Markov, a specialist in artificial intelligence and machine learning methods, author of SmarThink, one of the most powerful domestic chess programs, and creator of the XXIII Century project.
Sergei Markov,
artificial intelligence specialist
Debunking myths about AI
So what is "artificial intelligence"?
The term "artificial intelligence" has been somewhat unlucky. Having originated in the scientific community, it eventually penetrated science fiction literature and, through it, pop culture, where it underwent a number of changes, acquired many interpretations, and was ultimately thoroughly mystified.
That is why we often hear statements from non-specialists such as "AI does not exist" or "AI cannot be created". A lack of understanding of the research conducted in the field of AI easily leads people to the other extreme: modern AI systems are credited with consciousness, free will, and secret motives.
Let's try to separate fact from fiction.
In science, artificial intelligence refers to systems designed to solve intellectual problems.
In turn, an intellectual task is a task that people solve with the help of their own intellect. Note that in this case, experts deliberately avoid defining the concept of "intelligence", because before the advent of AI systems, the only example of intelligence was the human intellect, and defining the concept of intelligence based on a single example is the same as trying to draw a straight line through a single point. There can be as many such lines as you like, which means that the debate about the concept of intelligence could be waged for centuries.
"Strong" and "weak" artificial intelligence
AI systems are divided into two large groups.
Applied artificial intelligence (also called "weak AI" or "narrow AI"; in the English-language tradition, weak/applied/narrow AI) is AI designed to solve a single intellectual task or a small number of them. This class includes systems for playing chess or Go, recognizing images or speech, deciding whether to grant a bank loan, and so on.
In contrast to applied AI, the concept of universal artificial intelligence is introduced (also called "strong AI"; in English, strong AI or Artificial General Intelligence): a so-far hypothetical AI capable of solving any intellectual task.
People unfamiliar with the terminology often identify AI with strong AI, which is where judgments like "AI does not exist" come from.
Strong AI does not yet exist. Virtually all the advances we have seen in AI over the last decade have been advances in applied systems. These successes should not be underestimated, since in some cases applied systems solve intellectual problems better than universal human intelligence does.
As you may have noticed, the concept of AI is quite broad. Mental arithmetic, say, is also an intellectual task, which means that any calculating machine counts as an AI system. What about the abacus? The Antikythera mechanism? Formally, all of these are AI systems, albeit primitive ones. Usually, however, when we call some system an AI system, we are emphasizing the complexity of the task it solves.
Obviously, the division of intellectual tasks into simple and complex ones is quite artificial, and our notions of the complexity of particular tasks gradually change. The mechanical calculating machine was a marvel of technology in the 17th century, but today it no longer impresses people who have grown up surrounded by far more complex machinery. When machines playing Go or self-driving cars cease to surprise the public, there will surely be people who wince when such systems are called AI.
"Robot star pupils": on AI's ability to learn
Another amusing misconception is that AI systems must be capable of self-learning. On the one hand, this is by no means an obligatory property of AI systems: there are many remarkable systems that cannot self-learn yet still solve many problems better than the human brain. On the other hand, some people simply do not know that self-learning is a capability many AI systems acquired more than fifty years ago.
When I wrote my first chess program in 1999, self-learning was already commonplace in the field: programs could memorize dangerous positions, tune their opening repertoires, and adjust their playing style to the opponent. Those programs were, of course, still a long way from AlphaZero. Moreover, systems that learned behavior through interaction with other systems, in so-called "reinforcement learning" experiments, already existed. Yet for some inexplicable reason some people still believe that the capacity for self-learning is the prerogative of human intelligence.
The processes of teaching machines to solve particular problems are the subject of an entire scientific discipline: machine learning.
There are two big poles of machine learning - supervised learning and unsupervised learning.
In supervised learning ("learning with a teacher"), the machine already has a set of nominally correct answers for a certain collection of cases. The task is to teach the machine, based on the available examples, to make correct decisions in other, unknown situations.
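The idea of supervised learning can be sketched in a few lines of Python. The toy "loan decision" data and the nearest-neighbour rule below are invented purely for illustration; real systems use far richer models:

```python
# Minimal supervised learning sketch: a 1-nearest-neighbour classifier.
# The training cases come with known-correct labels; a new, unseen case
# is labeled by analogy with the closest known example.

def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# (features, label) pairs: hypothetical (age, income) -> loan decision
train = [((25, 1000), "deny"), ((40, 5000), "approve"),
         ((35, 4000), "approve"), ((22, 800), "deny")]

print(nearest_neighbor(train, (38, 4500)))  # -> approve
```

Given a new applicant, the program simply answers by analogy with the most similar known case, which is exactly the "learn from labeled examples, then generalize" pattern described above.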
At the other pole is unsupervised learning ("learning without a teacher"). Here the machine is put in a situation where the correct answers are unknown and only raw, unlabeled data is available. It turns out that even in such cases some success can be achieved. For example, a machine can be taught to identify semantic relationships between words in a language by analyzing a very large corpus of texts.
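A classic unsupervised example is clustering: the program receives only raw numbers and discovers groups in them by itself. The sketch below is a deliberately simplified one-dimensional k-means with made-up data:

```python
# Minimal unsupervised learning sketch: 1-D k-means clustering.
# No correct answers are supplied; the algorithm finds structure
# (two groups of numbers) on its own.

def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        # assign every point to its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print(sorted(kmeans_1d(data, centers=[0.0, 5.0])))  # two centers, near 1 and 10
```

Nobody told the program that the data contains two groups around 1 and 10; it recovers that structure purely from the unlabeled numbers.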
A distinct paradigm is reinforcement learning. The idea is that the AI system acts as an agent placed in a model environment, in which it can interact with other agents, for example with copies of itself, and receives feedback from the environment through a reward function. For example, a chess program that plays against itself gradually adjusts its parameters and thereby gradually strengthens its own play.
Reinforcement learning is a fairly broad field that employs many interesting techniques, from evolutionary algorithms to Bayesian optimization. Recent advances in game-playing AI are precisely advances in reinforcement learning.
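The reward-driven loop described above can be illustrated with the simplest possible agent: a two-armed bandit with an epsilon-greedy policy. The payoff probabilities here are invented for the sketch; a chess engine's self-play loop is vastly more elaborate, but the principle of nudging estimates toward whatever the environment rewards is the same:

```python
# Toy reinforcement learning sketch: an epsilon-greedy two-armed bandit.
# The agent repeatedly picks an action, receives a reward from the
# environment, and shifts its value estimates toward the better action.
import random

random.seed(0)
values = [0.0, 0.0]        # current value estimate for each action
counts = [0, 0]            # how many times each action was tried
true_reward = [0.2, 0.8]   # hidden payoff probabilities (unknown to the agent)

for step in range(2000):
    # explore with probability 0.1, otherwise exploit the best-looking arm
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = 0 if values[0] >= values[1] else 1
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # incremental average: move the estimate toward the observed reward
    values[action] += (reward - values[action]) / counts[action]

print(values)  # the estimate for arm 1 should end up near 0.8
```

No one ever tells the agent which arm is better; the preference for arm 1 emerges entirely from the stream of rewards, which is the essence of the paradigm.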
Technology Risks: Should We Be Afraid of Doomsday?
I am not one of the AI alarmists, and in this sense I am by no means alone. For example, Andrew Ng, creator of the Stanford Machine Learning course, compares the dangers of AI to the problem of overpopulation on Mars.
Indeed, it is likely that in the future humans will colonize Mars. It is also likely that sooner or later Mars will face an overpopulation problem, but it is not entirely clear why we should deal with that problem now. Yann LeCun, the creator of convolutional neural networks, his boss Mark Zuckerberg, and Yoshua Bengio, the man whose research has done much to enable modern neural networks to solve complex text-processing problems, all agree with Ng.
It will probably take several hours to present my views on this problem, so I will focus only on the main theses.
1. DO NOT LIMIT AI DEVELOPMENT
Alarmists weigh the risks associated with potential failures of AI while ignoring the risks of trying to limit or even halt progress in this area. Humanity's technological power is growing at an extremely rapid pace, producing an effect I call the "cheapening of the apocalypse".
150 years ago, humanity, even with the worst of intentions, could not have caused irreparable damage to the biosphere or to itself as a species. Fifty years ago, carrying out a catastrophic scenario would have required concentrating the entire technological might of the nuclear powers. Tomorrow, a small handful of fanatics may be enough to bring about a global man-made disaster.
Our technological power is growing much faster than the ability of human intelligence to control this power.
Unless human intelligence, with its prejudices, aggression, delusions, and narrow-mindedness, is replaced by a system capable of making more informed decisions (whether an AI or, as I think more likely, a technologically augmented human intelligence integrated with machines into a single system), a global catastrophe may await us.
2. THE CREATION OF SUPERINTELLIGENCE IS FUNDAMENTALLY IMPOSSIBLE
There is an idea that the AI of the future will inevitably be superintelligent, superior to humans even more than humans are superior to ants. Here, I am afraid, I must disappoint the technological optimists too: our Universe contains a number of fundamental physical limitations that apparently make the creation of superintelligence impossible.
For example, the speed of signal transmission is limited by the speed of light, and Heisenberg uncertainty appears at the Planck scale. From this follows the first fundamental limit, the Bremermann limit, which bounds the maximum computational speed of a self-contained system of a given mass m.
Another limit follows from Landauer's principle, according to which a minimum amount of heat is released when processing one bit of information. Computations that are too fast will cause unacceptable heating and destroy the system. In fact, modern processors are within a factor of a thousand of the Landauer limit. A factor of 1000 may seem like a lot, but another problem is that many intellectual tasks belong to the EXPTIME complexity class, meaning the time required to solve them is an exponential function of the problem's size. Speeding the system up several times therefore yields only a constant increase in "intelligence".
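These two limits are easy to evaluate numerically. The constants below are standard reference values; the interpretation (one kilogram of computing matter, room temperature) is a back-of-the-envelope sketch rather than a claim from the interview:

```python
# Rough numerical check of the Landauer and Bremermann limits.
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0             # room temperature, K
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s

# Landauer limit: minimum heat dissipated per bit of information erased
landauer = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer:.2e} J per bit")   # ~2.87e-21 J

# Bremermann limit: maximum processing rate for 1 kg of matter, c^2 / h
bremermann = c ** 2 / h
print(f"Bremermann limit: {bremermann:.2e} bits/s per kg")    # ~1.36e50
```

Huge as 10^50 bits per second sounds, it is still a finite ceiling, and against an EXPTIME-hard task any constant-factor speedup buys only a modest additive gain in problem size.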
In general, there are very serious reasons to believe that a super-intelligent strong AI will not work, although, of course, the level of human intelligence may well be surpassed. How dangerous is it? Most likely not very much.
Imagine that you suddenly started thinking 100 times faster than other people. Does this mean that you will easily be able to persuade any passer-by to give you their wallet?
3. WE SHOULD BE WORRYING ABOUT SOMETHING ELSE
Unfortunately, as a result of alarmists playing on the fears of a public raised on the Terminator and on Clarke and Kubrick's famous HAL 9000, the focus of AI safety has shifted toward analyzing unlikely but spectacular scenarios. Meanwhile, the real dangers slip out of view.
Any sufficiently complex technology that claims an important place in our technological landscape brings specific risks with it. Many lives were destroyed by steam engines, in industry, in transport, and elsewhere, before effective rules and safety measures were developed.
If we talk about progress in applied AI, we should pay attention to the related problem of the so-called "digital secret court". More and more applied AI systems make decisions on matters affecting people's lives and health. These include medical diagnostic systems and, for example, systems that decide on a bank's behalf whether to grant a client a loan.
At the same time, the structure of the models used, the sets of factors used, and other details of the decision-making procedure are hidden from the person whose fate is at stake.
The models used may base their decisions on the opinions of expert "teachers" who made systematic mistakes or held certain prejudices, racial or gender-based.
An AI trained on the decisions of such experts will conscientiously reproduce those prejudices in its own decisions. Besides, the models themselves may contain specific defects.
Few people are working on these problems now, because a SkyNet unleashing nuclear war is, of course, far more spectacular.
Neural networks as a "hot trend"
On the one hand, neural networks are one of the oldest models used to build AI systems. Having originally emerged from the bionic approach, they quickly diverged from their biological prototypes. The one exception is spiking neural networks (which, however, have not yet found wide industrial application).
The progress of recent decades is associated with the development of deep learning: an approach in which neural networks are assembled from a large number of layers, each built according to certain regular patterns.
In addition to new neural network models, important progress has also been made in training technology. Today, neural networks are trained not on computers' central processors but on specialized processors capable of fast matrix and tensor computations. The most common such devices today are graphics cards (GPUs), and even more specialized devices for training neural networks are being actively developed.
In general, of course, neural networks today are one of the main technologies in the field of machine learning, to which we owe the solution of many problems that were previously solved unsatisfactorily. On the other hand, of course, you need to understand that neural networks are not a panacea. For some tasks, they are far from the most effective tool.
So how smart are today's robots really?
Everything is relative. Against the background of the technology of 2000, today's achievements look like a genuine miracle. There will always be people who like to grumble. Five years ago they were insisting loudly that machines would never beat humans at Go (or at least would not win any time soon). It was said a machine could never paint a picture from scratch, yet today people are practically unable to distinguish machine-generated pictures from paintings by artists unknown to them. At the end of last year, machines learned to synthesize speech almost indistinguishable from a human's, and in recent years machine-composed music has stopped making listeners' ears hurt.
Let's see what happens tomorrow. I look at these applications of AI with great optimism.
Promising directions: where to start diving into the field of AI?
I would advise trying to master, at a good level, one of the popular neural network frameworks and one of the programming languages popular in machine learning (the most popular combination today is TensorFlow + Python).
Having mastered these tools and ideally having a strong base in the field of mathematical statistics and probability theory, you should direct your efforts to the area that will be most interesting to you personally.
Interest in the subject of work is one of your most important assistants.
Machine learning specialists are needed in a variety of fields: medicine, banking, science, manufacturing. A good specialist today therefore has a wider choice than ever before. The potential perks of any particular industry seem to me insignificant compared with the work actually bringing you pleasure.
Artificial intelligence is the ability of a digital computer or computer-controlled robot to perform tasks normally associated with sentient beings. The term is often applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, generalize, or learn from past experience. The definition of AI also comes down to a description of a set of related technologies and processes, such as machine learning, virtual agents, and expert systems.

In simple terms, AI is a crude mapping of the neurons in the brain: signals are passed from neuron to neuron until an output, a numerical, categorical, or generative result, is finally produced. This can be illustrated with an example. If a system takes a picture of a cat and is trained to recognize whether it is a cat, the first layer may identify the general gradients that define the cat's overall shape. The next layer may identify larger objects, such as the ears and mouth, and the third layer smaller objects (such as whiskers). Finally, based on this information, the program outputs "yes" or "no" to say whether it is a cat. The programmer does not need to "tell" the neurons which features to look for: the AI learns them on its own by training on many images (both with and without cats).
What is artificial intelligence?
Description of the artificial neuron
An artificial neuron is a mathematical function conceived as a model of a biological neuron in a neural network. Artificial neurons are the elementary units of artificial neural networks. An artificial neuron receives one or more inputs and sums them to produce an output, or "firing", representing the neuron's action potential, which is transmitted along its axon. Typically, each input is weighted separately, and the weighted sum is passed through a non-linear function known as an activation function or transfer function.
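In code, the neuron just described is only a weighted sum and a squashing function. The weights and inputs below are arbitrary numbers chosen for the sketch, and the logistic sigmoid is one common choice of activation function among many:

```python
# A single artificial neuron: weighted sum of inputs plus a bias,
# passed through a non-linear activation function (logistic sigmoid).
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid squashes to (0, 1)

out = neuron(inputs=[0.5, -1.0, 2.0], weights=[0.4, 0.3, 0.1], bias=0.0)
print(round(out, 3))  # -> 0.525
```

A neural network is nothing more than many such units wired together, with the output of one layer of neurons feeding the inputs of the next.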
When did AI research start?
In 1935, the British researcher A. M. Turing described an abstract computing machine consisting of an unbounded memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The scanner's actions are dictated by a program of instructions that is also stored in memory as symbols. The earliest successful AI program was written in 1951 by Christopher Strachey; by 1952 it could play checkers against a person, surprising everyone with its ability to anticipate moves. In 1953, Turing published a classic early paper on chess programming.
The difference between artificial intelligence and natural
Intelligence can be defined as the general mental capacity for reasoning, problem solving, and learning. Because of its general nature, intelligence integrates cognitive functions such as perception, attention, memory, language, and planning. Natural intelligence is distinguished by a conscious attitude to the world: human thinking is always emotionally colored and cannot be separated from the body. Moreover, a person is a social being, so society always influences thinking. AI, by contrast, is unrelated to the emotional sphere and is not socially oriented.
How to compare human and computer intelligence?
Human thinking can be compared with artificial intelligence on several general parameters of the organization of brain and machine. Like the brain, a computer's activity comprises four stages: encoding, storage, data analysis, and output of a result. In addition, both the human brain and AI can self-learn from data received from the environment, and both solve problems (or tasks) using certain algorithms.
Do computer programs have an IQ?
No. IQ measures the development of a person's intelligence relative to age. AI exceeds some human abilities, for example it can memorize vast quantities of numbers, but this has nothing to do with IQ.
What is the Turing test?
Alan Turing devised an empirical test to show whether a program can capture the nuances of human behavior so well that a person cannot tell whether they are communicating with an AI or with a live interlocutor. Turing proposed that an outside observer evaluate a conversation between a person and a machine that answers questions. The judge does not see who is answering but knows that one of the interlocutors is an AI. The conversation is limited to a text channel (a computer keyboard and screen), so the result is not affected by the machine's ability to render words as audible speech. If the program manages to deceive the person, it is considered to have passed the test.

Symbolic approach
The symbolic approach to AI is the totality of AI research methods based on high-level, symbolic (human-readable) representations of tasks, logic, and search. The symbolic approach dominated AI research from the 1950s through the 1980s. One popular form of the symbolic approach is expert systems, which use collections of production rules. Production rules link symbols in logical relationships similar to an If-Then statement: the expert system processes the rules to draw conclusions and to determine what additional information it needs, that is, what questions to ask, using human-readable symbols.
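A production-rule system of this kind can be sketched in a few lines: a rule fires whenever its If-part matches the known facts, and firing continues until nothing new can be derived (forward chaining). The cat-themed rules and facts below are invented purely for illustration:

```python
# Toy expert system sketch: If-Then production rules over readable symbols,
# applied repeatedly (forward chaining) until no new facts appear.

rules = [
    ({"has_fur", "says_meow"}, "is_cat"),
    ({"is_cat"}, "is_mammal"),
    ({"is_mammal"}, "is_animal"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # fire the rule if all its conditions hold and it adds a new fact
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"has_fur", "says_meow"}, rules)))
```

Real expert systems add thousands of rules, certainty factors, and question-asking strategies, but the core loop of matching symbols against If-Then rules is the same.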
Logical approach
The term "logical approach" refers to the use of logic: reasoning and solving problems through logical steps. Logicians as far back as the 19th century developed precise notations for all kinds of objects in the world and the relationships between them. By 1965 there were programs that could, in principle, solve any solvable logical problem (the approach's popularity peaked from the late 1950s through the 1970s). Proponents of the logical approach hoped to build intelligent systems on top of such programs (written, in particular, in Prolog). However, the approach has two limitations. First, it is not easy to take informal knowledge and cast it into the formal terms required for machine processing. Second, there is a big difference between solving a problem in theory and solving it in practice: even problems with a few hundred facts can exhaust the computational resources of any computer unless it has some guidance about which reasoning steps to try first.
Agent Based Approach
An agent is something that acts (from the Latin agere, "to do"). Of course, all computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment (through sensors), adapt to change, and form and pursue goals. A rational agent is one that acts so as to achieve the best expected outcome.
Hybrid approach
It is assumed that this approach, which became popular in the late 80s, works most effectively, as it is a combination of symbolic and neural models. The hybrid approach increases the cognitive and computational capabilities of the machine.
Artificial intelligence technology market
The market is expected to grow to $190.61 billion by 2025, with an annual growth rate of 36.62%. Market growth is driven by factors such as the adoption of cloud applications and services, the emergence of big data, and active demand for intelligent virtual assistants. However, there are still few specialists who develop and implement AI technologies, and this is holding back market growth. AI-powered systems also require integration and maintenance support.
Processors for AI
Modern AI tasks require powerful processors that can handle huge amounts of data. The processors must have access to large amounts of memory, and the device also needs high-speed data links.

In Russia
At the end of 2018, a series of high-performance Elbrus-804 servers was launched in Russia. Each machine is equipped with four eight-core processors. These devices can be used to build computing clusters and can run applications and databases.
World market
The market's drivers and leaders are two corporations, Intel and AMD, makers of the most powerful processors. Intel has traditionally focused on machines with higher clock speeds, while AMD focuses on steadily increasing core counts and delivering multi-threaded performance.
National Development Concept
Three dozen countries have already approved national strategies for the development of AI. In October 2019, the draft National Strategy for the Development of AI should be adopted in Russia. It is assumed that Moscow will introduce a legal regime that facilitates the development and implementation of AI technologies.
AI Research
The questions of what artificial intelligence is and how it works have occupied scientists in different countries for decades. The US government allocates $200 million annually for research; in Russia, about 23 billion rubles were allocated over the 10 years from 2007 to 2017. Sections on supporting AI research will be an important part of the national strategy. New scientific centers will soon open in Russia, and the development of innovative AI software will continue.
AI standardization
Norms and rules in the field of AI in Russia are being continuously refined. National standards, now being drafted by market leaders, are expected to be approved in late 2019 or early 2020. In parallel, a National Standardization Plan for 2020 and beyond is being drawn up. Work is under way on the standard "Artificial Intelligence. Concepts and Terminology", and in 2019 experts began developing its Russian version. The document is due to be approved in 2021.
The impact of artificial intelligence
The introduction of AI is inextricably linked with scientific and technological progress, and its scope of application widens every year. We encounter it daily: when a large online retail chain recommends a product to us, or when we open the computer and see an advertisement for a film we had just been wanting to watch. These recommendations are based on algorithms that analyze what the consumer has bought or watched; behind those algorithms is artificial intelligence.
Is there a risk to the development of human civilization?
Elon Musk believes that the development of AI may threaten humanity, with results potentially worse than the use of nuclear weapons. The British scientist Stephen Hawking fears that people could create an artificial superintelligence capable of harming humans.

On the economy and business
The penetration of AI technology into all areas of the economy will increase the volume of the global market for services and goods by $15.7 trillion by 2030. The US and China are still leaders in terms of all kinds of projects in the field of AI. Developed countries - Germany, Japan, Canada, Singapore - also strive to realize all the possibilities. Many moderately growing economies, such as Italy, India, Malaysia, are developing strengths in specific AI applications.
To the labor market
The global impact of AI on the labor market will follow two scenarios. First, the spread of some technologies will lead to large-scale layoffs as computers take over many tasks. Second, thanks to technical progress, AI professionals will be in high demand across many industries.
AI bias
Bias in AI systems is likely to become an increasingly common problem as AI moves out of the lab and into the real world. Researchers fear that without adequate training in data assessment and in identifying potential data bias, vulnerable groups in society may be harmed or have their rights infringed. So far, researchers have no data on whether machine-learning systems will threaten humanity.
Applications
Artificial intelligence and its applications are undergoing a transformation. The term Weak AI ("weak AI") is used for the performance of narrow tasks in medical diagnostics, electronic trading platforms, and robot control, whereas researchers define Strong AI ("strong AI") as an intelligence that faces global tasks of the kind set for a human.
Defense and military use
By 2025, global sales of related services, software, and hardware will rise to $18.82 billion, with annual market growth of 14.75%. AI is used for data aggregation, bioinformatics, military training, and the defense sector.

In education
Many schools include AI introductory classes in their computer science curriculum, and universities are making extensive use of big data technologies. Some programs monitor student behavior, grade tests and essays, recognize pronunciation errors, and suggest corrections.
There are also online courses on artificial intelligence, for example on an educational portal.
In business and trade
Over the next five years, leading retailers will offer mobile apps that work with digital assistants like Siri to make shopping easier. AI makes it possible to earn enormous sums online; one example is Amazon, which constantly analyzes consumer behavior and refines its algorithms.
In the power industry
AI helps predict the generation of and demand for energy resources, reduce losses, and prevent resource theft. In the electric power industry, AI-driven analysis of statistical data helps choose the most profitable supplier and automate customer service.
In the manufacturing sector
According to a McKinsey survey of 1,300 executives, 20% of businesses already use AI. Recently, Mosselprom introduced AI in its packaging shop, using AI's image-recognition capability: a camera captures the employee's actions by scanning the barcode printed on their clothing and sends the data to a computer. The number of operations performed directly affects the employee's pay.
In brewing
Carlsberg uses machine learning to select yeast and expand its range. The technology is implemented on a digital cloud platform.

In banking
The need for reliable data processing, the growth of mobile technologies, the availability of information, and the spread of open-source software make AI an in-demand technology in the banking sector. More and more banks are raising funds through mobile app development companies. New technologies are improving customer service, and analysts predict that within five years AI in banks will make most decisions on its own.
On transport
The development of AI technologies is the driver of the transport industry. Road condition monitoring, detection of pedestrians or objects in the wrong places, autonomous driving, cloud services in the automotive industry are just a few examples of the use of AI in transport.
In logistics
The power of AI is enabling companies to better predict demand and build supply chains more cost-effectively. AI helps to reduce the number of vehicles needed for transportation, optimize delivery times, and reduce the operating costs of transport and storage facilities.
In the market of luxury goods and services
Luxury brands have also turned to digital technology to analyze customer needs. One of the challenges facing developers in this segment is managing and influencing customer emotions. Dior is already adapting AI to manage customer-brand interactions through chatbots. In the future, luxury brands will compete on the level of personalization they can achieve, and AI will be decisive there.
In public administration
The state apparatuses of many countries are not yet ready for the challenges that are hidden in AI technologies. Experts predict that many of the existing government structures and processes that have evolved over the past few centuries are likely to become irrelevant in the near future.
In forensics
Various AI approaches are used to identify criminals in public places. In some countries, such as the Netherlands, the police use AI to investigate complex crimes. Digital forensics is an emerging science that requires mining huge amounts of very complex data.
In the judiciary
Developments in artificial intelligence will help radically change the judicial system, making it fairer and free from corruption. China was among the first to use AI in its judicial system. It can be assumed that robot judges will eventually be able to work with big data from public-service repositories. Machine intelligence analyzes enormous amounts of data, and it does not experience emotions the way a human judge does. AI can have a huge impact on information processing and the collection of statistics, as well as on predicting possible offenses based on data analysis.
In sports
The application of AI in sports has become commonplace in recent years. Sports teams (baseball, football, and others) analyze individual players' performance data, taking many factors into account during selection. AI can predict players' future potential by analyzing their playing technique, physical condition, and other data, and can also estimate their market value.
In healthcare and medicine
This area of application is growing rapidly. AI is being used in disease diagnosis, clinical research, drug development, and health insurance. In addition, there is now a boom in investment in numerous medical applications and devices.
Analysis of citizens' behavior
Monitoring citizens' behavior is widely used in the security field, including monitoring behavior on websites (in social networks) and in messengers. For example, in 2018 Chinese scientists managed to identify 20,000 potential suicides and provide them with psychological help. In March 2018, Vladimir Putin instructed government agencies to step up efforts to counter the negative influence of destructive movements in social networks.
In the development of culture
AI algorithms are starting to generate works of art that are difficult to distinguish from those created by humans. AI offers creative people many tools for bringing ideas to life. The very understanding of the artist's role, in a broad sense, is changing right now: AI provides many new methods, but it also poses many new questions for humanity.
Painting
Art has long been considered the exclusive sphere of human creativity. But it turns out that machines can do a lot more in the creative field than people realize. In October 2018, Christie's sold the first AI painting for $432,500. A generative adversarial network algorithm was used, which analyzed 15,000 portraits created between the 15th and 20th centuries.
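The adversarial idea behind such a network can be shown with a deliberately tiny NumPy sketch. This is a hypothetical illustration, not the system used for the auctioned portrait: a linear "generator" learns to emit numbers that a simple logistic "discriminator" cannot tell apart from samples of a target distribution. Every constant here is invented for the example.

```python
import numpy as np

# "Real" data are samples from N(4, 1.25); the generator G(z) = a*z + c maps
# standard normal noise toward that distribution, while the discriminator
# D(x) = sigmoid(w*x + b) tries to tell real samples from generated ones.
rng = np.random.default_rng(0)
a, c = 1.0, 0.0   # generator parameters
w, b = 0.1, 0.0   # discriminator parameters

def sigmoid(t):
    e = np.exp(-np.abs(t))              # numerically stable logistic
    return np.where(t >= 0, 1.0 / (1.0 + e), e / (1.0 + e))

lr = 0.01
for step in range(2000):
    real = 4.0 + 1.25 * rng.standard_normal(64)
    z = rng.standard_normal(64)
    fake = a * z + c

    # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * np.mean(-(1.0 - d_real) * real + d_fake * fake)
    b -= lr * np.mean(-(1.0 - d_real) + d_fake)

    # Generator step: minimize -log D(fake), i.e. try to fool D.
    d_fake = sigmoid(w * fake + b)
    a -= lr * np.mean(-(1.0 - d_fake) * w * z)
    c -= lr * np.mean(-(1.0 - d_fake) * w)

samples = a * rng.standard_normal(10_000) + c
print(float(np.mean(samples)))   # the mean drifts from 0 toward the real data (near 4)
```

The real painting network used deep convolutional models trained on thousands of images; the same tug-of-war between generator and discriminator is what this toy loop imitates.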
Music
Several music programs have been developed that use AI to compose music. As in other areas, AI here simulates a mental task. A notable feature is the algorithm's ability to learn from the information it receives, such as computer tracking technology that can listen to and follow a human performer. AI also drives what is known as interactive composition technology, in which a computer composes music in response to a live musician's performance. In early 2019, Warner Music signed its first-ever contract with an algorithmic performer, Endel. Under the terms of the contract, the Endel neural network will release 20 unique albums during the year.
Photography
AI is rapidly changing how we think about photography. Within just a couple of years, most advances in this field will be driven by AI rather than by optics or sensors, as before. For the first time, progress in photography technology will be unrelated to physics and will create an entirely new way of thinking about the photograph. Even now, neural networks detect the slightest changes when modeling faces in photo editors.
Video: face swap
In 2015, Facebook began testing DeepFace technology on the site. In 2017, Reddit user DeepFakes came up with an algorithm to create realistic face swap videos using neural networks and machine learning.
Media and literature
In 2016, Google's AI, after analyzing 11,000 unpublished books, began writing its first literary works. In 2017, Facebook AI Research developed a neural network system that can write poetry on any topic. In November 2015, the Russian company Yandex launched its work on automatic text generation.
Go games, poker, chess
In 2016, an AI beat a human at Go (a game with more than 10^100 possible positions). In chess, a supercomputer defeated the human champion thanks to its ability to store in memory the moves people have ever played and to calculate new ones ten steps ahead. Poker is now played by bots, although it was once thought almost impossible to teach a computer this card game. Developers improve the algorithms year after year.
Face recognition
Face recognition technology is applied to both photos and video streams. A neural network builds a vector, or "digital" face template, and these templates are then compared within the system. The network finds reference points on the face that define its individual characteristics. The algorithm for computing these characteristics differs from system to system and is the developers' main secret.
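The template-comparison step can be sketched in a few lines. This is only an illustration: a real system's trained neural network maps an aligned face image to an embedding of hundreds of dimensions, while the 4-dimensional vectors and the 0.8 threshold below are invented for the example.

```python
import numpy as np

# Invented "digital templates" standing in for real network embeddings.
emb_a1 = np.array([0.9, 0.1, 0.4, 0.2])   # person A, photo 1
emb_a2 = np.array([0.8, 0.2, 0.5, 0.1])   # person A, photo 2
emb_b  = np.array([0.1, 0.9, 0.1, 0.7])   # person B

def cosine_similarity(u, v):
    """Compare two face templates; values near 1.0 mean 'same direction'."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

THRESHOLD = 0.8   # system-specific decision boundary (assumed)

print(cosine_similarity(emb_a1, emb_a2) > THRESHOLD)   # True  -> same person
print(cosine_similarity(emb_a1, emb_b) > THRESHOLD)    # False -> different people
```

Two photos of the same person yield templates pointing in nearly the same direction, so their similarity clears the threshold; templates of different people do not.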
For AI to develop and be used further, it is people, above all, who need to be trained
Sergey Shirkin
Dean of the Faculty of Artificial Intelligence
Artificial intelligence technologies in their current form have existed for about 5-10 years, but, oddly enough, applying them requires a large number of people. Accordingly, the main costs in the field of artificial intelligence are the costs of specialists, especially since almost all of the basic AI technologies (libraries, frameworks, algorithms) are free and publicly available. At one time, finding machine learning experts was almost impossible. Now, largely thanks to the development of MOOCs (Massive Open Online Courses), there are more of them. Higher education institutions also produce specialists, but graduates often have to round out their training with online courses.
Today, artificial intelligence can recognize that a person is planning to change jobs and offer them suitable online courses, many of which can be taken with nothing but a smartphone. This means you can study even on the go, for example, on the way to work. One of the first such projects was the online resource Coursera, but many similar educational projects have appeared since, each occupying its own niche in online education.
You need to understand that AI, like any program, is primarily a code, that is, a text designed in a certain way. This code needs development, maintenance and improvement. Unfortunately, this does not happen by itself; without a programmer, the code cannot “come to life”. Therefore, all fears about the omnipotence of AI are unfounded. Programs are created for strictly defined tasks, they do not have feelings and aspirations like a person, they do not perform actions that the programmer did not put into them.
It can be said that in our time, AI has only individual human skills, although it can outpace the average person in the speed of their application. True, many hours of effort of thousands of programmers are spent on developing each such skill. The most that AI is capable of so far is to automate some physical and mental operations, thereby freeing people from routine.
Does the use of AI carry any risks? The real risk now is failing to see the opportunities that artificial intelligence technologies offer. Many companies understand this and are developing in several directions at once, in the hope that one of them will take off. Online stores are an illustrative example: only those that recognized the need for AI before it became a trend have stayed afloat, while it was tempting to "save money" and not hire the necessary mathematician-programmers.
The prospect of development of artificial intelligence
Computers can now do many things that only humans used to be able to do: play chess, recognize letters of the alphabet, check spelling and grammar, recognize faces, take dictation, speak, win game shows, and more. But skeptics persist: once a human ability is automated, they say, it is just another computer program and not an example of self-learning AI. AI technologies are only beginning to find wide application and have huge growth potential in all areas. Over time, humanity will create ever more powerful computers, which will be further improved for developing AI.
Is the purpose of AI to put the human mind into a computer?
There is only a rough understanding of how the human brain works. So far, not all properties of the mind can be imitated using AI.
Can AI reach human levels of intelligence?
Scientists strive to ensure that AI can solve even more diverse problems. But it is premature to talk about reaching the level of human intelligence, since thinking is not limited to only one algorithm.
When can artificial intelligence reach the level of human thinking?
At the current stage of humanity's accumulation and analysis of information, AI is far from human thinking. However, breakthrough ideas may emerge in the future and trigger a sharp leap in the development of AI.
Can a computer become an intelligent machine?
A part of any complex machine is a computer system, and here it is possible to speak only of intelligent computer systems. The computer itself is not intelligent.
Is there a connection between speed and the development of intelligence in computers?
No, speed is responsible only for some properties of intelligence. By itself, the speed of processing and analyzing information is not enough for intelligence to appear.
Is it possible to create a children's machine that could develop through reading and self-learning?
This has been discussed by researchers for almost a hundred years. Probably, the idea will someday be implemented. Today, AI programs do not process and use as much information as children can.
How are computability theory and computational complexity related to AI?
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty and on relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, that is, by an algorithm.
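A small illustration of inherent difficulty: two algorithms that solve the same computational problem, membership testing in a sorted list, but belong to different complexity classes, O(n) versus O(log n).

```python
def linear_search(data, target):
    """Scan every element in order: O(n) comparisons in the worst case."""
    steps = 0
    for x in data:
        steps += 1
        if x == target:
            return True, steps
    return False, steps

def binary_search(data, target):
    """Halve the sorted search range each step: O(log n) comparisons."""
    steps, lo, hi = 0, 0, len(data) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if data[mid] == target:
            return True, steps
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, steps

data = list(range(1_000_000))
print(linear_search(data, 999_999)[1])   # 1,000,000 comparisons
print(binary_search(data, 999_999)[1])   # 20 comparisons
```

On a million-element list the linear scan needs up to a million comparisons while binary search needs about twenty; complexity theory studies exactly this kind of gap between classes of algorithms.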
Conclusion
Artificial intelligence has already had a huge impact on the development of our world, one that was impossible to predict even a century ago. Smart telephone networks route calls more efficiently than any human operator. Cars are built in unmanned factories by automated robots. Artificial intelligence is being integrated into the most ordinary household items, such as vacuum cleaners. The mechanisms of AI are not fully understood, but experts predict that in the coming years its development will come ever closer to that of the human brain.