What’s going on with artificial intelligence? An analysis of 16,625 papers over the past 25 years

Almost everything you hear about artificial intelligence today is due to deep learning. This category of algorithms uses statistics to find patterns in data, and it has proved extremely powerful at imitating human skills such as our ability to see and hear. To a very narrow extent, it can even emulate our ability to reason. These algorithms power Google search, Facebook’s news feed, and Netflix’s recommendation engine, and they are shaping industries such as health care and education.

How deep learning developed

Although deep learning is almost single-handedly responsible for the current excitement in the artificial intelligence community, it represents only a small episode in humanity’s historical quest to create its own intelligence. It has been at the forefront of that quest for less than ten years. If you zoom out on the history of the field, it is easy to see that deep learning, too, could soon be on its way out.

“If in 2011 someone had written that deep learning would be on the front pages of newspapers and magazines in a few years, we would have said: wow, that’s some strong stuff you’re smoking,” says Pedro Domingos, a professor of computer science at the University of Washington and author of the book ‘The Master Algorithm’.

According to him, sudden ups and downs of different methods have long characterized AI research. Every decade has seen fierce competition between different ideas. Then, from time to time, a switch flips and the whole community starts to pursue a single one.

Our colleagues at MIT Technology Review wanted to visualize these fits and starts. To that end, they turned to one of the largest open databases of scientific papers, known as arXiv. They downloaded the abstracts of all 16,625 papers available in the “artificial intelligence” section through November 18, 2018, and tracked the words mentioned over the years to see how the field has evolved.
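The counting approach described above can be sketched in a few lines. This is a minimal illustration with made-up abstracts; the sample data and function name are assumptions, not the actual MIT Technology Review pipeline:

```python
from collections import Counter

# Toy corpus: (year, abstract) pairs standing in for the 16,625 arXiv
# "artificial intelligence" abstracts; the real dataset is far larger.
abstracts = [
    (1998, "a rule based expert system with logic constraints"),
    (2005, "learning from data improves network performance"),
    (2016, "deep neural network trained on labeled data"),
]

def word_counts_by_year(corpus):
    """Count how often each word appears in abstracts, per year."""
    counts = {}
    for year, text in corpus:
        counts.setdefault(year, Counter()).update(text.split())
    return counts

trends = word_counts_by_year(abstracts)
print(trends[2016]["network"])  # mentions of "network" in 2016 abstracts
```

Plotting such per-year counts for terms like “rule”, “data”, or “network” is enough to make the trends below visible.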

Their analysis revealed three major trends: a shift toward machine learning in the late 1990s and early 2000s, the growing popularity of neural networks beginning in the early 2010s, and the rise of reinforcement learning in the past few years.

But first, a few caveats. First, arXiv’s AI section dates back to 1993, while the term “artificial intelligence” goes back to the 1950s, so the database covers only the latest chapters in the field’s history. Second, the papers added to the database each year represent only a fraction of the work being done in the field at any given moment. Nevertheless, arXiv is an excellent resource for spotting some of the major research trends and for watching the tug of war between different ideological camps.

The machine learning paradigm

The biggest shift the researchers found is the move away from knowledge-based systems in the early 2000s. Such computer systems are built on the idea that all human knowledge can be encoded as rules for an engine. Instead, scientists turned to machine learning, the parent category of algorithms that includes deep learning.

Among the 100 most mentioned words, those associated with knowledge-based systems (“logic”, “constraint”, and “rule”) declined the most, while those associated with machine learning (“data”, “network”, and “performance”) grew more than any others.

The reason for this sea change is simple. In the 1980s, knowledge-based systems gained a popular following thanks to the excitement around ambitious projects that tried to recreate common sense in machines. But as those projects unfolded, researchers ran into a major problem: too many rules had to be encoded for a system to do anything useful. That drove up costs and significantly slowed progress.

Machine learning was the answer to this problem. Instead of requiring people to hand-code hundreds of thousands of rules, this approach programs machines to extract those rules automatically from piles of data. Just like that, the field abandoned knowledge-based systems and turned to improving machine learning.
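The contrast can be shown with a toy example (a hypothetical one-feature spam filter, invented for illustration): a knowledge-based system hard-codes the rule by hand, while a learner recovers an equivalent rule from labeled examples alone.

```python
# Toy task: flag a message as spam from a single feature, the number
# of exclamation marks it contains. Labeled examples: (count, is_spam).
examples = [(0, False), (1, False), (2, True), (5, True), (7, True)]

# Knowledge-based approach: a human writes the rule directly.
def rule_based(count):
    return count >= 2

# Machine learning approach: search for the threshold that makes the
# fewest mistakes on the labeled data.
def learn_threshold(data):
    candidates = sorted({x for x, _ in data})
    return min(
        candidates,
        key=lambda t: sum((x >= t) != label for x, label in data),
    )

threshold = learn_threshold(examples)  # 2, recovered from the data alone
```

With one rule the two approaches are equivalent; the point is that the learned version scales to hundreds of thousands of implicit rules without anyone writing them down.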

The neural network boom

Within the new machine learning paradigm, the shift to deep learning did not happen immediately. Instead, as the analysis of key terms shows, scientists tested a variety of methods besides neural networks, the core machinery of deep learning. Other popular techniques included Bayesian networks, support vector machines, and evolutionary algorithms, all of which take different approaches to finding patterns in data.

Through the 1990s and 2000s there was steady competition among these methods. Then, in 2012, a breakthrough led to another sea change. During the annual ImageNet competition, designed to spur progress in computer vision, a researcher named Geoffrey Hinton, together with his colleagues at the University of Toronto, achieved the best accuracy in image recognition, beating the runner-up by a margin of more than 10 percentage points.

The deep learning technique he used sparked a wave of new research, first within the computer vision community and then beyond. As more and more scientists began using it to achieve impressive results, the popularity of the technique, along with that of neural networks, skyrocketed.

The rise of reinforcement learning

The analysis showed that a few years after the heyday of deep learning, a third and final shift occurred in AI research.

Beyond the various machine learning methods, there are three different types of learning: supervised, unsupervised, and reinforcement. Supervised learning, which involves feeding a machine labeled data, is used most often and has by far the most practical applications today. In the past few years, however, reinforcement learning, which mimics the way animals learn through the carrot and stick of rewards and punishments, has seen a rapid rise in the number of papers mentioning it.
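The reward-and-punishment idea can be sketched with tabular Q-learning on a made-up corridor world. This is a minimal illustration of the principle, not how systems like AlphaGo actually work; all names and parameters here are invented for the example:

```python
import random

# Corridor world: states 0..3. The agent starts at 0; reaching state 3
# earns reward 1 (the carrot), every other move earns nothing.
N_STATES, GOAL = 4, 3
ACTIONS = (-1, +1)  # step left, step right

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(200):                      # episodes
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise exploit the current estimate.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: q[(s, a)])
        s2 = min(max(s + a, 0), GOAL)     # walls clamp the movement
        r = 1.0 if s2 == GOAL else 0.0    # reward arrives only at the goal
        # Q-learning update: nudge toward reward plus discounted future value.
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The learned policy: the best action in each non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
```

No one tells the agent to walk right; the reward signal alone, propagated backward through the value table, produces that behavior.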

The idea is not new, but for many decades it did not really work. “The supervised learning people would make fun of the reinforcement learning people,” says Domingos. But, as with deep learning, a single turning point suddenly put the method at the fore.

That moment came in October 2015, when DeepMind’s AlphaGo, trained with reinforcement learning, defeated the world champion in the ancient game of Go. The effect on the research community was instantaneous.

The next ten years

The MIT Technology Review analysis gives only the most recent snapshot of the competition among ideas that characterizes AI research. But it illustrates the fickleness of the quest to duplicate intelligence. “The important thing to understand is that nobody knows how to solve this problem,” says Domingos.

Many of the methods used over the past 25 years emerged at around the same time, in the 1950s, and have fallen in and out of favor with the challenges and successes of each decade. Neural networks, for example, peaked in the 1960s and briefly in the 1980s, but nearly died out before regaining their popularity thanks to deep learning.

Every decade, in other words, has seen the dominance of a different technique: neural networks in the late 50s and 60s, various symbolic approaches in the 70s, knowledge-based systems in the 80s, Bayesian networks in the 90s, support vector machines in the 2000s, and neural networks again in the 2010s.

The 2020s should be no different, says Domingos, which means the era of deep learning may soon be over. But what comes next, an old technique in new glory or an entirely new paradigm, is the subject of fierce debate in the community.

“If you answer this question,” says Domingos, “I want to patent the answer.”

