Is deep learning an essential element of artificial intelligence?

In the world of artificial intelligence, the role of deep learning has become central. AI paradigms have traditionally drawn inspiration from the functioning of the human brain, yet deep learning appears to have departed from how the brain learns in several respects. Deep learning has undoubtedly made impressive progress, but it has its drawbacks, including high computational complexity and the need for large amounts of data.

In light of these concerns, scientists from Bar-Ilan University in Israel raised an important question: must artificial intelligence rely on deep learning? They presented a new paper, published in the journal Scientific Reports, which continues their previous research on the advantages of tree-like architectures over convolutional networks. The main goal of the new study was to determine whether complex classification tasks can be trained effectively with shallower, brain-inspired neural networks while reducing the computational load. In this article, we present the key findings, which could transform the artificial intelligence industry.

As things stand, solving complex classification tasks successfully requires training deep neural networks consisting of tens or even hundreds of convolutional and fully connected hidden layers. This is very different from the human brain. In deep learning, the first convolutional layer detects localized patterns in the input data, and subsequent layers identify patterns at progressively larger scales, until reliable characteristics of the input classes emerge.

This study shows that, with a fixed ratio between the depths (filter counts) of the first and second convolutional layers, the error of the shallow LeNet architecture, consisting of only five layers, decreases with the number of filters in the first convolutional layer according to a power law. Extrapolating this law suggests that the generalized LeNet architecture can achieve low error rates on the CIFAR-10 dataset comparable to those obtained with deep neural networks.
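The extrapolation argument can be illustrated with a small sketch. The constants `A` and `rho` below are assumptions chosen only to mimic the reported exponent ρ ∼ 0.41; they are not the paper's fitted values.

```python
# Hypothetical illustration of the reported power-law scaling: the test
# error eps decays with the number of first-layer filters d1 as
# eps(d1) = A * d1 ** (-rho).  A=0.8 and rho=0.41 are assumed values,
# not the fit from the paper.

def power_law_error(d1, A=0.8, rho=0.41):
    """Extrapolated test error for d1 filters in the first conv layer."""
    return A * d1 ** (-rho)

if __name__ == "__main__":
    for d1 in (6, 24, 96, 384, 1536):
        print(f"d1={d1:5d}  predicted error={power_law_error(d1):.3f}")
```

Doubling `d1` repeatedly drives the predicted error down along a straight line on a log-log plot, which is the behavior the extrapolation in the study relies on.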

The figure below shows training with the generalized LeNet architecture. The generalized LeNet architecture for the CIFAR-10 database (input size 32 × 32 × 3 pixels) consists of five layers: two convolutional layers with max pooling and three fully connected layers. The first and second convolutional layers contain d1 and d2 filters, with d1/d2 ≃ 6/16. The test error, denoted ϵ, plotted against d1 on a logarithmic scale indicates a power-law dependence with exponent ρ ∼ 0.41. The neuron activation function is ReLU.
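A minimal sketch of how the widths of the two convolutional layers scale together under the fixed d1/d2 ≃ 6/16 ratio described above. The helper name is hypothetical; only the ratio comes from the text.

```python
# Sketch of the generalized LeNet width schedule: the second conv layer's
# filter count d2 is tied to the first layer's d1 by the fixed ratio
# d1/d2 = 6/16 (the classic LeNet-5 values are d1=6, d2=16).

def generalized_lenet_filters(d1):
    """Return (d1, d2) keeping the fixed depth ratio d1/d2 = 6/16."""
    d2 = round(d1 * 16 / 6)
    return d1, d2

for d1 in (6, 12, 24, 48):
    print(generalized_lenet_filters(d1))
```

Widening the network thus means scaling both convolutional layers in lockstep, which is what makes the single parameter d1 a meaningful x-axis for the error plot.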

A similar power-law phenomenon is also observed for the generalized VGG-16 architecture. However, it comes with an increase in the number of operations required to reach a given error level compared with LeNet.

Training with the generalized VGG-16 architecture is shown in the figure below. The generalized VGG-16 architecture consists of 16 layers, in which the number of filters in the n-th set of convolutional layers is d × 2^(n−1) (n ≤ 4) and the square root of the feature-map size is m × 2^−(n−1) (n ≤ 5), where m × m × 3 is the input size (d = 64 in the original VGG-16 architecture). The test error, denoted ϵ, plotted against d on a logarithmic scale for the CIFAR-10 database (m = 32) indicates a power-law dependence with exponent ρ ∼ 0.4. The neuron activation function is ReLU.
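The filter and feature-map schedule just described can be written out explicitly. One assumption below: the fifth block, not covered by the n ≤ 4 filter formula, is taken to reuse the fourth block's width, as in the original VGG-16.

```python
# Sketch of the generalized VGG-16 schedule from the text: the n-th conv
# block has d * 2**(n-1) filters (n <= 4) and a square feature-map side
# of m / 2**(n-1) (n <= 5).  d=64 and m=32 are the original VGG-16 /
# CIFAR-10 values.  Assumption: block 5 keeps block 4's filter count.

def vgg_schedule(d=64, m=32):
    filters = [d * 2 ** (n - 1) for n in range(1, 5)]   # blocks 1..4
    filters.append(filters[-1])                         # block 5 (assumed)
    sides = [m // 2 ** (n - 1) for n in range(1, 6)]    # blocks 1..5
    return filters, sides

print(vgg_schedule())
```

With the defaults this reproduces the familiar VGG-16 widths 64, 128, 256, 512, 512 and feature-map sides 32 down to 2, confirming that the generalized schedule contains the original network as the d = 64 case.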

The power-law phenomenon spans the various generalized LeNet and VGG-16 architectures, indicating universal behavior and suggesting a quantitative measure of the hierarchical complexity of machine learning. In addition, a conservation law along the convolutional layers, equal to the square root of their size multiplied by their depth, asymptotically minimizes error rates. The efficient approach to shallow learning demonstrated in this study calls for further quantitative research on additional databases and architectures, as well as accelerated implementation in future dedicated hardware designs.
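The conserved quantity can be checked directly against the generalized VGG-16 schedule given earlier: the square root of a block's feature-map size is its side m/2^(n−1), its depth is d·2^(n−1), and their product is the constant m·d for every block. The function name is hypothetical.

```python
# Sketch of the conservation-law claim for the generalized VGG-16
# schedule: sqrt(layer size) * depth = (m / 2**(n-1)) * (d * 2**(n-1))
# = m * d, constant across conv blocks.

def conserved_quantity(d, m, n):
    side = m / 2 ** (n - 1)    # sqrt of the square feature-map size
    depth = d * 2 ** (n - 1)   # number of filters in block n
    return side * depth

values = [conserved_quantity(64, 32, n) for n in range(1, 5)]
print(values)  # every entry equals 64 * 32 = 2048
```

The powers of two cancel exactly, which is why halving the spatial side while doubling the depth at each block keeps the quantity invariant.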
