Scientists from IBM Research Zurich and ETH Zurich have recently developed and presented a neuro-vector-symbolic architecture (NVSA). This architecture synergistically combines two powerful mechanisms, deep neural networks (DNNs) and vector-symbolic architectures (VSAs), using the former as a visual perception frontend and the latter as a probabilistic reasoning backend. Their architecture, presented in Nature Machine Intelligence, may overcome the limitations of both approaches, solving progressive matrices and other reasoning tasks more effectively.
Currently, neither deep neural networks nor symbolic artificial intelligence (AI) exhibits the level of intelligence observed in humans. A key reason is that neural networks struggle to carve a joint distributed representation of data into separate objects, an issue known as the binding problem. Symbolic AI, on the other hand, suffers from an explosion of rule combinations. These two problems are central to neuro-symbolic AI, which aims to combine the best of both paradigms.
The neuro-vector-symbolic architecture (NVSA) was specifically designed to address these two problems by using powerful operators on high-dimensional distributed representations, which serve as a common language between neural networks and symbolic AI. NVSA combines deep neural networks, known for their proficiency in perceptual tasks, with the VSA machinery.
VSA is a computing model that uses high-dimensional distributed vectors and their algebraic properties to perform symbolic computations. In VSA, all representations, from atomic to compositional structures, are high-dimensional holographic vectors of the same fixed dimensionality.
VSA representations can be composed, decomposed, probed, and transformed in various ways using a set of well-defined operations, including binding, unbinding, bundling (superposition), permutation, inverse permutation, and associative memory. These compositional and transparent features make VSA well suited to analogical reasoning, but VSA has no perception module for processing raw sensory inputs. It requires a perception system, such as a symbolic parser, to supply the symbolic representations that support reasoning.
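The operations above can be sketched with bipolar hypervectors. This is a minimal, generic VSA illustration, not code from the NVSA paper or its library; the role and filler names are hypothetical. Binding is elementwise multiplication (its own inverse for ±1 vectors), and bundling is an elementwise majority sign:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def rand_vec():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding via elementwise multiplication (self-inverse for bipolar vectors)."""
    return a * b

def bundle(*vs):
    """Bundling (superposition) via elementwise majority sign."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot-product similarity; near zero for unrelated vectors."""
    return float(a @ b) / D

# Hypothetical role and filler vectors (illustration only)
color_role, shape_role = rand_vec(), rand_vec()
red, circle = rand_vec(), rand_vec()

# An object "red circle" as a superposition of role-filler bindings
obj = bundle(bind(color_role, red), bind(shape_role, circle))

# Unbinding with a role recovers a noisy version of its filler
recovered = bind(obj, shape_role)
print(sim(recovered, circle))  # clearly positive
print(sim(recovered, red))     # near zero
```

Because random high-dimensional vectors are quasi-orthogonal, the query picks out the matching filler even though everything is stored in a single fixed-size vector.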
When developing NVSA, the researchers focused on visual abstract reasoning problems, particularly a widely used class of IQ tests called Raven's progressive matrices.
Raven's progressive matrices are tests designed to assess the level of intellectual development and the capacity for abstract thinking. They measure the ability to carry out systematic, planned, and methodical intellectual work, as well as general logical reasoning. The tests consist of a series of items presented in sets of one or more elements. To solve Raven's progressive matrices, respondents must identify the missing element of a given set from several candidate options. This requires advanced reasoning, such as the ability to detect abstract relations between objects, which may concern their shape, size, color, or other attributes.
In initial evaluations, NVSA proved highly effective at solving Raven's progressive matrices. Compared with state-of-the-art deep neural networks and neuro-symbolic approaches, it set a new record average accuracy of 87.7% on the RAVEN dataset. NVSA also achieved the highest accuracy, 88.1%, on the I-RAVEN dataset, while most deep learning approaches suffered significant accuracy drops there, averaging below 50%. NVSA also supports real-time computation on CPUs, running 244 times faster than functionally equivalent symbolic logical reasoning.
To solve Raven's matrices with a symbolic approach, a method of probabilistic abduction is used. It involves searching for a solution in a space defined by prior knowledge about the test. This prior knowledge is represented in symbolic form by describing all possible rule realizations that could govern the Raven tests. In this approach, finding a solution means enumerating all valid combinations, computing the probabilities of the rules, and summing them. These computations are intensive and become a search bottleneck, since the number of combinations is too large to enumerate exhaustively.
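The combinatorial nature of this search can be sketched with a toy example. The attribute names, rule types, and probabilities below are hypothetical illustrations, not values from the paper; the point is only that the symbolic route must score every joint rule assignment, and the count grows exponentially with the number of attributes:

```python
import itertools
from math import prod

# Hypothetical toy setup: each panel attribute is governed by one of four rule types
attributes = ["shape", "size", "color"]
rule_types = ["constant", "progression", "arithmetic", "distribute_three"]

# Hypothetical per-attribute rule probabilities, e.g. from a perception frontend
probs = {
    "shape": [0.70, 0.10, 0.10, 0.10],
    "size":  [0.20, 0.50, 0.20, 0.10],
    "color": [0.10, 0.10, 0.70, 0.10],
}

# Exhaustive abduction: enumerate and score every joint rule assignment
combos = list(itertools.product(range(len(rule_types)), repeat=len(attributes)))

def score(combo):
    """Probability of one joint rule assignment (attributes assumed independent)."""
    return prod(probs[a][r] for a, r in zip(attributes, combo))

best = max(combos, key=score)
print(len(combos))                    # 4**3 = 64 assignments even in this tiny toy
print([rule_types[r] for r in best])  # most probable joint assignment
```

With realistic numbers of attributes, values, and rule realizations, this product space explodes, which is exactly the bottleneck described above.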
NVSA does not face this problem because it can perform such large-scale probabilistic computations in a single vector operation. This allows it to solve tasks such as Raven's progressive matrices faster and more accurately than other AI approaches based solely on deep neural networks or VSA. It is the first demonstration of how probabilistic reasoning can be performed efficiently using distributed representations and VSA operators.
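The underlying trick can be illustrated in miniature. In the sketch below (a generic illustration under assumed names, not NVSA's actual implementation), a probability distribution over symbolic values is packed into a single weighted superposition; a sum over all matching value pairs then collapses into one dot product, because cross terms between quasi-orthogonal codebook vectors nearly cancel:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # dimensionality of the distributed vectors
K = 8       # hypothetical number of values an attribute can take

# Codebook of quasi-orthogonal bipolar vectors, one per symbolic value
codebook = rng.choice([-1.0, 1.0], size=(K, D))

def encode(pmf):
    """Pack a probability distribution over symbols into one vector."""
    return pmf @ codebook

p = rng.dirichlet(np.ones(K))  # e.g. belief over values in one panel
q = rng.dirichlet(np.ones(K))  # belief over values in another panel

# Symbolic route: explicit loop over all matching value pairs
explicit = sum(p[i] * q[i] for i in range(K))

# Vector route: a single dot product between the two superpositions
vectorized = float(encode(p) @ encode(q)) / D

print(explicit, vectorized)  # nearly equal
```

One fixed-size dot product replaces the explicit loop, and the same holds when each superposition encodes many combinations at once, which is what makes the probabilistic search tractable.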
NVSA is an important step towards integrating different AI paradigms into a unified framework for solving tasks that involve both perception and higher-level reasoning. The architecture has shown great promise in solving complex logical problems efficiently and quickly. In the future, it could be tested further and applied to various other problems, potentially inspiring researchers to develop similar approaches.
A library implementing NVSA's functionality is available on GitHub.
A complete example of solving a Raven's matrix can be found here.