New study combines recurrent neural networks (RNNs) with the concept of annealing to solve real-world optimization problems

Optimization problems involve finding the best solution among many candidates, a task that arises both in everyday life and in most areas of scientific research. Many of these problems, however, are too complex to be solved with simple computational methods, or would take a prohibitive amount of time.

Because simple algorithms are ineffective at solving these problems, researchers around the world have worked on developing more effective strategies that can solve them in realistic time frames. Artificial neural networks (ANNs) are at the heart of some of the most promising techniques explored so far.

The new study, conducted by the Vector Institute, the University of Waterloo, and the Perimeter Institute for Theoretical Physics in Canada, presents variational neural annealing, a new optimization method that combines recurrent neural networks (RNNs) with the concept of annealing. The technique uses a parameterized model to generalize over the distribution of feasible solutions to a given problem. Its goal is to solve real-world optimization problems with an innovative algorithm that borrows RNNs from statistical physics and natural language processing (NLP).
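
To make the idea of a parameterized model over solutions concrete, here is a minimal PyTorch sketch of an autoregressive RNN over binary variables: it builds a candidate solution one variable at a time, so each full configuration comes with an exact log-probability. The class name, GRU cell, and sizes are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class AutoregressiveRNN(nn.Module):
    """Toy autoregressive model over n binary variables.

    The joint distribution factorizes as
    p(s_1, ..., s_n) = p(s_1) * p(s_2 | s_1) * ...,
    so the model yields exact samples together with their exact
    log-probabilities, which is what makes RNNs usable as a
    parameterized distribution over candidate solutions.
    """

    def __init__(self, n_spins, hidden_size=32):
        super().__init__()
        self.n_spins = n_spins
        self.cell = nn.GRUCell(input_size=1, hidden_size=hidden_size)
        self.head = nn.Linear(hidden_size, 1)  # logit of p(s_i = 1 | s_<i)

    def sample(self, batch_size):
        """Draw `batch_size` configurations and their log-probabilities."""
        h = torch.zeros(batch_size, self.cell.hidden_size)
        prev = torch.zeros(batch_size, 1)  # placeholder input for the first spin
        spins, log_p = [], torch.zeros(batch_size)
        for _ in range(self.n_spins):
            h = self.cell(prev, h)
            p_one = torch.sigmoid(self.head(h)).squeeze(1)
            s_i = torch.bernoulli(p_one)  # sample spin i given the history
            log_p = log_p + torch.log(torch.where(s_i > 0, p_one, 1 - p_one))
            spins.append(s_i)
            prev = s_i.unsqueeze(1)
        return torch.stack(spins, dim=1), log_p
```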

The proposed framework is based on annealing, inspired by metallurgical annealing, in which a material is heated and then slowly cooled to bring it to a softer, more workable, and more stable low-energy state. Simulated annealing, modeled on this process, aims to find numerical solutions to optimization problems.
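
For reference, here is a minimal sketch of classic simulated annealing in Python. The geometric cooling schedule and the toy usage below are illustrative assumptions, not details from the study.

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t_start=10.0, t_end=0.01, steps=10_000):
    """Minimize `energy` by proposing random moves at a falling temperature."""
    x, e = x0, energy(x0)
    for step in range(steps):
        # Cool geometrically from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        candidate = neighbor(x)
        delta = energy(candidate) - e
        # Always accept improvements; accept uphill moves with
        # Boltzmann probability exp(-delta / t), which shrinks as t falls.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, e = candidate, e + delta
    return x, e

# Toy usage: walk down to the minimum of a one-dimensional landscape.
best, best_e = simulated_annealing(
    energy=lambda x: (x - 3) ** 2,
    neighbor=lambda x: x + random.choice([-1, 1]),
    x0=50,
)
```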

The main strength of this optimization method is that it combines the processing power of ANNs with the advantages of simulated annealing techniques. The team used RNN algorithms, which have shown particular promise in NLP applications. While these algorithms are usually used in NLP research to interpret human language, the scientists repurposed them to solve optimization problems.

Compared with more traditional numerical implementations, their RNN-based method produced better solutions, increasing the efficiency of both classical and quantum annealing procedures. Thanks to autoregressive networks, the scientists were able to encode the annealing paradigm directly. Their strategy takes the solution of optimization problems to a new level by leveraging the infrastructure used to train modern neural networks, such as TensorFlow or PyTorch, accelerated by GPUs and TPUs.
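
Putting the pieces together, the loop below sketches how an annealing schedule can be imposed on the autoregressive model from the earlier sketch: at each step the network is trained to minimize a variational free energy, E + T·log p, whose temperature T is slowly lowered toward zero so the model concentrates on low-energy solutions. The Ising-chain energy, linear schedule, and hyperparameters are all illustrative assumptions; the objective and details in the paper differ.

```python
import torch

def ising_energy(spins):
    # Hypothetical objective: a 1-D antiferromagnetic Ising chain,
    # E(s) = sum_i s_i * s_{i+1} with s_i in {-1, +1}.
    s = 2.0 * spins - 1.0
    return (s[:, :-1] * s[:, 1:]).sum(dim=1)

model = AutoregressiveRNN(n_spins=20)  # the sketch defined earlier
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

t_start, n_steps = 2.0, 2000
for step in range(n_steps):
    temperature = t_start * (1.0 - step / n_steps)  # linear cooling toward zero
    spins, log_p = model.sample(batch_size=256)
    # Per-sample variational free energy: F = E + T * log p(s).
    free_energy = ising_energy(spins) + temperature * log_p
    # REINFORCE-style gradient estimator with a mean baseline.
    loss = ((free_energy - free_energy.mean()).detach() * log_p).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```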

The team ran several tests based on numerical simulations to compare the method's performance with that of traditional annealing methods. On many paradigmatic optimization problems, the proposed approach outperformed all the other techniques.

In the future, this algorithm could be applied to a wide range of real-world optimization problems, helping experts across many fields solve difficult problems.

The scientists would like to further assess the algorithm's performance on more realistic problems, and to compare it against existing state-of-the-art optimization techniques. They also intend to improve the technique by replacing some of its components or adding new ones.

You can read the full article here.

The code is also available on GitHub:
Variational neural annealing
Simulated classical and quantum annealing
