Neural network models have recently become more accurate and sophisticated, which drives up the energy they consume when trained and run on conventional computers. Developers around the world are therefore working on alternative, “brain-like” hardware that can handle the heavy computing loads of artificial intelligence systems more efficiently.
Scientists from the Technion – Israel Institute of Technology and Peng Cheng Laboratory have recently created a new neuromorphic computing system that supports generative and graph-based deep learning and can run deep belief networks (DBNs).
The work was presented in the journal Nature Electronics. The system is based on silicon memristors, energy-efficient devices for storing and processing information. We have previously covered the use of memristors in artificial intelligence. The scientific community has been working on neuromorphic computing for a long time, and memristors look very promising for it.
Memristors are electronic components that can switch or regulate the flow of current in a circuit while also remembering how much charge has passed through them. They are well suited to running artificial intelligence because their behavior and structure resemble synapses in the human brain more closely than conventional memory blocks and processors do.
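To make that analogy concrete, here is a minimal Python sketch of the classic linear ion-drift memristor model. This is not the device used in the paper, and all parameter values are illustrative assumptions; it simply shows how the current through such a device depends on the history of charge that has flowed through it.

```python
# Minimal sketch of the HP-style linear ion-drift memristor model.
# All parameter values are illustrative assumptions, not device data.
import numpy as np

R_ON, R_OFF = 100.0, 16e3   # bounding resistances (ohms)
D = 10e-9                   # device thickness (m)
MU = 1e-14                  # dopant mobility (m^2 / (V*s))

def simulate(voltage, dt=1e-4, w0=0.1 * D):
    """Euler integration of the state equation dw/dt = MU * R_ON / D * i(t)."""
    w = w0
    current = np.empty_like(voltage)
    for k, v in enumerate(voltage):
        # Memristance interpolates between R_ON and R_OFF with the state w.
        m = R_ON * (w / D) + R_OFF * (1 - w / D)
        i = v / m
        current[k] = i
        # The state (and hence the resistance) accumulates the charge history.
        w = np.clip(w + MU * R_ON / D * i * dt, 0.0, D)
    return current

t = np.arange(0, 0.1, 1e-4)
v = np.sin(2 * np.pi * 50 * t)   # sinusoidal drive voltage
i = simulate(v)
```

Plotting the resulting current against the drive voltage traces the pinched hysteresis loop that distinguishes a memristor from an ordinary resistor: the same voltage produces a history-dependent current.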
For now, however, memristors are used mainly for analog processing and far less in AI designs. Because they remain quite expensive to deploy, memristive technology has not yet become common in the neuromorphic field.
Professor Kvatinsky and his colleagues from the Technion and Peng Cheng Laboratory decided to work around this limitation. Since memristors are not widely available, the scientists instead used commercially available flash technology developed by Tower Semiconductor, engineering the devices to behave like memristors. They tested their system specifically with a deep belief network (DBN), an old theoretical concept in machine learning. It was chosen because a DBN requires no data conversion: its inputs and outputs are binary and inherently digital.
The scientists' idea was to use binary neurons, whose inputs and outputs take only the values 0 or 1. In this study they examined two-terminal memristive synaptic devices with a floating gate, fabricated in a standard CMOS production process. The result was silicon memristive synapses, which the team calls simply silicon synapses. Because the neural states are fully binarized, the design of the neural circuits is simplified: expensive analog-to-digital and digital-to-analog converters (ADCs and DACs) are no longer required.
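For readers who want to see what “fully binary” means in practice, below is a minimal software sketch of a restricted Boltzmann machine with stochastic 0/1 neurons, trained with one-step contrastive divergence (CD-1). This illustrates the general technique, not the authors' hardware implementation; the class name and hyperparameters are assumptions.

```python
# Sketch of a restricted Boltzmann machine (RBM) with fully binary neurons.
# Illustrative only; not the authors' hardware implementation.
import numpy as np

rng = np.random.default_rng(0)

class BinaryRBM:
    """RBM whose visible and hidden units take only the values 0 or 1."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.1, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    @staticmethod
    def _sample(p):
        # Stochastic binary activation: a unit fires with probability p.
        return (rng.random(p.shape) < p).astype(float)

    def sample_h(self, v):
        p = 1 / (1 + np.exp(-(v @ self.W + self.b_h)))
        return p, self._sample(p)

    def sample_v(self, h):
        p = 1 / (1 + np.exp(-(h @ self.W.T + self.b_v)))
        return p, self._sample(p)

    def cd1(self, v0):
        """One step of contrastive divergence (CD-1) on a batch of binary data."""
        ph0, h0 = self.sample_h(v0)
        _, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
```

Because every signal passed between units is 0 or 1, nothing in this loop ever needs an analog value, which is the software counterpart of dropping the ADCs and DACs from the circuit.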
Silicon synapses offer many advantages: analog conductance tuning, high endurance, long retention times, predictable cycle-to-cycle degradation, and moderate device-to-device variability.
Kvatinsky and his colleagues built a memristive deep belief network. It consists of three 19×8 memristive restricted Boltzmann machines, implemented with two 12×8 memristor arrays.
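As a software analogy for stacking such machines into a deep belief network, the sketch below greedily trains a stack of the BinaryRBM units defined above, each layer learning from the binary hidden samples of the one below it. The layer sizes are illustrative assumptions, not the paper's exact configuration; in the hardware version each weight matrix is mapped onto physical memristor arrays instead.

```python
# Sketch of greedy layer-wise DBN training, reusing the BinaryRBM class
# from the previous sketch. Layer sizes are illustrative assumptions.
import numpy as np

def train_dbn(data, layer_sizes, epochs=10):
    """Stack RBMs; each is trained on binary hidden samples of the one below."""
    rbms, x = [], data
    for n_v, n_h in zip(layer_sizes[:-1], layer_sizes[1:]):
        rbm = BinaryRBM(n_v, n_h)
        for _ in range(epochs):
            rbm.cd1(x)
        _, x = rbm.sample_h(x)   # propagate binary samples to the next layer
        rbms.append(rbm)
    return rbms

# Toy usage: a three-RBM stack over 19-dimensional binary inputs
# (echoing the 19-wide machines; the hidden sizes are assumptions).
toy_data = (np.random.default_rng(1).random((256, 19)) < 0.5).astype(float)
dbn = train_dbn(toy_data, [19, 8, 8, 8])
```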
The system was tested on a modified MNIST dataset, where the network built on Y-Flash-based memristors reached a recognition accuracy of 97.05%.
In the future, the developers plan to scale up this architecture and to examine additional memristive technologies more broadly.
The architecture the scientists presented offers a new, cost-effective way to run restricted Boltzmann machines and other DBNs. In the future, it may serve as the basis for similar neuromorphic systems and help further improve the energy efficiency of AI systems.
The MATLAB code for memristive deep learning based on a bipolar floating-gate device (Y-Flash) is available on GitHub.