As neural network models have become more accurate and sophisticated, the energy they consume during training and inference on conventional computers has grown. Developers around the world are therefore working on alternative, brain-inspired hardware that can better handle the heavy computational loads of artificial intelligence systems.
Researchers at the Technion – Israel Institute of Technology and Peng Cheng Laboratory recently built a neuromorphic computing system that supports generative and graph-based deep learning models, including deep belief neural networks (DBNs).
The research was published in the journal Nature Electronics. The system is based on silicon memristors, energy-efficient devices for storing and processing information. We have previously covered the use of memristors in artificial intelligence: the scientific community has been working on neuromorphic computing for quite some time, and memristors look very promising for it.
A memristor is an electronic component that can switch or regulate the flow of current in a circuit and that remembers how much charge has flowed through it. In function and structure, memristors resemble the synapses of the human brain more than conventional memory blocks and processors do, which makes them well suited to running artificial intelligence models.
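To make the analogy concrete, here is a minimal, purely illustrative sketch in Python of a memristor-like synapse: a bounded conductance state that programming pulses push up or down, and that modulates how much current flows during a read. The constants and class name are hypothetical and are not the device model from the paper.

```python
import numpy as np

class MemristiveSynapse:
    """Toy model of a memristor-like synapse: a bounded conductance state
    that is nudged up or down by programming pulses, loosely analogous to
    potentiation/depression of a biological synapse.
    All constants here are illustrative, not taken from the paper."""

    def __init__(self, g_min=1e-6, g_max=1e-4, g_init=None):
        self.g_min, self.g_max = g_min, g_max            # conductance bounds (siemens)
        self.g = g_init if g_init is not None else g_min  # current conductance state

    def pulse(self, amplitude):
        # Positive pulses raise conductance, negative pulses lower it;
        # the update shrinks near the bounds (simple saturating nonlinearity).
        if amplitude > 0:
            self.g += amplitude * (self.g_max - self.g)
        else:
            self.g += amplitude * (self.g - self.g_min)
        self.g = float(np.clip(self.g, self.g_min, self.g_max))

    def read_current(self, v_read=0.1):
        # Ohmic read: I = G * V, so the stored state modulates current flow.
        return self.g * v_read
```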
However, memristors are still used mainly in analog computing and far less often in AI designs. The cost of memristor devices also remains high, so the technology is not yet widespread in the neuromorphic field.
Professor Kvatinsky and colleagues from the Technion and Peng Cheng Laboratory set out to work around this limitation. Because memristors are not widely available, the researchers instead used commercial flash technology developed by Tower Semiconductor and engineered it to behave like a memristor. They also deliberately tested the system on a deep belief network, an older theoretical concept in machine learning, reasoning that DBNs do not require data conversion: their inputs and outputs are binary and digital in nature.
The scientists' idea was to use binary input and output neurons, taking only the values 0 or 1. The synapse itself is a memristive device with two floating-gate terminals, fabricated in a standard CMOS process; the researchers call these artificial synapses "silicon synapses". Because the neural states are fully binarized, the neural circuit design is simplified and expensive analog-to-digital and digital-to-analog converters (ADCs and DACs) are no longer needed.
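As a rough illustration of what fully binarized neural states mean in practice, the sketch below (with hypothetical names and sizes, not the authors' code) samples a layer of binary stochastic neurons as in a restricted Boltzmann machine: every input and output is a 0 or a 1, so no DAC/ADC stage is needed between the neurons and the synaptic array.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hidden(v, W, b_h):
    """One visible-to-hidden pass of a restricted Boltzmann machine with
    binary stochastic neurons: all states are 0 or 1, so the interface to
    the synaptic array needs no converters. Shapes here are illustrative."""
    p_h = 1.0 / (1.0 + np.exp(-(v @ W + b_h)))              # firing probability per neuron
    return (rng.random(p_h.shape) < p_h).astype(np.uint8)   # Bernoulli sample -> {0, 1}

# Example: a binarized 12-element input driving 8 hidden neurons.
v = rng.integers(0, 2, size=12)            # binary visible vector
W = rng.normal(scale=0.1, size=(12, 8))    # synaptic weights (conductances in hardware)
b_h = np.zeros(8)
h = sample_hidden(v, W, b_h)
print(h)                                   # e.g. [0 1 0 0 1 1 0 0]
```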
Silicon synapses offer many advantages, including analog conductance tuning, high endurance, long retention times, predictable cyclic degradation and moderate device-to-device variation.
Kvatinsky and his colleagues then built a deep belief network consisting of three 19×8 memristive restricted Boltzmann machines, implemented with two 12×8 memristor arrays.
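For readers unfamiliar with the underlying model, the following toy sketch shows how a deep belief network can be assembled from stacked restricted Boltzmann machines trained greedily with one-step contrastive divergence (CD-1). The layer widths, learning rate and data are placeholders; this is not the hardware mapping or the training procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Bernoulli sampling: turn probabilities into binary {0, 1} states.
    return (rng.random(p.shape) < p).astype(float)

def cd1_step(v0, W, b_v, b_h, lr=0.05):
    """One contrastive-divergence (CD-1) update for a single binary RBM."""
    h0 = sample(sigmoid(v0 @ W + b_h))        # positive phase: sample hidden from data
    v1 = sample(sigmoid(h0 @ W.T + b_v))      # reconstruction of the visible layer
    h1 = sigmoid(v1 @ W + b_h)                # negative phase (use probabilities)
    W   += lr * (np.outer(v0, h0) - np.outer(v1, h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (h0 - h1)

# A toy DBN built by stacking three RBMs and training them layer by layer;
# the widths below are placeholders, not the dimensions of the hardware arrays.
layer_sizes = [12, 8, 8, 8]
rbms = [{"W": rng.normal(scale=0.1, size=(n_v, n_h)),
         "b_v": np.zeros(n_v), "b_h": np.zeros(n_h)}
        for n_v, n_h in zip(layer_sizes[:-1], layer_sizes[1:])]

data = rng.integers(0, 2, size=(100, layer_sizes[0])).astype(float)  # stand-in binary data
for rbm in rbms:                              # greedy layer-wise training
    for _ in range(10):
        for v in data:
            cd1_step(v, rbm["W"], rbm["b_v"], rbm["b_h"])
    # Propagate binary samples upward to train the next RBM on hidden activities.
    data = sample(sigmoid(data @ rbm["W"] + rbm["b_h"]))
```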
The system was tested on a modified version of the MNIST dataset, where the network built from Y-Flash-based memristors reached a recognition accuracy of 97.05%.
In the future, the developers plan to scale up this architecture, apply it more broadly, and explore additional memristive technologies.
The architecture presented by the scientists offers a viable new way to run restricted Boltzmann machines and other DBNs. In the future it could serve as a basis for similar neuromorphic systems and help further improve the energy efficiency of AI systems.
MATLAB code for a deep learning memristor network based on bipolar floating-gate memristors (Y-Flash devices) is available on GitHub.