Neuromorphic computing aims to process information in a way similar to the human brain. Instead of relying on a conventional von Neumann architecture, a neuromorphic system generally relies on a neural network, where the memory and the processing elements are intimately co-located within the same hardware. Neuromorphic computing takes advantage of computational memories, which can both store and process data via physical laws within the device and/or the circuit. This entry summarizes the history and main concepts of neuromorphic computing, covering both deep neural networks (DNNs), which are adopted for a wide range of artificial intelligence tasks such as driverless cars, and spiking neural networks (SNNs), which aim at a more realistic brain-inspired computation.
The origin of neuromorphic computing can be traced back to 1943, when McCulloch and Pitts proposed a mathematical model of the biological neuron. This is depicted in Figure 1a, where the neuron is conceived as a processing unit that performs (i) a summation of the input signals (x1, x2, x3, …), each multiplied by a suitable synaptic weight (w1, w2, w3, …), and (ii) a non-linear transformation according to an activation function, e.g., a sigmoidal function[1]. A second landmark came in 1957, when Rosenblatt developed the model of a fundamental neural network called the multilayer perceptron (MLP)[2], which is schematically illustrated in Figure 1b. The MLP consists of an input layer, one or more intermediate layers called hidden layers, and an output layer, through which the input signal is forward propagated toward the output. The MLP model constitutes the backbone of the emerging concept of deep neural networks (DNNs). DNNs have recently shown excellent performance in tasks such as pattern classification and speech recognition, thanks to extensive supervised training techniques such as the backpropagation rule[3][4][5]. DNNs are usually implemented in hardware on von Neumann platforms, such as the graphics processing unit (GPU)[6] and the tensor processing unit (TPU)[7], which execute both training and inference. These hardware implementations, however, reveal all the typical limitations of the von Neumann architecture, chiefly a large energy consumption compared with the human brain.
Figure 1. (a) Conceptual illustration of the McCulloch and Pitts artificial neuron architecture, where the weighted sum of the input signals is subject to a non-linear activation function yielding the output signal. (b) Schematic representation of a multilayer perceptron consisting of two hidden layers between the input and the output layer.
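As a concrete illustration of the neuron model and of forward propagation in an MLP, the following minimal Python sketch implements the weighted sum with sigmoidal activation of Figure 1a and the layer-by-layer propagation of Figure 1b. The layer sizes, random weights, and input values are illustrative assumptions, not taken from the cited references.

```python
import numpy as np

def sigmoid(z):
    # Non-linear activation function applied to the weighted sum
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w):
    # McCulloch-Pitts-style neuron (Figure 1a): weighted sum of the
    # inputs followed by the non-linear activation
    return sigmoid(np.dot(w, x))

def mlp_forward(x, weight_matrices):
    # Forward propagation through an MLP (Figure 1b): each layer applies
    # a matrix-vector multiplication followed by the activation function
    a = x
    for W in weight_matrices:
        a = sigmoid(W @ a)
    return a

# Illustrative sizes: 3 inputs, two hidden layers of 4 neurons, 2 outputs
rng = np.random.default_rng(0)
x = np.array([0.5, -1.0, 2.0])
print(neuron(x, rng.standard_normal(3)))      # single neuron output
layers = [rng.standard_normal((4, 3)),        # input  -> hidden 1
          rng.standard_normal((4, 4)),        # hidden 1 -> hidden 2
          rng.standard_normal((2, 4))]        # hidden 2 -> output
print(mlp_forward(x, layers))                 # network output
```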
To significantly improve the energy efficiency of DNNs, matrix-vector multiplication (MVM) in crossbar memory arrays has emerged as a promising approach[8][9]. Memory devices also enable the implementation of learning schemes able to replicate biological synaptic plasticity at the device level. CMOS memories, such as the static random access memory (SRAM)[10][11] and the Flash memory[12], were initially adopted to capture synaptic behaviors in hardware. In the last 10 years, novel material-based memory devices, generically referred to as memristors[13], have shown attractive features for the implementation of neuromorphic hardware, including non-volatile storage, low-power operation, nanoscale size, and analog resistance tunability. In particular, memristive technologies, which include resistive switching random access memory (RRAM), phase change memory (PCM), and other emerging memory concepts based on ferroelectric and ferromagnetic effects, have been shown to achieve synapse and neuron functions, enabling the demonstration of fundamental cognitive primitives such as pattern recognition in neuromorphic networks[14][15][16][17].
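To see how a crossbar array performs MVM physically, the sketch below treats each memory cell as a conductance and each input as a column voltage, so that, by Ohm's law and Kirchhoff's current law, the current collected on each row is the dot product of that row's conductances with the voltage vector, i.e., I = G·V. The array size, conductance values, read voltages, and optional noise term are illustrative assumptions rather than parameters of any specific device technology.

```python
import numpy as np

def crossbar_mvm(G, V, g_noise_sigma=0.0, rng=None):
    """Idealized analog matrix-vector multiplication in a crossbar.

    G : (rows, cols) array of device conductances (siemens)
    V : (cols,) array of column read voltages (volts)
    Returns the row currents I = G @ V (amperes), following Ohm's law and
    Kirchhoff's current law, optionally perturbed by conductance noise to
    mimic device-to-device variability.
    """
    rng = rng or np.random.default_rng()
    if g_noise_sigma > 0.0:
        G = G + rng.normal(0.0, g_noise_sigma, size=G.shape)
    return G @ V

# Illustrative 4x3 crossbar with conductances in the microsiemens range
G = np.array([[10e-6, 50e-6, 20e-6],
              [ 5e-6, 15e-6, 80e-6],
              [30e-6, 30e-6, 30e-6],
              [60e-6,  5e-6, 10e-6]])
V = np.array([0.1, 0.2, 0.05])    # read voltages applied to the columns
print(crossbar_mvm(G, V))          # row currents encode the MVM result
```

The key point of the sketch is that the multiplication and accumulation happen "for free" in the physics of the array, so the data never leave the memory, which is the source of the energy advantage over a von Neumann processor.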
The field of neuromorphic networks includes both the DNN[18] and the spiking neural network (SNN), the latter more directly inspired by the human brain[19]. In contrast to DNNs, the learning ability of SNNs emerges via unsupervised training processes, where synapses are potentiated or depressed by bio-realistic learning rules inspired by the brain. Among these local learning rules, spike-timing-dependent plasticity (STDP) and spike-rate-dependent plasticity (SRDP) have received intense investigation for the hardware implementation of brain-inspired SNNs. In STDP, which was experimentally demonstrated in hippocampal cultures by Bi and Poo in 1998[20], the synaptic weight update depends on the relative timing between the pre-synaptic spike and the post-synaptic spike (Figure 2a). In particular, if the pre-synaptic neuron (PRE) spike precedes the post-synaptic neuron (POST) spike, namely the relative spike delay Δt = tpost − tpre is positive, the interaction between the two spikes causes the synapse to increase its weight, a process known as synaptic potentiation. On the other hand, if the PRE spike follows the POST spike, i.e., Δt is negative, the synapse undergoes a weight decrease, or synaptic depression (Figure 2b). In SRDP, instead, the rate of spikes emitted by externally stimulated neurons dictates the potentiation or depression of the synapse, with high- and low-frequency stimulation leading to synaptic potentiation and depression, respectively[21]. Unlike STDP, which relies on pairs of spikes, SRDP has been attributed to the complex combination of three spikes (triplets) or more[22][23][24][25]. In addition to the ability to learn in an unsupervised way and emulate biological processes, SNNs also offer a significant improvement in energy efficiency thanks to the ability to process data by transmitting short spikes, hence consuming power only when and where a spike occurs[26]. Therefore, CMOS and memristive concepts can offer great advantages in the implementation of both DNNs and SNNs, providing a wide portfolio of functionalities, such as non-volatile weight storage, high scalability, energy-efficient in-memory computing via MVM, and online weight adaptation in response to external stimuli.
Figure 2. (a) Sketch of the spike-timing-dependent plasticity (STDP) learning rule. If the PRE spike arrives just before the POST spike at the synaptic terminal (Δt > 0), the synapse undergoes a potentiation process, resulting in a weight (conductance) increase (top). Otherwise, if the PRE spike arrives just after the POST spike (Δt < 0), the synapse undergoes a depression process, resulting in a weight (conductance) decrease (bottom). (b) Relative change of synaptic weight as a function of the relative time delay between PRE and POST spikes measured in hippocampal synapses by Bi and Poo. Reprinted with permission from[20]. Copyright 1998 Society for Neuroscience.
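The STDP window of Figure 2 is often modeled in simulation with an exponential dependence of the weight change on the spike delay Δt = tpost − tpre. The pair-based sketch below follows this common form; the amplitudes and time constants are chosen purely for illustration and are not fitted to the Bi and Poo data of Figure 2b.

```python
import numpy as np

def stdp_delta_w(delta_t, a_plus=0.01, a_minus=0.012,
                 tau_plus=20e-3, tau_minus=20e-3):
    # Pair-based STDP with delta_t = t_post - t_pre (seconds).
    # delta_t > 0 (PRE before POST) -> potentiation (positive weight change)
    # delta_t < 0 (PRE after POST)  -> depression (negative weight change)
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau_plus)
    elif delta_t < 0:
        return -a_minus * np.exp(delta_t / tau_minus)
    return 0.0

# Illustrative weight updates for a few spike delays
for dt in (-40e-3, -10e-3, 10e-3, 40e-3):
    print(f"dt = {dt*1e3:+.0f} ms -> dw = {stdp_delta_w(dt):+.5f}")
```

Because the update depends only on the timing of the two spikes at the synapse, a rule of this kind is local and can, in principle, be mapped onto the conductance change of a single memristive device.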