LogNNet is a neural network that uses filters based on the logistic map. LogNNet has a feedforward network structure but possesses the properties of reservoir neural networks. The input weight matrix, generated by the recurrent logistic map, forms the kernels that transform the input space into a higher-dimensional feature space. The most effective recognition of handwritten digits from the MNIST-10 database occurs when the logistic map behaves chaotically, and a correlation between classification accuracy and the value of the Lyapunov exponent was obtained. An advantage of implementing LogNNet on IoT devices is the significant saving in memory: the presented architecture uses an array of weights with a total memory size of 1 to 29 kB and achieves a classification accuracy of 80.3–96.3%. Memory is saved because the processor sequentially recomputes the required weight coefficients during network operation from the analytical equation of the logistic map, instead of storing the full weight matrix. At the same time, LogNNet has a simple algorithm and performance indicators comparable to those of the best resource-efficient algorithms currently available. The proposed neural network can be used in implementations of artificial intelligence on constrained devices with limited memory, which are integral building blocks of ambient intelligence in modern IoT environments. From a research perspective, LogNNet can contribute to understanding the fundamental question of how chaos influences the behavior of reservoir-type neural networks.
In the age of neural networks and the Internet of Things (IoT), the search for new neural network architectures capable of operating on devices with small amounts of memory (tens of kB of RAM) is becoming an urgent agenda [1][2][3]. Constrained devices possess significantly less processing power and memory than a regular smartphone or modern laptop, and usually have no user interface [4]. Constrained devices form the basis for ambient intelligence (AmI) in IoT environments [5] and can be divided into three classes by code and memory size: class 0 (less than 100 kB Flash and less than 1 kB RAM), class 1 (≈100 kB Flash and ≈10 kB RAM) and class 2 (≈250 kB Flash and ≈50 kB RAM) [6]. Smart objects built on constrained devices possess limited system resources, and these limitations prohibit the application of security mechanisms common in the Internet environment: complex cryptographic mechanisms require significant time and energy, and storing a large number of keys for secure data transmission is not possible on constrained devices. The storage issue presents significant challenges for the integration of blockchain [7] and artificial intelligence (AI) [1][8] with the IoT and calls for new approaches and research efforts to resolve this limitation. Intelligent IoT devices should be able to process incoming information without sending it to the cloud. Smart objects equipped with their own AI capabilities spend significantly less time on analyzing incoming data and producing a final decision, creating new possibilities, for example, in the development of AmI in the medical industry [9], predicting the behavior of mechanisms [10], local semantic processing of video data [11] and "smart" services in e-tourism [12]. This approach facilitates the development of the concepts of smart spaces and fog computing, where devices detect each other, for example, using wireless technologies [13], redistribute computational tasks and optimize the distribution of responses [14][15]. Therefore, the integration of AI and the IoT creates new social, economic and technological benefits.
Neural networks form the foundation of AI. The well-known types of neural networks are feedforward neural networks, convolutional neural networks and recurrent neural networks [16]. In addition to neural networks, popular methods for classification and efficient prediction include tree-based algorithms [17] and the k-nearest neighbors algorithm (kNN) [18]. The difficulty of implementing compact and efficient neural network algorithms on constrained devices is the main factor holding back the integration of AI and the IoT. Training a neural network is the process of optimally selecting the weight coefficients of neuron couplings and the filter parameters, which occupy a significant amount of memory. This highlights the importance of developing methods that reduce the memory consumed by a neural network.
The prominent tree-based algorithms include the GBDT algorithms [19] and Bonsai from Microsoft [17], which uses memory efficiently and runs on constrained devices with 2–16 kB of RAM. An algorithm called ProtoNN [20], based on the kNN method, operates on devices with 16 kB of RAM. Effective memory-redistribution algorithms for convolutional neural networks (CNNs) are now available that consume no more than 2 kB of RAM [21] and demonstrate an impressive classification accuracy of ≈99.15% on the MNIST-10 database; however, the complexity of these algorithms leads to a large size of the program itself. Algorithms based on recurrent network architectures, such as Spectral-RNN [22] and FastGRNN [18], achieve ≈98% classification accuracy on MNIST-10 with a model size of ≈6 kB.
An alternative approach to reducing the number of trained weights is based on physical reservoir computing (RC) [23]. RC uses complex physical dynamic systems (coupled oscillators [24][25][26], memristor crossbar arrays [27], optoelectronic feedback loops [28]) or recurrent neural networks (echo state networks (ESNs) [29] and liquid state machines (LSMs) [30]) as reservoirs with rich dynamics and powerful computing capabilities. The couplings in the reservoir are not trained but are specified in a special way. The reservoir translates the input data into a higher-dimensional space (the kernel trick [31]), and the output neural network, after training, can classify the result more accurately, as the sketch below illustrates. The search for simple algorithms for simulating complex reservoir dynamics is an important research task.
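To make the division of labor concrete, the following minimal sketch (not taken from the cited works) uses a static random projection with a tanh nonlinearity as a stand-in for a reservoir — closer to an extreme learning machine than a true recurrent ESN — but it shows the key RC pattern: the projection into the higher-dimensional space is fixed, and only the linear readout is fitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 10-dimensional inputs with labels
# that are not linearly separable in the original input space.
X = rng.normal(size=(200, 10))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

# Fixed "reservoir": an untrained random projection into a
# higher-dimensional space followed by a nonlinearity.
d_res = 300
W_in = rng.normal(size=(10, d_res))   # never trained
H = np.tanh(X @ W_in)                 # reservoir states

# Trained readout: ridge regression on the reservoir states only.
lam = 1e-2
W_out = np.linalg.solve(H.T @ H + lam * np.eye(d_res), H.T @ y)

pred = (H @ W_out > 0.5).astype(float)
print("train accuracy:", (pred == y).mean())
```

Because only `W_out` is learned, training reduces to a single linear solve, which is the property that makes reservoir-style architectures attractive on weak hardware.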
Figure 1. LogNNet architecture.
The network architecture proposed by Andrei Velichko, called LogNNet, is shown in Figure 1. The signal propagates in the same way as in a feedforward network. The weights W1 are not trained but are set in a special way: the first row of the weight array is set by a sine equation depending on two parameters, A and B, and the remaining rows are calculated recurrently with the logistic map, a simple equation with a single parameter r. In this way, a reservoir with fixed weights W1 is created, and training is carried out only for the output weights W2. A hedged sketch of this weight generation is given below.
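The exact sine formula and constants are given in the original paper; the sketch below is a minimal illustration under assumed forms: the first row is taken as W1[0][j] = sin(A·j + B) (a hypothetical stand-in for the paper's sine equation, rescaled into (0, 1)), and each subsequent row follows the logistic map x ← r·x·(1 − x) applied element-wise. It also illustrates the memory-saving trick described above: rows are regenerated on the fly during the forward pass instead of keeping the whole matrix in RAM.

```python
import numpy as np

def lognnet_w1_rows(n_rows, n_cols, r=3.9, A=0.3, B=5.9):
    """Yield rows of the fixed weight matrix W1 one at a time.

    Assumed forms (hypothetical; see the original paper for the exact
    equations): the first row comes from a sine function of parameters
    A and B, each following row from the logistic map x <- r*x*(1-x).
    Yielding rows lazily means the full matrix never sits in memory.
    """
    row = np.sin(A * np.arange(n_cols) + B) * 0.5 + 0.5  # keep in (0, 1)
    yield row
    for _ in range(n_rows - 1):
        row = r * row * (1.0 - row)  # logistic map, element-wise
        yield row

def reservoir_project(x, n_hidden, **params):
    """Forward pass through the reservoir layer without storing W1."""
    h = np.empty(n_hidden)
    for i, w in enumerate(lognnet_w1_rows(n_hidden, x.size, **params)):
        h[i] = w @ x  # each weight row is recomputed, used, then discarded
    return h

x = np.random.rand(784)            # e.g. a flattened 28x28 MNIST image
h = reservoir_project(x, n_hidden=25, r=3.9)
print(h.shape)                     # (25,)
```

Here r = 3.9 lies in the chaotic regime of the logistic map (which keeps values in [0, 1] for r ≤ 4), consistent with the observation above that chaotic behavior of the map gives the best recognition; the trained output weights W2 would then be fitted on the reservoir output h.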
The entry is from https://doi.org/10.3390/electronics9091432