PANN: A New Artificial Intelligence Technology. Tutorial. Boris Zlotin

      Fig. 3. Single-neuron multi-level PANN network

      2.2. PROGRESS NEURON TRAINING

      Training a PANN network is much easier than training any classical network.

      The difficulty of training classical neural networks stems from the fact that, when a network is trained on several different images, the images affect one another’s synaptic weights and distort each other’s training. The weights must therefore be selected so that a single set fits all images simultaneously. This is done with the gradient descent method, which requires many iterative calculations.

      A fundamentally different approach was developed for training the PANN network: «one neuron, one image», in which each neuron is trained on its own image. There is no mutual influence between neurons, so training is fast and accurate.

      Training the Progress neuron on a specific image boils down to this: for each input, the distributor determines the signal level (in the simplest case, its amplitude or RGB value) and closes the switch corresponding to the weight range into which this value falls.

      Fig. 4. Trained single-neuron multi-level PANN network
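      To make the procedure concrete, below is a minimal Python sketch of «one neuron, one image» training. It assumes inputs already quantized to integer levels 0…K−1; the class and method names are illustrative, not taken from any PANN implementation:

```python
class ProgressNeuron:
    """One Progress neuron trained on exactly one image.

    Weights form an N x K binary table: weights[i][level] == 1 means
    the switch for that level on input i is closed.
    """

    def __init__(self, n_inputs: int, k_levels: int):
        self.n = n_inputs
        self.k = k_levels
        self.weights = [[0] * k_levels for _ in range(n_inputs)]

    def train(self, image: list[int]) -> None:
        """The distributor routes each input value to its level and
        closes that one switch. No arithmetic is involved."""
        assert len(image) == self.n
        for i, value in enumerate(image):
            self.weights[i][value] = 1  # close the switch for this level


neuron = ProgressNeuron(n_inputs=4, k_levels=10)
neuron.train([1, 9, 3, 6])  # one pass, no iterations, no gradients
```

      Training here is a single pass that only closes switches, which is exactly why it needs no computational operations.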

      The above training procedure of the Progress neuron gives rise to several remarkable properties of the PANN network:

      1. Training does not require computational operations and is very fast.

      2. One neuron’s set of synaptic weights is independent of those of other neurons. Therefore, the network’s neurons can be trained individually or in groups, and the trained neurons or groups can then be combined into a network.

      3. The network can be retrained: neurons can be changed, added, or removed at any time without affecting the neurons untouched by these changes (see the sketch after this list).

      4. A trained neuron can easily be visualized using simple color codes that link the closed weight levels to pixel brightness or color.
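      Continuing the ProgressNeuron sketch above, the modularity described in properties 2 and 3 can be illustrated directly: neurons are trained in isolation and merely collected into, or removed from, a network:

```python
# Each neuron is trained independently on its own image...
images = [[1, 9, 3, 6], [0, 2, 8, 5]]
network = []
for img in images:
    neuron = ProgressNeuron(n_inputs=4, k_levels=10)
    neuron.train(img)
    network.append(neuron)

# ...so retraining is just replacing, adding, or dropping neurons;
# the remaining neurons are untouched.
network.pop(0)
```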

      2.3. THE CURIOUS PARADOX OF PANN

      At first glance, the PANN network looks structurally more complex than classical Artificial Neural Networks. But in reality, PANN is simpler.

      The PANN network is simpler because:

      1. The Rosenblatt neuron has an activation function: its weighted sum is passed through a nonlinear logistic (sigmoid) function, an S-curve, etc. This procedure is indispensable in classical networks, but it complicates the Rosenblatt neuron and makes it nonlinear, which leads to substantial training problems. The Progress neuron, in contrast, is strictly linear and causes no such issues.

      2. The Progress neuron has an additional element called a distributor: a simple logic device, a demultiplexer, which switches the signal from one input to one of several outputs (see the sketch after this list). In the Rosenblatt neuron, the weights are multi-bit memory cells that must store numbers over a wide range, while in PANN the simplest memory cells (single-bit flip-flops), which store only 0 and 1, suffice.

      3. Unlike classical networks, PANN does not require huge memory or processing power, so inexpensive computers can be used and far less electricity is consumed.

      4. PANN allows you to solve complex problems on a single-layer network.

      5. PANN requires tens or even hundreds of times fewer images in the training set.
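      As a sketch of point 2, the distributor reduces in software to quantizing an input signal into one of K level indices. The range boundaries below (8-bit brightness split into K equal ranges) are an assumption for illustration:

```python
def distribute(signal: float, k_levels: int, max_signal: float = 255.0) -> int:
    """Demultiplexer: route a signal to the single weight level whose
    range contains its value (e.g., an 8-bit pixel brightness)."""
    level = int(signal / (max_signal + 1.0) * k_levels)
    return min(level, k_levels - 1)  # index of the switch to close
```

      Closing the selected switch then stores a single bit, which is why one-bit flip-flops suffice as weights.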

      Thus, full-fledged products can be created on the basis of PANN using inexpensive, energy-efficient computer equipment.

      Fig. 5. Long and expensive training vs. fast and cheap

      2.4. THE MATHEMATICAL BASIS OF RECOGNITION ON THE PROGRESS NEURON

      The linearity of the Progress neuron means that a network built from these neurons is also linear. This ensures the network’s complete transparency and the simplicity of both the theory describing it and the mathematics applied.

      In 1965, Lotfi Zadeh introduced the concept of «fuzzy sets» and the idea of «fuzzy logic». To some extent, this served as a guide for our work in developing PANN’s mathematical basis and logic. Mathematical operations in PANN aim to compare inexactly matching images and to estimate the degree of their divergence in the form of similarity coefficients.

      2.4.1. Definitions

      In 2009, an exciting discovery was announced, called the «Marilyn Monroe neuron» or, in other sources, the «grandmother neuron»: in the human brain, knowledge of specific topics is «divided» among individual neurons and groups of neurons linked by associative connections, so that excitation can be transmitted from one neuron to another. This knowledge, together with the adopted «one neuron, one image» paradigm, made it possible to build the PANN recognition system.

      Let’s introduce the «neuron-image» concept – a neuron trained on a specific image. In PANN, each neuron-image is a realized functional dependency (function) Y = f (X), wherein:

      X is a numerical array (vector);

      A is the given array – the image on which the neuron was trained;

      N is the dimension of vector X, the number of digits in this vector;

      and the function has the following properties:

      for X = A, f (X) = N;

      for X ≠ A, f (X) < N.
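      In the simplest reading of these conditions, f counts the positions at which X coincides with the stored image A: an exact match yields N, and every differing digit lowers the count by one. A minimal illustration (the function and variable names are ours):

```python
def f(x: list[int], a: list[int]) -> int:
    """Neuron-image response: the number of positions where X matches A."""
    return sum(1 for xi, ai in zip(x, a) if xi == ai)

a = [1, 9, 3, 6]           # A: the image the neuron was trained on
print(f(a, a))             # 4 == N: exact match
print(f([1, 9, 3, 0], a))  # 3 <  N: one digit differs
```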

      Arrays are written in a format called the Binary Comparison Format (BCF): a rectangular binary digital matrix in which:

      • The number of columns is equal to the length N (the number of digits) of the array.

      • The number of rows equals the number of weight levels K selected for the network.

      • Each digit of the array is denoted by a one (1) in the corresponding row, and its absence by a zero (0).

      • Each row corresponds to one possible digit value of the numeric array being written: in the row marked «zero», a 1 marks the positions where the original array contains the digit 0, and in the row marked «ninth», a 1 marks the positions containing the digit 9.

      • Each column of the matrix contains exactly one 1, in the row corresponding to that position’s digit value; all other entries in the column are 0.

      • The sum of all ones in an array’s matrix is equal to the array’s length N; for example, for an array of 20 digits, it is 20.

      • The total number of zeros and ones in each array’s matrix is equal to the product of the array’s length N and the base of the number system used (for decimal digits, N × 10).

      Example: BCF notation of an array of 20 decimal digits [1, 9, 3, 6, 4, 5, 4, 9, 8, 7, 7, 1, 0, 7, 8, 0, 9, 8, 0, 2].

      Fig. 6. BCF image as a sparse binary matrix
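      A sketch of BCF encoding for the example above, assuming decimal digits (so the number of weight levels K equals the base, 10); rows are indexed by digit value and columns by position:

```python
def to_bcf(digits: list[int], k_levels: int = 10) -> list[list[int]]:
    """Encode a digit array as a BCF matrix: K rows by N columns,
    with exactly one 1 per column (in the row of that digit's value)."""
    matrix = [[0] * len(digits) for _ in range(k_levels)]
    for col, digit in enumerate(digits):
        matrix[digit][col] = 1
    return matrix

arr = [1, 9, 3, 6, 4, 5, 4, 9, 8, 7, 7, 1, 0, 7, 8, 0, 9, 8, 0, 2]
bcf = to_bcf(arr)
assert sum(map(sum, bcf)) == len(arr)                 # N ones in total
assert sum(len(row) for row in bcf) == len(arr) * 10  # N x base entries
```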

      A distinctive feature of the PANN network is that the iterative image training typical of neural networks can be replaced by reformatting files that carry numerical dependencies into the BCF format, or simply by loading files already in this format into the network.

      Arrays X written in BCF format are denoted as matrices |X|.
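      With images stored as matrices |X|, the neuron-image response f (X) from section 2.4.1 can be computed as the element-wise overlap of two BCF matrices. The formulation below is our reading of the definitions above, not a quotation of the PANN algorithm:

```python
def overlap(x_bcf: list[list[int]], a_bcf: list[list[int]]) -> int:
    """Count positions where two BCF matrices share a 1, i.e., positions
    where the underlying arrays carry the same digit."""
    return sum(xv & av
               for x_row, a_row in zip(x_bcf, a_bcf)
               for xv, av in zip(x_row, a_row))

# overlap(|A|, |A|) == N; each differing digit reduces the count by one.
```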

      2.4.2. Comparing Numeric Arrays

      Comparing objects or determining similarities and differences

      Determining the similarity of particular objects by comparing