PANN: A New Artificial Intelligence Technology

      Tutorial

      Boris Zlotin

      Vladimir Matsenko

      Editor Anatol Guin

       Progress, Inc

      © Boris Zlotin, 2024

      © Vladimir Matsenko, 2024

      © Progress, Inc, 2024

      ISBN 978-5-0064-2381-7

      Created with Ridero smart publishing system

      Keywords:

      • unique properties,

      • transparency of functioning,

      • simple mathematical model,

      • low cost of implementation and use.

      From the authors

      The authors, Boris Zlotin, the developer of the theoretical foundations of the PANN and software products based on it, and Vladimir Matsenko, an implementer of these products and participant in the creation and testing of the theory, express their gratitude to those who helped in this work and made a substantial creative contribution:

      • Dmitry Pescianschi, the originator of the general idea of a new approach to neural network design.

      • Vladimir Proseanic, Anatol Guin, Sergey Faer, Oleg Gafurov, and Alla Zusman, who actively supported the development of PANN with their experience and knowledge in the Theory of Inventive Problem Solving (TRIZ) and their talents.

      • Ivan Ivanovich Negreshny, whose constructive criticism helped the authors recognize and correct their shortcomings.

      Part 1. A New Kind of Neural Network: Progress Artificial Neural Network (PANN)

      1. Introduction to the Problem

      Where did neural networks come from, and why are we dissatisfied with them?

      The development of artificial neural networks began with the work of Turing, McCulloch, Pitts, and Hebb. Based on their ideas, in 1958, Frank Rosenblatt created the first artificial neural network, the «Perceptron,» capable of recognizing and classifying different objects after appropriate training. Unfortunately, the very concept of the perceptron carried a critical flaw rooted in Dale’s then-prevailing biological doctrine: «…A neuron uses one and only one neurotransmitter for all synapses.» This doctrine was transferred to all artificial neural networks through a rule: «…One artificial synapse uses one and only one synaptic weight.» This rule may be called the Rosenblatt Doctrine.

      In the 1970s, Dale’s doctrine was rejected by biological science. Unfortunately, Rosenblatt’s doctrine remains unchanged in all neural networks (recurrent, resonant, deep, convolutional, LSTM, generative, and forward and backward error propagation networks). This doctrine makes it possible to train networks only through an iterative approach known as the gradient descent method, which requires enormous computation. And it is precisely this doctrine that is «to blame» for the inability to construct an adequate working theory of neural networks. These networks are also characterized by opacity and incomprehensibility, relatively low training speed, difficulty in completing training, and many other innate problems. For more information on the issues of classical neural networks, see Appendix 1.
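      To make the cost concrete, below is a minimal Python sketch of such an iterative training loop: a single classical neuron with one weight per synapse, trained by gradient descent on random, purely illustrative data. Every iteration recomputes predictions and gradients over the entire training set, and real networks repeat this across millions of weights.

import numpy as np

rng = np.random.default_rng(0)
x = rng.random((100, 256))         # 100 training images, 16 x 16 pixels each
y = rng.integers(0, 2, size=100)   # binary labels (illustrative only)
w = np.zeros(256)                  # one weight per synapse (Rosenblatt rule)

learning_rate = 0.1
for _ in range(1000):                  # many full passes over the data
    pred = 1 / (1 + np.exp(-x @ w))    # sigmoid output of the neuron
    grad = x.T @ (pred - y) / len(y)   # gradient of the cross-entropy loss
    w -= learning_rate * grad          # one small step down the gradient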

      Therefore, such networks are developed mainly by trial and error. This leads to complexity and low reliability, the need for costly equipment, complex power-hungry computations, and expensive manual labor for training.

      The critical «Rosenblatt error» was discovered by researchers (TRIZ specialists) at the deep-tech company Progress, Inc., who also found a way to eliminate it. Thus, it became possible to create a fundamentally new type of neural network called PANN (Progress Artificial Neural Network). PANN networks and their operations are transparent, predictable, and thousands of times less costly, and they provide better solutions to many intelligent tasks. Eighteen patents in many countries worldwide protect PANN’s designs and operations. Several new software versions have already been created and tested based on these concepts.

      2. Scientific and technical foundations of the PANN network

      In this chapter, we will describe the main design features and the theoretical basics of the PANN network.

      PANN differs from classical neural networks in that it has a unique design for the main element: the so-called formal neuron. A new formal neuron allows for a different way of training. As a result:

      1. The network’s operation is completely transparent, which has made it possible to establish a simple and straightforward theory that predicts the results of its actions.

      2. PANN can be implemented on low-cost hardware. Its training and operation costs are much lower than those of classical neural networks.

      3. PANN trains many times faster than classical neural networks.

      4. PANN can be trained on additional (new) data at any time.

      5. PANN does not have the harmful effect of «overfitting.»

      2.1. A NEW DESIGN OF THE FORMAL NEURON

      Classical neural networks are built of typical «bricks» – formal neurons of simple design, described by McCulloch and Pitts and implemented by Rosenblatt. The main problem with neural networks is the poor design of this formal neuron.

      A formal Rosenblatt neuron has one synaptic weight per synapse. PANN’s unique feature is the formal Progress neuron, which has two or more synaptic weights at each synapse.

      Fig. 1. Comparison of formal neurons

      In the Progress neuron, as in the Rosenblatt neuron, each input signal travels to the adder through a single synaptic weight. In the Progress neuron, however, a distributor selects which of the synapse’s weights the signal passes through, based on the magnitude of the input signal.
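      A minimal Python sketch of this forward pass is shown below. It assumes input signals are scaled to [0, 1) and that the adder sums the weights selected by the distributor; the exact PANN summation rule is not given here, so these details are assumptions for illustration only.

import numpy as np

def progress_neuron(x, weights, levels):
    """Sketch of a Progress-style neuron with several weights per synapse.

    x       : 1-D array of input signals, scaled to [0, 1)
    weights : array of shape (n_inputs, levels); each synapse carries
              'levels' weights instead of Rosenblatt's single weight
    levels  : number of discrete signal levels (e.g., 2 for black-and-white)
    """
    # Distributor: map each input signal to a discrete level index.
    idx = np.minimum((x * levels).astype(int), levels - 1)
    # Adder: each synapse contributes the weight its level selected.
    return weights[np.arange(len(x)), idx].sum()

      For a black-and-white image, levels = 2, so each synapse holds one weight for «0» and one for «1», and the distributor simply picks between them.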

      The main characteristics that describe the Progress neuron are:

      • The Progress neuron operates with images presented as numerical (digital) sequences. These can be pictures, films, texts, sound recordings, tables, charts, etc.

      • Each Progress neuron is connected to all network inputs. The number of inputs equals the number of digits in the digital sequence (image). For images in raster graphics, this is the number of pixels. For example, at a resolution of 16 × 16, the number of inputs I = 256; at a resolution of 32 × 32, the number of inputs I = 1024.

      • The number of synaptic weights of the Progress neuron is at least two. When working with black-and-white graphics and simple tables, two weights («0» and «1») are sufficient. When working with color pictures, you can use any graphical representation, for example, palettes of 2, 4, 8, 16, or 256 colors, and so on. It should be noted that for the effective recognition of different types of images, there are optimal palettes, which are easy to determine by simple testing. Here, an unexpected property of PANN appears: the optimal number of colors for recognition is usually small; in experiments, this number was generally between 6 and 10.

      • The number of pixels must be the same for all images under consideration, and any aspect ratio of rectangular images can be used. For the effective recognition of different types of images, there are likewise optimal resolutions, which are easy to determine by simple testing. Here, too, an unexpected property of PANN manifests itself: the optimal number of pixels for recognition is usually small; for example, for recognizing various kinds of portraits, the best resolution can be 32 × 32. A preprocessing sketch illustrating these last two points follows this list.
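      The sketch below shows the image preparation the last two bullets imply: resize to a fixed small resolution, then quantize to a small palette, yielding one discrete level per network input. It uses the Pillow library; the function name and the level-mapping formula are illustrative assumptions, not PANN’s specified procedure.

from PIL import Image

def prepare_image(path, resolution=32, colors=8):
    """Resize an image and quantize it to a small palette (illustrative)."""
    img = Image.open(path).convert("L")           # grayscale, pixel values 0..255
    img = img.resize((resolution, resolution))    # e.g., 32 x 32 -> 1024 inputs
    # Map each pixel to one of 'colors' discrete levels (0 .. colors - 1).
    return [p * colors // 256 for p in img.getdata()]

      At the default settings, the resulting list of 1024 levels would feed the 1024 inputs of each Progress neuron.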
