Artificial Intelligence Glossarium: 1000 terms. Alexander Chesalov.

Author: Alexander Chesalov
Publisher: Izdatelskie resheniya
ISBN: 9785005901644
large numbers of business analysts, data scientists, endusers, developers and operational systems across the organization, simultaneously creating, validating, using and deploying sophisticated analyses and mathematical models at scale.

      “B”

      Backpropagation (Обратное распространение ошибки) – Backpropagation, also called “backward propagation of errors,” is an algorithm commonly used to train deep neural networks: it propagates the loss gradient backward through the network, layer by layer, so that each weight can be adjusted to reduce the error.
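As an illustration, here is a minimal sketch of the idea on a single sigmoid neuron (the function names, targets, and learning rate are invented for the example; real networks apply the same chain rule across many layers):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w, b, x, y, lr=0.5):
    """One gradient-descent step on a single sigmoid neuron.

    Forward pass: prediction p = sigmoid(w*x + b), loss L = (p - y)**2.
    Backward pass (backpropagation): apply the chain rule to obtain
    dL/dw and dL/db, then move the parameters against the gradient.
    """
    z = w * x + b
    p = sigmoid(z)
    # Chain rule: dL/dz = dL/dp * dp/dz
    dL_dp = 2.0 * (p - y)
    dp_dz = p * (1.0 - p)
    dL_dz = dL_dp * dp_dz
    # Gradients with respect to the parameters
    dL_dw = dL_dz * x
    dL_db = dL_dz
    return w - lr * dL_dw, b - lr * dL_db

# Fit the neuron so that input 1.0 maps to target 1.0
w, b = 0.0, 0.0
for _ in range(200):
    w, b = train_step(w, b, x=1.0, y=1.0)
print(sigmoid(w * 1.0 + b))  # close to 1.0 after training
```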

      Backpropagation through time (BPTT) (Обратное распространение во времени) – A gradient-based technique for training certain types of recurrent neural networks, such as Elman networks. The algorithm was derived independently by numerous researchers.

      Backward Chaining (Обратная цепочка (или обратное рассуждение)) – Backward chaining, also called goal-driven inference technique, is an inference approach that reasons backward from the goal to the conditions used to get the goal. Backward chaining inference is applied in many different fields, including game theory, automated theorem proving, and artificial intelligence [72].
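A minimal sketch of goal-driven inference (the rule base, facts, and function name are invented for the example): to prove a goal, the engine looks up rules whose conclusion matches the goal and recursively tries to prove their premises.

```python
def backward_chain(goal, rules, facts):
    """Goal-driven inference: try to prove `goal` by reasoning backward.

    `rules` maps a conclusion to a list of alternative premise lists;
    `facts` is the set of propositions known to be true.
    """
    if goal in facts:
        return True
    for premises in rules.get(goal, []):
        if all(backward_chain(p, rules, facts) for p in premises):
            return True
    return False

rules = {
    "mammal": [["has_fur"], ["gives_milk"]],
    "dog": [["mammal", "barks"]],
}
facts = {"gives_milk", "barks"}
print(backward_chain("dog", rules, facts))  # True: gives_milk -> mammal, plus barks -> dog
```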

      Bag-of-words model (Модель мешка слов) — A simplifying representation used in natural language processing and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. The bag-of-words model has also been used for computer vision. The bag-of-words model is commonly used in methods of document classification where the (frequency of) occurrence of each word is used as a feature for training a classifier [73].
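A minimal sketch of the representation (vocabulary and sentence are invented for the example): the text becomes a vector of word counts, with order discarded but multiplicity kept.

```python
from collections import Counter

def bag_of_words(text, vocabulary):
    """Represent `text` as word counts over a fixed vocabulary,
    discarding grammar and word order but keeping multiplicity."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

vocab = ["the", "cat", "sat", "mat", "dog"]
vec = bag_of_words("The cat sat on the mat", vocab)
print(vec)  # [2, 1, 1, 1, 0]
```

Note that "the" appears twice in the text, so its count is 2, while "dog" never appears and gets 0.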

      Bag-of-words model in computer vision (Модель мешка слов в компьютерном зрении) — In computer vision, the bag-of-words model (BoW model) can be applied to image classification, by treating image features as words. In document classification, a bag of words is a sparse vector of occurrence counts of words; that is, a sparse histogram over the vocabulary. In computer vision, a bag of visual words is a vector of occurrence counts of a vocabulary of local image features.

      Baldwin effect (Эффект Балдвина) – The hypothesis that skills acquired by organisms through learning during their lifetimes can, after a certain number of generations, become recorded in the genome.

      Baseline (Базовый уровень) – A model used as a reference point for comparing how well another model (typically, a more complex one) is performing. For example, a logistic regression model might serve as a good baseline for a deep model. For a particular problem, the baseline helps model developers quantify the minimal expected performance that a new model must achieve for the new model to be useful.

      Batch (Пакет) – The set of examples used in one gradient update of model training.

      Batch Normalization (Пакетная нормализация) – A technique that normalizes a layer’s inputs over each training batch so that they are centered around zero, often with the standard deviation scaled to unity; this tends to stabilize and speed up training.
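The core normalization step can be sketched as follows (a simplified illustration: a full batch-norm layer also has learnable scale and shift parameters, which are omitted here):

```python
import math

def batch_norm(batch, eps=1e-5):
    """Normalize a batch of values to zero mean and unit variance.

    `eps` guards against division by zero for constant batches.
    """
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [(x - mean) / math.sqrt(var + eps) for x in batch]

normalized = batch_norm([1.0, 2.0, 3.0, 4.0])
# the result has mean ~0 and standard deviation ~1
```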

      Batch size (Размер партии) – The number of examples in a batch. For example, the batch size of SGD is 1, while the batch size of a mini-batch is usually between 10 and 1000. Batch size is usually fixed during training and inference; however, TensorFlow does permit dynamic batch sizes.
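Splitting a dataset into mini-batches can be sketched like this (the helper name is invented for the example):

```python
def make_batches(examples, batch_size):
    """Split a dataset into consecutive mini-batches of `batch_size`
    examples each; the last batch may be smaller."""
    return [examples[i:i + batch_size]
            for i in range(0, len(examples), batch_size)]

batches = make_batches(list(range(10)), batch_size=4)
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```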

      Bayes’s Theorem (Теорема Байеса) – A famous theorem used by statisticians to describe the probability of an event based on prior knowledge of conditions that might be related to that event.
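A worked sketch of the theorem on the classic diagnostic-test example (the rates and prevalence are invented numbers for illustration):

```python
def bayes(prior, likelihood, false_positive_rate):
    """P(H | E) = P(E | H) * P(H) / P(E), where the evidence is
    P(E) = P(E | H) * P(H) + P(E | not H) * P(not H)."""
    evidence = likelihood * prior + false_positive_rate * (1.0 - prior)
    return likelihood * prior / evidence

# A test that is 99% sensitive with a 5% false-positive rate,
# for a condition with 1% prevalence:
posterior = bayes(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(round(posterior, 3))  # ~0.167: a positive result is still probably a false alarm
```

The counterintuitive result (about 17% despite a 99%-sensitive test) is exactly the kind of prior-dependent reasoning the theorem captures.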

      Bayesian classifier in machine learning (Байесовский классификатор в машинном обучении) – A family of simple probabilistic classifiers based on Bayes’ theorem and the “naive” assumption that the features of the objects being classified are mutually independent.

      Bayesian Filter (Фильтрация по Байесу) – A program that uses Bayesian logic to evaluate the header and content of email messages and determine whether they constitute spam – unsolicited email, the electronic equivalent of hard-copy bulk or junk mail. A Bayesian filter works with the probabilities of specific words appearing in the header or content of an email; certain words, such as “Viagra” and “refinance,” indicate a high probability that the email is spam [74].
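A minimal sketch of how such a filter combines per-word probabilities (the word likelihoods below are hypothetical values that a real filter would estimate from a training corpus; unseen words fall back to a small default):

```python
import math

def spam_probability(words, word_given_spam, word_given_ham, p_spam=0.5):
    """Naive Bayes spam score: combine per-word likelihoods under the
    assumption that words occur independently given the class.
    Log-probabilities avoid floating-point underflow on long messages."""
    log_spam = math.log(p_spam)
    log_ham = math.log(1.0 - p_spam)
    for w in words:
        log_spam += math.log(word_given_spam.get(w, 0.01))
        log_ham += math.log(word_given_ham.get(w, 0.01))
    # Convert the two joint log-probabilities back to P(spam | words)
    return 1.0 / (1.0 + math.exp(log_ham - log_spam))

# Hypothetical likelihoods, as if estimated from labeled training email
p_w_spam = {"viagra": 0.8, "refinance": 0.6, "meeting": 0.05}
p_w_ham = {"viagra": 0.01, "refinance": 0.02, "meeting": 0.4}
print(spam_probability(["viagra", "refinance"], p_w_spam, p_w_ham))  # close to 1
```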

      Bayesian Network (Байесовская сеть) – also called belief network, or probabilistic directed acyclic graphical model, is a probabilistic graphical model (a statistical model) that represents a set of variables and their conditional dependencies via a directed acyclic graph [75].

      Bayesian optimization (Байесовская оптимизация) – A probabilistic regression model technique for optimizing computationally expensive objective functions by instead optimizing a surrogate that quantifies the uncertainty via a Bayesian learning technique. Since Bayesian optimization is itself very expensive, it is usually used to optimize expensive-to-evaluate tasks that have a small number of parameters, such as selecting hyperparameters.

      Bayesian programming (Байесовское программирование) – A formalism and a methodology for having a technique to specify probabilistic models and solve problems when less than the necessary information is available.

      Bees algorithm (Алгоритм пчелиной колонии) – A population-based search algorithm developed by Pham, Ghanbarzadeh et al. in 2005. It mimics the food-foraging behaviour of honey bee colonies. In its basic version the algorithm performs a kind of neighbourhood search combined with global search, and can be used for both combinatorial and continuous optimization. The only condition for applying the bees algorithm is that some measure of distance between solutions is defined. The effectiveness and specific abilities of the bees algorithm have been demonstrated in a number of studies.
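A simplified one-dimensional sketch of the idea (all parameter values, the shrinking-patch schedule, and the function names are invented for the example; published versions of the algorithm are more elaborate): scout bees sample the space at random, recruited bees search the neighbourhood of the best sites, and the rest keep exploring globally.

```python
import random

def bees_algorithm(f, bounds, n_scouts=20, n_best=5, n_recruits=10,
                   patch=0.5, iterations=50, seed=0):
    """Minimal bees-algorithm sketch for minimizing a 1-D function `f`."""
    rng = random.Random(seed)
    lo, hi = bounds
    sites = [rng.uniform(lo, hi) for _ in range(n_scouts)]
    for _ in range(iterations):
        sites.sort(key=f)
        new_sites = []
        # Local search: recruited bees explore a patch around each best site
        for s in sites[:n_best]:
            candidates = [min(hi, max(lo, s + rng.uniform(-patch, patch)))
                          for _ in range(n_recruits)]
            new_sites.append(min(candidates + [s], key=f))
        # Global search: the remaining bees scout at random
        new_sites += [rng.uniform(lo, hi) for _ in range(n_scouts - n_best)]
        sites = new_sites
        patch *= 0.95  # gradually narrow the neighbourhood
    return min(sites, key=f)

best = bees_algorithm(lambda x: (x - 3.0) ** 2, bounds=(-10.0, 10.0))
print(best)  # near the minimum at x = 3.0
```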

      Behavior informatics (BI) (Информатика поведения) – The informatics of behaviors, aimed at obtaining behavior intelligence and behavior insights.

      Behavior tree (BT) (Дерево поведения) – A mathematical model of plan execution used in computer science, robotics, control systems and video games. Behavior trees describe switching between a finite set of tasks in a modular fashion. Their strength comes from their ability to create very complex tasks composed of simple tasks, without worrying how the simple tasks are implemented. BTs present some similarities to hierarchical state machines, with the key difference that the main building block of a behavior is a task rather than a state. Their ease of human understanding makes BTs less error-prone and very popular in the game developer community. BTs have been shown to generalize several other control architectures [76].
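The two classic composite nodes can be sketched like this (a toy illustration; the leaf tasks and the game state are invented for the example, and real BT libraries also model a "running" status for tasks that span multiple ticks):

```python
# Minimal behavior-tree sketch: each task takes a state dict and
# returns True (success) or False (failure).
def sequence(*children):
    """Succeeds only if every child task succeeds, in order."""
    def run(state):
        return all(child(state) for child in children)
    return run

def selector(*children):
    """Tries children in order and succeeds at the first success."""
    def run(state):
        return any(child(state) for child in children)
    return run

# Leaf tasks for a hypothetical game character
def enemy_visible(state): return state.get("enemy_visible", False)
def attack(state): state["attacked"] = True; return True
def patrol(state): state["patrolling"] = True; return True

# Attack if an enemy is visible, otherwise patrol
root = selector(sequence(enemy_visible, attack), patrol)

state = {"enemy_visible": True}
root(state)
print(state)  # the character attacked and never started patrolling
```

Because `selector` short-circuits on the first success, the `patrol` branch never runs when the attack sequence succeeds; this modularity is what lets complex behaviors compose from simple tasks.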

      Belief-desire-intention software model (BDI) (Модель убеждений, желаний и намерений) — A software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent’s beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting


72. Backward Chaining [Electronic resource] // educba.com URL: https://www.educba.com/backward-chaining/ (accessed 11.03.2022)

73. Bag-of-words model [Electronic resource] // machinelearningmastery.ru URL: https://www.machinelearningmastery.ru/gentle-introduction-bag-words-model/ (accessed 11.03.2022)

74. Bayesian Filter [Electronic resource] // certsrv.ru URL: http://certsrv.ru/eset_ss.ru/pages/bayes_filter.htm (accessed 12.02.2022)

75. Bayesian Network [Electronic resource] // dic.academic.ru URL: https://dic.academic.ru/dic.nsf/ruwiki/1738444 (accessed 31.01.2022)

76. Behavior tree (BT) [Electronic resource] // habr.com URL: https://habr.com/ru/company/cloud_mts/blog/306214/ (accessed 31.01.2022)