Simulation and Analysis of Mathematical Methods in Real-Time Engineering Applications

      1.1.2 Problem-Solving Techniques

      Several problem-solving tasks can be formulated as a state-space search. A state space is made up of all the states of the domain and a set of operators that transform one state into another. The states are best thought of as the nodes of a connected graph and the operators as its edges. Some nodes are designated as goal nodes, and a problem is said to be solved when a path from an initial state to a goal state has been identified. State spaces can become very large, and different search methods are needed to keep the search efficient [7].
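      As a concrete illustration, the sketch below performs a breadth-first search over such a state space in Python. The is_goal and successors functions are hypothetical stand-ins for the domain's goal test and operators; they are not taken from the text.

from collections import deque

def breadth_first_search(initial_state, is_goal, successors):
    """Search a state space: states are nodes, operators are edges.

    successors(state) is assumed to return the states reachable by
    applying each operator once; is_goal(state) marks goal nodes.
    Returns a list of states from the initial state to a goal, or None.
    """
    frontier = deque([[initial_state]])      # paths still to be extended
    visited = {initial_state}                # states already generated
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path                      # problem solved: path found
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                              # goal unreachable

# Toy example: reach 10 from 1 using the operators "+1" and "*2".
print(breadth_first_search(1, lambda s: s == 10,
                           lambda s: [s + 1, s * 2]))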

      A) Problem Reduction: To make searching simpler, this strategy involves transforming the problem space. Examples of problem reduction include: (a) planning in an abstract space with macro-operators before filling in the actual operator details; (b) means-ends analysis, which reasons backwards from a known goal; and (c) sub-goaling.

      B) Search Reduction: This approach involves showing that the solution to the problem cannot depend on searching a certain node. There are several reasons why this may be true: (a) there can be no solution in this node’s subtree; this approach has been referred to as “constraint satisfaction” and involves noting that the conditions achievable in the subtree below the node are insufficient to meet some minimum requirement of a solution; (b) any possible solution in the subtree below this node is inferior to a solution in another direction; (c) the node has already been investigated elsewhere in the search.

      C) Adaptive searching techniques: These strategies use evaluation functions to decide which “next best” node to expand. Some algorithms (A*) expand the node most likely to contain the optimal solution, as in the sketch below; others (B*) expand the node most likely to add the most information to the solution process.
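      The following is a minimal best-first sketch of this idea in Python, assuming a heuristic evaluation function h(n); with f(n) = g(n) + h(n) as the evaluation function it behaves like A*. The toy number-line problem at the end is purely illustrative.

import heapq

def a_star(start, is_goal, successors, h):
    """Expand the 'next best' node: the one with lowest f(n) = g(n) + h(n).

    successors(state) is assumed to yield (next_state, step_cost) pairs
    and h(state) is a heuristic estimate of the remaining cost to a goal.
    """
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy example: shortest path on a number line from 0 to 7 with unit steps.
path, cost = a_star(0, lambda s: s == 7,
                    lambda s: [(s - 1, 1), (s + 1, 1)],
                    lambda s: abs(7 - s))
print(path, cost)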

      In the field of artificial intelligence, machine learning algorithms have brought about sweeping change, capturing aspects of human discernment to a remarkable degree. There are various types of algorithms, and the most common task among them is classification. Logistic regression, naive Bayes, decision trees, boosted trees, random forests, k-nearest neighbours, and support vector machines all fall under classification algorithms. The classification process follows a predefined procedure in which a model is trained on sample data provided by the user. Decision-making is central to every user, and classification, as a form of supervised learning, derives its decisions from the labelled examples the user supplies.
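      As an illustrative sketch of supervised classification (assuming the scikit-learn library and its bundled iris data set, neither of which is mentioned in the text), a decision tree is trained on labelled sample data and then used to classify unseen observations:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Labelled sample data supplied by the user: features X, class labels y.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Supervised learning: fit a decision tree on the training data ...
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

# ... and use it to classify unseen samples.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

Any of the other classifiers listed above could be swapped in with the same fit/predict pattern.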

      Machine learning (ML) and deep learning (DL) are popular right now, as there is a lot of fascinating work going on in those areas, and for good reason. The hype makes it easy to overlook more tried-and-tested methods of mathematical modelling, but that does not mean those methods should be forgotten.

      We can look at this landscape in terms of the Gartner Hype Cycle.

      1.2.1 Tried and True Tools

      Let’s look at a few of these established tools that continue to be helpful: control theory, signal processing, and mathematical optimization.

      Signal processing, which deals with the representation and transformation of any signal, from time series to hyperspectral images, is another useful tool. Classical signal-processing transformations, such as spectrograms and wavelet transforms, also make useful features for ML techniques. Many recent developments in speech ML use these representations as inputs to a deep neural network. At the same time, classical signal-processing filters, such as the Kalman filter, are very effective first solutions that get you 80% of the way to a solution with 20% of the effort. Furthermore, such strategies are much more interpretable than more advanced DL ones [9].
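      A minimal one-dimensional Kalman filter sketch, using numpy and synthetic measurements, shows how little code such a first solution can take; the noise variances and the constant-signal model here are illustrative assumptions, not values from the text:

import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.1**2):
    """Minimal 1-D Kalman filter for a constant signal in noise.

    q: process-noise variance, r: measurement-noise variance.
    Returns the filtered estimate after each measurement.
    """
    x, p = 0.0, 1.0            # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q              # predict: variance grows by process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update the estimate with measurement z
        p = (1 - k) * p        # update the estimate variance
        estimates.append(x)
    return np.array(estimates)

# Noisy measurements of a constant true value of 1.0.
rng = np.random.default_rng(0)
z = 1.0 + 0.1 * rng.standard_normal(50)
print(kalman_1d(z)[-5:])       # estimates converge towards 1.0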

      Mathematical optimization, finally, is concerned with finding optimal solutions to a given objective function. Classical applications include linear programming to optimise product allocation and nonlinear programming to optimise financial portfolio allocations. Advances in DL are partly due to advances in the underlying optimization techniques, such as stochastic gradient descent with momentum, that allow training to escape local minima.
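      A small product-allocation sketch with scipy.optimize.linprog illustrates the linear-programming case; the two products, their profits, and the resource limits are invented purely for illustration:

from scipy.optimize import linprog

# Hypothetical product-allocation problem: maximise profit 3*x1 + 5*x2
# subject to machine-time and material constraints.
# linprog minimises, so the objective coefficients are negated.
c = [-3, -5]                       # profit per unit of products 1 and 2
A_ub = [[1, 2],                    # machine hours used per unit
        [3, 2]]                    # material used per unit
b_ub = [40, 60]                    # available hours and material
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("units of each product:", res.x, "maximum profit:", -res.fun)

The same pattern scales to far larger allocation problems; only the coefficient matrices grow.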

      Mathematical optimization, like the other methods, is highly complementary to ML. These tools do not work against each other; instead, they provide interesting opportunities to be combined.

      1.2.2 Joining Together Old and New

      Many practical solutions across different fields combine the modern ML/DL toolkit with conventional mathematical modelling techniques. For instance, in a thermodynamic parameter-estimation problem, you can combine state-space modelling techniques with ML to infer unobserved system parameters. Or, in a marketing coupon optimization problem, you can combine ML-based forecasting of consumer behaviour with a broader mathematical optimization to optimise the coupons sent.

      Manifold has extensive experience at the interface of signal processing and ML. A common pattern we have deployed is to use signal processing for feature engineering and combine it with modern ML to identify temporal events based on those features. Features inspired by multivariate time-series signal processing, such as the short-time Fourier transform (STFT), exponential moving averages, and edge finders, allow domain experts to quickly encode their knowledge into the modelling problem. Using ML then allows the system to learn continuously from additional annotated data and improve its output over time.
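      The sketch below illustrates this feature-engineering pattern on a synthetic signal, computing STFT features, an exponential moving average, and a crude edge finder with scipy and pandas; the signal and all parameters are illustrative assumptions, not taken from Manifold’s deployments:

import numpy as np
import pandas as pd
from scipy import signal

# Synthetic example: one noisy channel sampled at 1 kHz for 2 seconds.
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

# Short-time Fourier transform: time-frequency features per window.
freqs, win_times, Zxx = signal.stft(x, fs=fs, nperseg=256)
stft_power = np.abs(Zxx) ** 2          # shape: (frequencies, windows)

# Exponential moving average: a smoothed amplitude feature.
ema = pd.Series(x).ewm(span=50).mean().to_numpy()

# A crude "edge finder": large jumps in the smoothed signal.
edges = np.where(np.abs(np.diff(ema)) > 0.05)[0]

print(stft_power.shape, ema.shape, edges[:5])

Feature matrices like these can then be fed to any of the classifiers discussed earlier to detect temporal events.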

      1.2.3 Markov Chain