Among those techniques, offloading based on a mathematical model is discussed in the next section.
2.3 Mathematical Model for Offloading
Mathematical offloading applies mathematical modeling and programming to decide how tasks are offloaded. In general terms, mathematical modeling translates a physical-world problem into a mathematical form from which solutions can be derived for any application. Such system models are divided into stochastic and deterministic processes, and the dependent factors of the system determine the parameters of the model. The advantage of the deterministic model is that it is amenable to scientific investigation, whereas the stochastic model contains entities that produce probabilities over time [17]. In both the stochastic and the deterministic case, a static model represents the system at a given point in time, while a dynamic model represents the system as it changes over time. Dynamic models, whether stochastic or deterministic, are further divided into discrete and continuous: in a discrete dynamic model, the variables change at a discrete set of points in time, while in a continuous dynamic model they change continuously with time [18]. A stochastic dynamic system can be modeled, and its optimization problem solved, using the Markov chain model. There are various types of Markov chain models for dynamic problem solving, depicted in Figure 2.7, namely: i) the Markov chain, ii) the Semi-Markov chain, iii) the Markov decision process, iv) the Hidden Markov model, and v) the queuing model. The section below describes these models in detail.
Figure 2.7 Markov chain-based stochastic process.
2.3.1 Introduction to Markov Chain Process and Offloading
Working of the Markov Chain Model - The Markov chain model is one of the simplest ways to statistically model random processes. The Markov chain model is also defined as "a stochastic process containing random variables, transitioning from one state to another depending on certain assumptions and definite probabilistic rules" [19]. Markov chains are widely used in applications ranging from text generation and auto-completion to financial modeling.
Random variables transition from one state to another according to the Markov property, which states that the probability of a random process moving from the current state to a possible future state depends only on the current state and time, and is independent of the series of states that preceded it.
Figure 2.6 illustrates the Stochastic process, which is further classified into Markov Chain, Semi Markov, Markov Decision, Hidden Markov and Queuing model.
A Markov chain describes a sequence of possible events in which each event's probability depends only on the previous event [20]. The Semi-Markov model is a stochastic model that evolves over time and defines the state at every given time [21]. As its name implies, the Markov decision process models decision making in which outcomes are partly random and partly under the decision maker's control [22]. In the Hidden Markov model, the states are hidden rather than directly observable [23]. The queuing model helps predict the length of a queue and the waiting time in it [23].
The Markov decision process is a mathematical model that evaluates the system as it transitions from one state to another according to probabilistic rules. It can be classified into two types: discrete-time Markov decision chains and continuous-time Markov decision chains. It can additionally be classified by the number of past states used to decide the next state, giving first-order, second-order, and higher-order chains. Many offloading schemes are based on the Markov decision process [22, 23].
2.3.1.1 Markov Chain Based Schemes
The Markov chain is a mathematical system that transitions from one state to another. The transitions are tallied by a transition matrix, which defines the probability of moving from one state to another [24]. The Markov chain is one of the most important stochastic processes: the present state captures all the information needed for the future evolution of the process [25]. The Markov chain model therefore falls under the stochastic offloading process [25]. Among the many available processes, the Markov chain model is chosen when dynamic decision making is required under environmental parameters such as wireless channel conditions and computation load [24].
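As a minimal sketch of the transition-matrix idea above, the following models a hypothetical two-state chain in which a task is either executed locally or offloaded; the states and probabilities are illustrative assumptions, not values from the cited schemes.

```python
import random

# Hypothetical two-state chain for illustration: a task is either
# executed locally ("local") or offloaded ("offload").
states = ["local", "offload"]

# Transition matrix P[i][j]: probability of moving from state i to state j.
# Each row must sum to 1. These numbers are assumed for illustration.
P = {
    "local":   {"local": 0.7, "offload": 0.3},
    "offload": {"local": 0.4, "offload": 0.6},
}

def step(state):
    """Sample the next state using the transition probabilities of `state`."""
    r = random.random()
    cumulative = 0.0
    for nxt, prob in P[state].items():
        cumulative += prob
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

def stationary_estimate(start, n=100_000):
    """Estimate the long-run fraction of time spent in each state."""
    counts = {s: 0 for s in states}
    state = start
    for _ in range(n):
        counts[state] += 1
        state = step(state)
    return {s: c / n for s, c in counts.items()}
```

Because the present state alone determines the next transition, the long-run behavior of the chain follows entirely from the transition matrix; for the assumed matrix above, the chain spends roughly 4/7 of its time in the "local" state.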
2.3.1.2 Schemes Based on Semi-Markov Chain
This section focuses on Semi-Markov chain based offloading. The Semi-Markov chain is developed from the Markov renewal process and is defined by its renewal kernel and initial distribution, or by other characteristics equivalent to the renewal kernel. The Semi-Markov chain model differs from the Markov chain model in that the realization of the process in a given state evolves as a random process over time [26]. To verify the security of an offloading scheme, a state transition model is used.
2.3.1.3 Schemes Based on the Markov Decision Process
The Markov decision process is a mathematical framework for a discrete-time stochastic process. It assists in decision making for models that are partly random and partly under the control of the decision-maker. Optimization problems are worked out using dynamic programming [27]. The Markov decision process consists of decision epochs, states, actions, transition probabilities, and costs. The computational cost of solving a Markov decision process grows rapidly with the number of states; this issue can be addressed by algorithms such as linear programming and value iteration.
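The value iteration algorithm mentioned above can be sketched as follows for a tiny two-state, two-action MDP; the states, actions, rewards, and discount factor here are illustrative assumptions rather than parameters from any offloading scheme.

```python
# Minimal value-iteration sketch for an assumed two-state, two-action MDP.
# P[s][a] -> list of (next_state, probability); R[s][a] -> immediate reward.
P = {
    0: {"stay": [(0, 0.9), (1, 0.1)], "go": [(1, 1.0)]},
    1: {"stay": [(1, 0.8), (0, 0.2)], "go": [(0, 1.0)]},
}
R = {
    0: {"stay": 1.0, "go": 0.0},
    1: {"stay": 2.0, "go": 0.5},
}
GAMMA = 0.9  # discount factor (assumed)

def value_iteration(eps=1e-6):
    """Repeat the Bellman optimality update until the values converge."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Best achievable expected return from state s over all actions.
            best = max(
                R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```

Each sweep updates every state once, so the cost per sweep is proportional to the number of state-action-successor triples, which is why the computation grows quickly as the state space increases.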
2.3.1.4 Schemes Based on Hidden Markov Model
The Hidden Markov model is a partially observable statistical Markov model in which the agent only partially observes the states. These models involve "hidden" generative processes with "noisy" observations correlated to the system [28]. Hidden Markov model-based schemes allow the system or device to balance processing latency, power usage, and diagnostic accuracy.
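To illustrate the "hidden states, noisy observations" idea, the following is a minimal forward-algorithm sketch that computes the likelihood of an observation sequence; the hidden states ("idle", "busy"), observation symbols, and all probabilities are assumptions made for this example.

```python
# Assumed hidden states of a device and noisy observations of its load.
states = ["idle", "busy"]
start = {"idle": 0.6, "busy": 0.4}               # initial state distribution
trans = {"idle": {"idle": 0.7, "busy": 0.3},     # hidden-state transitions
         "busy": {"idle": 0.4, "busy": 0.6}}
emit = {"idle": {"low": 0.9, "high": 0.1},       # P(observation | hidden state)
        "busy": {"low": 0.2, "high": 0.8}}

def forward(observations):
    """Return P(observations) by summing over all hidden-state paths."""
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {
            s: emit[s][obs] * sum(alpha[prev] * trans[prev][s] for prev in states)
            for s in states
        }
    return sum(alpha.values())
```

The forward recursion marginalizes over the hidden states at each step, which is what lets such schemes reason about unobservable conditions (for example, channel or server load) from observable measurements alone.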
2.3.2 Computation Offloading Schemes Based on Game Theory
Game theory is practiced to model allocation problems for wireless resources. It helps reduce the resource allocation problem by dividing it into distributed decision-making problems. The main advantage of game theory is that it focuses on strategic interactions and eliminates the need for a central controller.
Game theory models are receiving growing attention as a means of addressing wireless communication problems. A game theory model consists of a group of decision-making blocks. The users each plan a group of strategies, and applying a strategy produces a corresponding payoff.
Offloading mobile data can be expressed as a 3-tuple <T = A, B, C>, where A is the group of users, B = {B1, B2, ..., Bn} is the strategy space of the users, and C = {C1, C2, ..., Cn} is the utility of the users after an action. If Bi is the strategy chosen by a single user i, then the strategies chosen by the remaining users can be represented as B-i, and B = {Bi, B-i} is the strategy profile formed by the users. The strategy profile should be chosen at an equilibrium level: no user will rationally choose to switch from the selected strategy, since switching would decrease that user's utility.
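The equilibrium condition above can be checked mechanically for a small game. The sketch below assumes a hypothetical two-user offloading game in which offloading pays off unless both users offload and congest the shared channel; all payoff numbers are invented for illustration.

```python
# Illustrative two-user offloading game (all payoff values are assumptions).
# Each user i picks a strategy Bi from {"local", "offload"}; C[i] gives the
# utility Ci of user i under each joint strategy profile (B1, B2).
STRATEGIES = ("local", "offload")

C = {
    0: {("local", "local"): 2, ("local", "offload"): 2,
        ("offload", "local"): 4, ("offload", "offload"): 1},
    1: {("local", "local"): 2, ("offload", "local"): 2,
        ("local", "offload"): 4, ("offload", "offload"): 1},
}

def is_nash(profile):
    """A profile is an equilibrium if no user gains by deviating alone."""
    for i in (0, 1):
        for alt in STRATEGIES:
            deviated = list(profile)
            deviated[i] = alt
            if C[i][tuple(deviated)] > C[i][profile]:
                return False  # user i would rationally switch to `alt`
    return True

equilibria = [
    (b1, b2) for b1 in STRATEGIES for b2 in STRATEGIES if is_nash((b1, b2))
]
```

Under these assumed payoffs the equilibria are exactly the profiles where one user offloads and the other computes locally, matching the intuition that no user benefits from unilaterally switching strategies.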
Game theory models can be divided into two groups: (i) Cooperative game model, (ii) Non-Cooperative game model.
In