The first two terms on the right-hand side of the equation vanish, in view of the optimality conditions of x̃; thus, the following expression is obtained:
This means that λ is the variation of the Lagrangian (and hence of the objective function) for a small variation of the parameter a (see Chapter 3 for more details about dual variables).
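This sensitivity interpretation can be checked numerically. The problem below is a hypothetical one-variable example, chosen only for illustration: minimize f(x) = x² subject to x = a, whose optimal value is f*(a) = a² and whose multiplier is λ = 2a, so df*/da = λ.

```python
# Illustrative check that the multiplier equals the sensitivity of the
# optimal value with respect to the parameter a (hypothetical problem,
# not from the text): min x^2 subject to x = a.
a = 1.0
eps = 1e-6

f_star = lambda a: a ** 2      # optimal value as a function of a
lam = 2 * a                    # multiplier at the optimum (lambda = 2a)

# Finite-difference approximation of d f*/da
sensitivity = (f_star(a + eps) - f_star(a)) / eps
print(lam, sensitivity)  # 2.0 and approximately 2.0
```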
Example 2.8
Consider the following optimization problem:

  min f(x, y) = x² + y²
  subject to x + y = 1

If the problem were unconstrained, the solution would be x = 0, y = 0; however, the solution must fulfill the constraint x + y = 1. Therefore, a new function is defined as follows:

  L(x, y, λ) = x² + y² + λ(1 − x − y)

with the following optimality conditions:

  ∂L/∂x = 2x − λ = 0
  ∂L/∂y = 2y − λ = 0
  ∂L/∂λ = 1 − x − y = 0

This is a linear system of equations with solution x = 1/2, y = 1/2, and λ = 1, which constitutes the optimum of the problem.
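The linear system of optimality conditions can be solved directly. The sketch below assumes the objective f(x, y) = x² + y² with constraint x + y = 1, consistent with the stated solution x = y = 1/2, λ = 1:

```python
# Numerical sketch of Example 2.8 (assumed objective f = x^2 + y^2,
# constraint x + y = 1, consistent with the solution in the text).
import numpy as np

# Optimality conditions of L = x^2 + y^2 + lambda*(1 - x - y):
#   2x - lambda = 0
#   2y - lambda = 0
#   x + y       = 1
A = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 1.0])

x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)  # 0.5 0.5 1.0
```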
2.7 Newton's method
The optimality conditions for both unconstrained and constrained problems constitute a set of algebraic equations, S(x) in the first case and S(x, λ) in the second. This set can be solved using Newton's method.
Consider a set of algebraic equations S(x) = 0, where S : ℝⁿ → ℝⁿ is continuous and differentiable. Thus, the following first-order approximation is made around the current point x_k:

  S(x_{k+1}) ≈ S(x_k) + J(x_k)(x_{k+1} − x_k) = 0

which leads to the iteration

  x_{k+1} = x_k − J(x_k)⁻¹ S(x_k)

where J(x_k) is the Jacobian matrix of S evaluated at x_k.
This iteration is the basic Newton's method. Compared to the gradient method, this method is faster since it includes information from the second derivative. In addition, Newton's method does not require defining a step size t as in the gradient method. However, each iteration of Newton's method is computationally expensive, since it requires forming the Jacobian and solving a linear system.
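The iteration above can be sketched as follows. This is a minimal implementation assuming the user supplies S and its Jacobian J; the function names are illustrative, not from the text. Applied to the optimality conditions of Example 2.8 (a linear system), it converges in a single step:

```python
# Minimal sketch of Newton's method for S(x) = 0 (illustrative names).
import numpy as np

def newton(S, J, x0, tol=1e-8, max_iter=50):
    """Iterate x <- x - J(x)^{-1} S(x) until the step is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve the linear system J(x) * step = S(x) at each iteration
        step = np.linalg.solve(J(x), S(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Usage: optimality conditions of Example 2.8 (assumed f = x^2 + y^2,
# constraint x + y = 1); z = (x, y, lambda).
S = lambda z: np.array([2*z[0] - z[2], 2*z[1] - z[2], z[0] + z[1] - 1])
J = lambda z: np.array([[2.0, 0.0, -1.0],
                        [0.0, 2.0, -1.0],
                        [1.0, 1.0,  0.0]])
sol = newton(S, J, [0.0, 0.0, 0.0])
print(sol)  # approximately [0.5, 0.5, 1.0]
```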
Example 2.9
Consider