x̂ is the point of local minimum of the function f(x).

      The second condition of the theorem means that the matrix

$$f''(\hat{x}) = \left( \frac{\partial^2 f(\hat{x})}{\partial x_i \, \partial x_j} \right)_{i,j=1}^{n}$$

      is positive definite.

THEOREM 1.14.– (Sylvester’s criterion) A matrix A is positive definite if and only if all its leading principal minors det Ak, k = 1, … , n, are positive. A matrix A is negative definite if and only if (–1)k det Ak > 0, k = 1, … , n, where

$$A_k = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kk} \end{pmatrix}.$$

We write down the series of leading principal minors of the matrix A:

$$\Delta_1 = a_{11}, \quad \Delta_2 = \det \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \quad \ldots, \quad \Delta_n = \det A.$$

Then:

 – the matrix A is positive definite, if Δj > 0, j = 1, … , n;

 – the matrix A is negative definite, if (–1)j Δj > 0, j = 1, … , n;

 – the matrix A is non-negative (non-positive) definite, if Δj ≥ 0 ((–1)j Δj ≥ 0), j = 1, … , n, and there exists j, such that Δj = 0;

 – in all other cases the matrix A is indefinite.
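These rules are straightforward to check numerically. Below is a minimal NumPy sketch (the helper name classify_by_minors and the tolerance handling are ours, not the book's) that computes Δ1, … , Δn and applies the classification above.

```python
import numpy as np

def classify_by_minors(A, tol=1e-9):
    """Classify a symmetric matrix by its leading principal minors
    (the rules following theorem 1.14; helper name is ours)."""
    n = A.shape[0]
    # Leading principal minors Delta_1, ..., Delta_n.
    minors = [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]
    if all(d > tol for d in minors):
        return "positive definite"
    if all((-1) ** j * d > tol for j, d in enumerate(minors, start=1)):
        return "negative definite"
    zero = any(abs(d) <= tol for d in minors)
    if zero and all(d >= -tol for d in minors):
        return "non-negative definite (necessary-condition check only)"
    if zero and all((-1) ** j * d >= -tol for j, d in enumerate(minors, start=1)):
        return "non-positive definite (necessary-condition check only)"
    return "indefinite"

print(classify_by_minors(np.array([[10.0, -2.0], [-2.0, 10.0]])))  # positive definite
print(classify_by_minors(np.array([[-2.0, -2.0], [-2.0, -2.0]])))  # non-positive definite
```

Note that for semidefiniteness the leading principal minors give only a necessary condition in general (e.g. the matrix with rows (0, 0) and (0, –1) has Δ1 = Δ2 = 0 but is not non-negative definite); checking all principal minors or the eigenvalues is the reliable test.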

      EXAMPLE 1.1.– Find solutions for the optimization problem

$$f(x, y) = x^4 + y^4 - (x + y)^2 \to \mathrm{extr}.$$

Solution. The function f is continuous and f(x, y) → +∞ as |x| + |y| → ∞; hence Smax = +∞, and by the corollary of the Weierstrass theorem the minimum is attained. The first-order necessary conditions

$$f_x(x, y) = 0, \qquad f_y(x, y) = 0$$

are of the form

$$4x^3 - 2(x + y) = 0, \qquad 4y^3 - 2(x + y) = 0.$$

      Solving these equations, we find the critical points (0, 0), (1, 1), (–1, –1). To use the second-order conditions, we compute matrices composed of the second-order partial derivatives:

$$A_1 = f''(0, 0) = \begin{pmatrix} -2 & -2 \\ -2 & -2 \end{pmatrix}, \qquad A_2 = f''(1, 1) = f''(-1, -1) = \begin{pmatrix} 10 & -2 \\ -2 & 10 \end{pmatrix}.$$

The matrix –A1 is non-negative definite. Therefore, the point (0, 0) satisfies the second-order necessary conditions for a maximum. However, a direct verification of the behavior of the function f in a neighborhood of the point (0, 0) shows that (0, 0) ∉ locextr f: for small x ≠ 0 we have f(x, x) = 2x⁴ – 4x² < 0 and f(x, –x) = 2x⁴ > 0, while f(0, 0) = 0. The matrix A2 is positive definite. Thus, by theorem 1.13, the local minimum of the problem is attained at the points (1, 1), (–1, –1).

Answer. (0, 0) ∉ locextr; (1, 1), (–1, –1) ∈ locmin.
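The computations of example 1.1 can be cross-checked numerically. The sketch below assumes the objective f(x, y) = x⁴ + y⁴ – (x + y)² reconstructed above and evaluates the gradient and the Hessian at the three critical points.

```python
import numpy as np

# Objective of example 1.1 as reconstructed in the text above (an assumption).
f = lambda x, y: x**4 + y**4 - (x + y)**2
grad = lambda x, y: np.array([4 * x**3 - 2 * (x + y), 4 * y**3 - 2 * (x + y)])
hess = lambda x, y: np.array([[12 * x**2 - 2, -2.0], [-2.0, 12 * y**2 - 2]])

for p in [(0, 0), (1, 1), (-1, -1)]:
    print(p, grad(*p), np.linalg.eigvalsh(hess(*p)))
# (0, 0):   gradient [0 0], eigenvalues [-4, 0]  -> -A1 is non-negative definite
# (±1, ±1): gradient [0 0], eigenvalues [8, 12]  -> A2 is positive definite

# Behavior near (0, 0): f takes both signs, so (0, 0) is not a local extremum.
print(f(0.1, 0.1), f(0.1, -0.1))  # -0.0398 and 0.0002
```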

      1.4.1. Problems with equality constraints

      Let fk : ℝn → ℝ, k = 0,1, … , m, be differentiable functions of n real variables. The constrained optimization problem (with equality constraints) is the problem

$$f_0(x) \to \mathrm{extr}; \qquad f_k(x) = 0, \quad k = 1, \ldots, m. \qquad [1.2]$$

A point x̂ is a point of local minimum (maximum) of problem [1.2] if there exists ε > 0 such that for every point x satisfying the constraints fk(x) = 0, k = 1, … , m, and ∥x – x̂∥ < ε, the inequality f0(x) ≥ f0(x̂) (f0(x) ≤ f0(x̂)) holds true.

      The main method of solving constrained optimization problems is the method of indeterminate Lagrange multipliers. It is based on the fact that the solution of the constrained optimization problem [1.2] is attained at points that are critical in the unconstrained optimization problem

$$\mathscr{L}(x, \lambda) \to \mathrm{extr},$$

      where

$$\mathscr{L}(x, \lambda) = \lambda_0 f_0(x) + \lambda_1 f_1(x) + \cdots + \lambda_m f_m(x)$$

is the Lagrange function, and λ0, … , λm are the Lagrange multipliers.
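As an illustration of the method, consider a toy problem of our own (not an example from the book): minimize f0(x) = x1² + x2² subject to f1(x) = x1 + x2 – 1 = 0. The SymPy sketch below forms the Lagrange function and solves the stationarity system together with the constraint.

```python
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lambda1")

f0 = x1**2 + x2**2      # objective f0
f1 = x1 + x2 - 1        # equality constraint f1 = 0

# Since f1'(x) = (1, 1) is nonzero (hence linearly independent),
# we may take lambda0 = 1 in the Lagrange function.
L = f0 + lam * f1
stationarity = [sp.diff(L, v) for v in (x1, x2)]
print(sp.solve(stationarity + [f1], [x1, x2, lam], dict=True))
# [{x1: 1/2, x2: 1/2, lambda1: -1}]
```

Setting λ0 = 1 here is exactly the normalization permitted by the sufficient condition in theorem 1.15 below.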

THEOREM 1.15.– (Lagrange theorem) Let x̂ be a point of local extremum of the problem [1.2], and let the functions fi(x), i = 0, 1, … , m, be continuously differentiable in a neighborhood U of the point x̂. Then there exist Lagrange multipliers λ0, … , λm, not all equal to zero, such that the stationarity condition on x of the Lagrange function is fulfilled:

$$\mathscr{L}_x(\hat{x}, \lambda) = \sum_{i=0}^{m} \lambda_i f_i'(\hat{x}) = 0.$$

In order that λ0 ≠ 0, it is sufficient that the vectors f1′(x̂), … , fm′(x̂) be linearly independent.

      To prove the theorem, we use the inverse function theorem in a finite-dimensional space.

THEOREM 1.16.– (Inverse function theorem) Let F1(x1, … , xs), … , Fs(x1, … , xs) be s functions of s variables that are continuously differentiable in a neighborhood U of the point x = 0 and satisfy Fi(0) = 0, i = 1, … , s, and let the Jacobian

$$\det \left( \frac{\partial F_i(0)}{\partial x_j} \right)_{i,j=1}^{s}$$

be not equal to zero. Then there exist numbers ε > 0, δ > 0, K > 0 such that for any y = (y1, … , ys) with ∥y∥ ≤ ε we can find x = (x1, … , xs) which satisfies the conditions ∥x∥ ≤ δ, Fi(x) = yi, i = 1, … , s, and ∥x∥ ≤ K∥y∥.
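The quantitative content of the theorem, the bound ∥x∥ ≤ K∥y∥, can be seen on a toy map of our own choosing (not from the book): F(x1, x2) = (x1 + x2², x2 + x1²) has F(0) = 0 and the identity Jacobian at 0, so a simple fixed-point iteration solves F(x) = y for small y.

```python
import numpy as np

# Toy map with F(0) = 0 and Jacobian I at the origin (our illustration).
F = lambda x: np.array([x[0] + x[1]**2, x[1] + x[0]**2])

def solve_F(y, iters=50):
    """Solve F(x) = y near 0 by the fixed-point iteration x <- y - (F(x) - x),
    which contracts for small y because F(x) - x is quadratically small."""
    x = np.zeros(2)
    for _ in range(iters):
        x = y - (F(x) - x)
    return x

rng = np.random.default_rng(0)
for _ in range(3):
    y = rng.uniform(-0.1, 0.1, size=2)
    x = solve_F(y)
    # Verify F(x) = y and observe the bounded ratio ||x|| / ||y||, i.e. K.
    print(np.allclose(F(x), y), np.linalg.norm(x) / np.linalg.norm(y))
```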

PROOF.– We prove the Lagrange theorem by contradiction. Suppose that the stationarity condition

$$\sum_{i=0}^{m} \lambda_i f_i'(\hat{x}) = 0$$

does not hold for any nontrivial collection of multipliers λ0, … , λm, that is, the vectors fi′(x̂), i = 0, 1, … , m, are linearly independent, which means that the rank of the matrix

$$\begin{pmatrix} \dfrac{\partial f_0(\hat{x})}{\partial x_1} & \cdots & \dfrac{\partial f_0(\hat{x})}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_m(\hat{x})}{\partial x_1} & \cdots & \dfrac{\partial f_m(\hat{x})}{\partial x_n} \end{pmatrix}$$