      The second tableau is as follows:

[image: simplex tableau]

      The third tableau is as follows:

[image: third simplex tableau]

      The unique optimal solution is given by x1 = 4, x2 = 0, x3 = 1, and z(x1, x2, x3) = −6.

      1.6.2. Auxiliary program or Phase I

      Suppose that the LP that we wish to solve is in the standard form:

[image: linear program in standard form]
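      A plausible form of this standard program, assuming the maximization convention used in the case analysis below and a non-negative right-hand side, is

(P)   max z = cT x
      subject to  Ax = b,  x ≥ 0,  with b ≥ 0.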

      The auxiliary program method proceeds by letting M → +∞ in (PM) and solving the program obtained in the limit. Let

[images: definition of the artificial variables y and of the auxiliary objective z″]

      We therefore solve the auxiliary program (PA) given by

[image: auxiliary program (PA)]

      The simplex algorithm can now be applied to (PA) starting from the basic realizable solution y = b.
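      A plausible reconstruction of the missing definitions, consistent with the case analysis below, is the following: introduce one artificial variable yi ≥ 0 per constraint and set z″(y) = −(y1 + y2 + · · · + ym), so that the auxiliary program reads

(PA)   max z″(y) = −(y1 + · · · + ym)
       subject to  Ax + y = b,  x ≥ 0,  y ≥ 0.

      Taking x = 0, y = b gives the basic realizable solution from which the algorithm is started, and max z″ ≤ 0 always holds, with equality exactly when (P) admits a realizable solution; this is the dichotomy treated in the two cases below.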

       – First case: max z″(y) < 0. In this case, some artificial variable remains positive at the optimum, that is, there exists yi > 0 for some i ∈ {1, 2, . . . , m}, and (P) does not have any realizable solutions.

       – Second case: max z″(y) = 0. The optimal solution of (PA) can be used as an initial basic realizable solution for applying the simplex algorithm to (P). The objective function of (P) is expressed in terms of the non-basic variables, and then the simplex algorithm is applied to (P).

      REMARK.– For a minimization problem, we add the sum of the artificial variables to the objective with a positive sign instead.

      Let us go into more detail for a minimization problem. The initial tableau of the auxiliary problem is as follows:

[image: initial tableau of the auxiliary problem]

      where B = I, cB = (1, 1, . . . , 1)T, and the matrix A and the vector b are those of the auxiliary program. For the original variables xi, we have ci = 0.
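      Schematically, assuming the artificial variables form the initial basis and up to the sign convention of the cost row, the tableau has the block structure

            x1 . . . xn          y1 . . . ym      b
   y        A                    I                b
   cost     −(1, . . . , 1) A    0 . . . 0        −(b1 + · · · + bm)

      since the reduced cost of an original column Aj is cj − cBT B−1 Aj = 0 − (a1j + · · · + amj), while the reduced cost of each artificial column is 1 − 1 = 0.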

      If (x*, y*) is a zero-cost optimal solution of the auxiliary problem, then any artificial variable that remains in the basis is necessarily zero, so the solution is degenerate.

      Once the optimal tableau of the auxiliary problem has been obtained and all artificial variables have left the basis, we obtain the initial tableau of the original problem P1 by deleting the columns that relate to the artificial variables and computing the initial reduced costs, which are entered in the row reserved for the cost function. The following example shows how this method works.

      EXAMPLE 1.8.– Consider the following linear programming problem:

      [1.10] [image: linear program of Example 1.8]

      We will first perform Phase I of the simplex algorithm. After adding artificial variables, we obtain the following tableaux:

[images: simplex tableaux for Example 1.8]

      This tableau is optimal. The optimal solution is x1 = 1, x2 = 0, x3 = 0 and x4 = 1, giving an optimal value of z = −1.
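      Since the tableaux and the data of Example 1.8 are not reproduced here, the short Python sketch below illustrates the Phase I mechanics on made-up data: it builds the auxiliary program explicitly (zero cost on the original variables, unit cost on the artificial ones) and hands it to scipy.optimize.linprog, which minimizes, so that max z″ = 0 corresponds to a zero minimum of the sum of the artificial variables. The matrix A and the vector b are illustrative values only, not the data of Example 1.8.

import numpy as np
from scipy.optimize import linprog

# Made-up standard-form data (NOT the data of Example 1.8): Ax = b, x >= 0, b >= 0.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])
m, n = A.shape

# Phase I: minimize y1 + ... + ym subject to Ax + y = b, x >= 0, y >= 0
# (equivalently, maximize z'' = -(y1 + ... + ym)); original variables have zero cost.
c_aux = np.concatenate([np.zeros(n), np.ones(m)])
A_aux = np.hstack([A, np.eye(m)])

res = linprog(c_aux, A_eq=A_aux, b_eq=b, bounds=[(0, None)] * (n + m), method="highs")

if res.fun > 1e-9:
    # max z'' < 0: some artificial variable stays positive, so (P) has no realizable solution
    print("the original program is infeasible")
else:
    # max z'' = 0: the x-part is a realizable point of (P) from which Phase II can start
    print("Phase I point for (P):", res.x[:n])

      On these made-up data the auxiliary optimum is zero (for instance x = (0, 0, 4, 3) satisfies Ax = b), and Phase II would then optimize the original objective over the same constraints.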

      1.6.3. Degeneracy and cycling

      Suppose that the current basic admissible solution is degenerate (i.e. at least one basic variable is zero), and let us recall a possible unfavorable scenario that can occur when the simplex algorithm is iterated. Performing a change of basis yields a value of the cost function that is less than or equal to the previous value, but the method is only guaranteed to converge if z* < z0; if the previous basic solution is degenerate, it may happen that z* = z0.

      If this phenomenon occurs multiple times over consecutive iterations, the algorithm can return to a basis that was already considered earlier. In other words, it returns to the same vertex twice. This is called cycling.

      Although degeneracy is a frequently encountered phenomenon in linear programming, cycling is not. Some examples have been given in the literature [GAS 75]. Despite the rarity of cycling, it would be desirable to specify a method that avoids it and therefore guarantees that the simplex method will converge. There are several possible choices for a rule that avoids the inconvenience of cycling.

      In the past, the most common technique was the so-called method of small perturbations. This method applies an infinitesimal geometric modification to the right-hand side vector b in order to force it to be expressible as a linear combination of m vectors with strictly positive coefficients in any basis. Each vertex is then defined as the intersection of n hyperplanes [HIL 90]. A simpler rule, known as Bland's rule, also guarantees that cycling cannot occur; it makes the following choices (a code sketch is given after the list):

       – for the variable entering the basis, choose the smallest index j such that the reduced cost is negative;

       – for the variable leaving the basis, if there is a tie in the minimum ratio θ*, choose the basic variable with the lowest index.
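      A minimal Python sketch of these two choices follows, assuming a minimization tableau; the data layout (reduced_costs, the entering column, the right-hand side rhs and the list basis of basic-variable indices) and the function names are illustrative, not taken from the book.

def blands_entering(reduced_costs, tol=1e-9):
    # smallest index j whose reduced cost is negative; None means the tableau is optimal
    for j, cj in enumerate(reduced_costs):
        if cj < -tol:
            return j
    return None

def blands_leaving(basis, column, rhs, tol=1e-9):
    # basis[i]  : index of the i-th basic variable
    # column[i] : entry of the entering column in row i
    # rhs[i]    : current value of the i-th basic variable
    ratios = [(rhs[i] / column[i], basis[i], i)
              for i in range(len(basis)) if column[i] > tol]
    if not ratios:
        return None  # unbounded: no positive entry in the entering column
    theta = min(r for r, _, _ in ratios)
    # among the rows attaining theta*, remove the basic variable with the lowest index
    return min((bi, i) for r, bi, i in ratios if r <= theta + tol)[1]

      Applying these two rules at every iteration rules out cycling.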

      1.6.4. Geometric structure of realizable solutions

      Earlier, when we solved linear programs graphically, the optimal solutions were on the boundary of the convex set of realizable solutions. If there was a unique optimal solution, it was an extreme point.

      Recall the canonical form of a linear program:

[image: linear program (P) in canonical form]

      where the matrix A is assumed to be an m × n matrix of rank m (< n). The standard form is then given by

[image: linear program (Ps) in standard form]
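      A plausible form of the two programs, with the conventions used so far, is

(P)    max z = cT x
       subject to  Ax ≤ b,  x ≥ 0,

and, after adding one slack variable per inequality constraint,

(Ps)   max z = cT x
       subject to  Ax + s = b,  x ≥ 0,  s ≥ 0.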

      It is possible to show the following result.

      Let Γ (respectively, Σ) be the set of feasible solutions of (P) (respectively of (Ps)):
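      Under that reading, the two sets would be

Γ = { x ∈ Rn : Ax ≤ b, x ≥ 0 },      Σ = { (x, s) ∈ Rn × Rm : Ax + s = b, x ≥ 0, s ≥ 0 }.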
