The third tableau is as follows:
The unique optimal solution is given by x1 = 4, x2 = 0, x3 = 1, and z(x1, x2, x3) = −6.
1.6.2. Auxiliary program or Phase I
Suppose that the LP that we wish to solve is in the standard form:
The auxiliary program method proceeds by letting M → +∞ in (PM) and solving the program obtained in the limit. Let
Maximizing z′ (x, y) is equivalent to maximizing z″ (x, y), or maximizing
We therefore solve the auxiliary program (PA) given by
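Writing y = (y1, . . . , ym) for the artificial variables and assuming, as usual in Phase I, that b ≥ 0, the auxiliary program can be written as

$$(P_A)\qquad \max\; z''(x, y) = -\sum_{i=1}^{m} y_i \quad\text{subject to}\quad Ax + y = b,\quad x \ge 0,\ y \ge 0.$$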
The simplex algorithm can now be applied to (PA) starting from the basic realizable solution y = b.
– First case: max z″ (y) < 0. At the optimum of (PA), there exists some i ∈ {1, 2, . . . , m} with yi > 0. In this case, (P ) does not have any realizable solutions.
– Second case: max z″ (y) = 0. The optimal solution of (PA) can be used as an initial basic realizable solution for applying the simplex algorithm to (P ). The objective function of (P ) is expressed in terms of non-basic variables, then the simplex algorithm is applied to (P ).
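As an illustration, the Phase I procedure described above can be sketched in a few lines of Python. This is a minimal, unoptimized sketch, not the book's algorithm: the function name `phase_one` and the dense list-of-lists tableau layout are ours. It builds the auxiliary tableau [A | I | b], prices out the artificial basis (B = I, cB = 1), and pivots with Bland's rule until no reduced cost is negative; (P) is realizable exactly when the optimal auxiliary cost is zero.

```python
def phase_one(A, b, tol=1e-9):
    """Phase I sketch: solve min sum(y) s.t. A x + I y = b, x >= 0, y >= 0,
    assuming b >= 0.  Returns (feasible, basis, tableau)."""
    m, n = len(A), len(A[0])
    # Tableau rows [A | I | b]; the artificial variables n..n+m-1 are basic.
    T = [list(map(float, A[i])) + [float(i == j) for j in range(m)] + [float(b[i])]
         for i in range(m)]
    basis = list(range(n, n + m))
    # Reduced costs c_j - c_B^T B^{-1} A_j with B = I and c_B = (1, ..., 1):
    cbar = [(1.0 if j >= n else 0.0) - sum(T[i][j] for i in range(m))
            for j in range(n + m)]
    obj = sum(float(bi) for bi in b)        # current value of sum(y)
    while True:
        # Bland's rule: entering variable = smallest index with negative reduced cost.
        enter = next((j for j in range(n + m) if cbar[j] < -tol), None)
        if enter is None:
            break
        # Ratio test; ties broken by smallest basic-variable index (Bland).
        ratios = [(T[i][-1] / T[i][enter], basis[i], i)
                  for i in range(m) if T[i][enter] > tol]
        if not ratios:
            raise ValueError("unbounded (cannot happen here since sum(y) >= 0)")
        _, _, r = min(ratios)
        piv = T[r][enter]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][enter]:
                f = T[i][enter]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
        f = cbar[enter]
        obj += f * T[r][-1]                 # z_new = z_old + cbar[enter] * theta
        cbar = [a - f * p for a, p in zip(cbar, T[r][:-1])]
        basis[r] = enter
    return obj <= tol, basis, T
```

For instance, the single constraint x1 + x2 = 2 is realizable, whereas the pair x1 = 1, x1 = 2 is not; `phase_one` distinguishes the two cases by the optimal auxiliary cost.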
REMARK.– For a minimization problem, we add
Let us go into more detail for a minimization problem. The initial tableau of the auxiliary problem is as follows:
where B = I, cB = (1, 1, . . . , 1)T, and the matrix A and vector b relate to the auxiliary program. For the original variables xi, we have ci = 0.
If (x*, y*) is a zero-cost optimal solution of the auxiliary problem, then any artificial variable remaining in the basis is necessarily zero, so the solution is degenerate.
Suppose that the kth basic variable is artificial. Consider the kth row of the tableau and look for a non-zero element in a column j, where j is the index of a variable of the original problem. If such an element exists, we pivot on it: the artificial variable leaves the basis and xj enters. If no such element exists, the matrix A does not have full rank, and the row in question corresponds to a redundant constraint, which may be deleted.
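The step just described can be sketched as follows (a minimal illustration in Python; the function name `drive_out_artificial` and the dense tableau layout are hypothetical, not the book's notation):

```python
def drive_out_artificial(T, basis, k, n_orig, tol=1e-9):
    """Try to pivot the artificial variable out of row k of tableau T.
    T is a list of m rows [coefficients | rhs]; basis[i] is the index of the
    i-th basic variable; columns 0..n_orig-1 belong to the original problem.
    Returns the entering column j, or None if row k is a redundant constraint."""
    row = T[k]
    j = next((c for c in range(n_orig) if abs(row[c]) > tol), None)
    if j is None:
        return None                      # redundant row: it can be dropped
    piv = row[j]
    T[k] = [v / piv for v in row]        # normalize the pivot row
    for i in range(len(T)):
        if i != k and abs(T[i][j]) > tol:
            f = T[i][j]
            T[i] = [a - f * p for a, p in zip(T[i], T[k])]
    basis[k] = j                         # the artificial variable leaves, x_j enters
    return j
```

Since the artificial variable being replaced has value zero, this pivot changes the basis without changing the (degenerate) solution.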
Once the optimal tableau of the auxiliary problem has been obtained and all artificial variables have left the basis, we obtain the initial tableau of the original problem P1 by deleting the columns that relate to the artificial variables and calculating the initial reduced costs:
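In matrix terms, with B the basis produced by Phase I, c the cost vector of (P), and cB its basic part, these quantities are presumably given by the usual formulas

$$\bar{c}_j = c_j - c_B^T B^{-1} A_j, \qquad z_0 = c_B^T B^{-1} b.$$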
EXAMPLE 1.8.– Consider the following linear programming problem:
[1.10]
We will first perform Phase I of the simplex algorithm. After adding artificial variables, we obtain the tableau as follows:
There are no longer any artificial variables in the basis. We have found the optimal tableau of the auxiliary problem, and therefore an admissible basic solution of the initial problem. Since Phase I is now complete, we delete the columns corresponding to the artificial variables and compute the reduced costs to obtain the following tableau:
This tableau is optimal. The optimal solution is x1 = 1, x2 = 0, x3 = 0 and x4 = 1, giving an optimal value of z = −1.
1.6.3. Degeneracy and cycling
Suppose that the current basic admissible solution is degenerate (i.e. at least one basic variable is zero), and let us recall an unfavorable scenario that can occur when the simplex algorithm is iterated. Performing a change of basis yields a value of the cost function that is at least as good as the previous value. The method is only guaranteed to converge if the improvement is strict (z* < z0 for a minimization problem); if the previous basic solution is degenerate, it might be the case that z* = z0.
If this phenomenon occurs multiple times over consecutive iterations, the algorithm can return to a basis that was already considered earlier. In other words, it returns to the same vertex twice. This is called cycling.
Although degeneracy is a frequently encountered phenomenon in linear programming, cycling is not. Some examples have been given in the literature [GAS 75]. Despite the rarity of cycling, it would be desirable to specify a method that avoids it and therefore guarantees that the simplex method will converge. There are several possible choices for a rule that avoids the inconvenience of cycling.
In the past, the most common technique was the so-called method of small perturbations. This method applies an infinitesimally small modification to the right-hand side vector b in order to force it to be expressible as a linear combination of m basic vectors with strictly positive coefficients in any basis. Each vertex is then defined as the intersection of exactly n hyperplanes [HIL 90].
More recently, Bland [BLA 77] suggested modifying the rules for choosing the change of basis. Bland’s rule proceeds as follows:
– for the variable entering the basis, choose the smallest index j such that the reduced cost is negative;
– for the variable leaving the basis, if there is a tie in the ratio test θ*, choose the variable with the smallest index.
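Bland's rule is short enough to state in code. The following sketch assumes a minimization problem; the helper names `bland_entering` and `bland_leaving` are ours, not standard terminology:

```python
def bland_entering(reduced_costs, tol=1e-9):
    """Smallest index j with negative reduced cost (minimization), else None."""
    return next((j for j, c in enumerate(reduced_costs) if c < -tol), None)

def bland_leaving(col, rhs, basis, tol=1e-9):
    """Ratio test theta* = min rhs[i]/col[i] over rows with col[i] > 0; ties are
    broken by the smallest basic-variable index.  Returns the pivot row, or
    None if no entry of the entering column is positive (unbounded direction)."""
    best = None
    for i, (a, b) in enumerate(zip(col, rhs)):
        if a > tol:
            key = (b / a, basis[i])      # (ratio, basic index): min breaks ties
            if best is None or key < best[0]:
                best = (key, i)
    return None if best is None else best[1]
```

Because both choices are made by smallest index, no basis can repeat, which rules out cycling.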
1.6.4. Geometric structure of realizable solutions
Earlier, when we solved linear programs graphically, the optimal solutions were on the boundary of the convex set of realizable solutions. If there was a unique optimal solution, it was an extreme point.
Recall the canonical form of a linear program:
where the matrix A is assumed to be an m × n matrix of rank m (< n). The standard form is then given by
It is possible to show the following result.
Let Γ (respectively, Σ) be the set of feasible solutions of (P) (respectively of (Ps)):
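Assuming the conventions above (canonical form Ax ≤ b, x ≥ 0, with the standard form obtained by adding one slack variable per constraint), these sets are presumably

$$\Gamma = \{\, x \in \mathbb{R}^n : Ax \le b,\ x \ge 0 \,\}, \qquad \Sigma = \{\, (x, e) \in \mathbb{R}^{n+m} : Ax + e = b,\ x \ge 0,\ e \ge 0 \,\}.$$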