def dist(U, V):
    "Euclidean distance between the points U and V"
    return np.sqrt((U[0]-V[0])**2 + (U[1]-V[1])**2)

P = [31, 45]
f = 12*dist(P,A) + 13*dist(P,B) + 11*dist(P,C) + 18*dist(P,D)
print(f)

\nabla \mathrm{dist}(U,V) = \frac{1}{\mathrm{dist}(U,V)} \begin{pmatrix} u_0 - v_0 \\ u_1 - v_1 \end{pmatrix} \qquad (2.37)

      then,

\nabla f(x) = 12\,\nabla \mathrm{dist}(x,A) + 13\,\nabla \mathrm{dist}(x,B) + 11\,\nabla \mathrm{dist}(x,C) + 18\,\nabla \mathrm{dist}(x,D) \qquad (2.38)

      These functions are easily defined in Python as follows:

def g_d(U, V):
    "gradient of the distance"
    # np.array is required: a plain Python list cannot be divided by a scalar
    return np.array([U[0]-V[0], U[1]-V[1]])/dist(U, V)

def grad_f(E):
    "gradient of the objective function"
    return 12*g_d(E,A) + 13*g_d(E,B) + 11*g_d(E,C) + 18*g_d(E,D)
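As a quick sanity check (not part of the original listing), grad_f can be compared with a central finite-difference approximation of the objective at an arbitrary test point; the helper f_total and the point X below are introduced here only for this purpose:

import numpy as np   # already available in the book's examples

def f_total(X):
    "total weighted distance (the same objective as above)"
    return 12*dist(X,A) + 13*dist(X,B) + 11*dist(X,C) + 18*dist(X,D)

h = 1e-6
X = np.array([20.0, 20.0])
fd = [(f_total(X + h*e) - f_total(X - h*e))/(2*h)
      for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))]
print(grad_f(X))   # analytical gradient, Equation (2.38)
print(fd)          # finite-difference approximation; both should agree closely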

Now the gradient method consists of applying the iteration given by Equation (2.30), as presented below:

t = 0.5
E = np.array([10, 10])
for iter in range(50):
    E = E - t*grad_f(E)
f = 12*dist(E,A) + 13*dist(E,B) + 11*dist(E,C) + 18*dist(E,D)
print("Position:", E)
print("Gradient", grad_f(E))
print("Cost", f)

In this case, t = 0.5 and an initial point E = (10, 10) with 50 iterations were enough to find the solution. The reader is invited to try other values and analyze their effect on the algorithm's convergence, for instance with the sketch below.
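One possible experiment (a sketch, not taken from the book; the step sizes are arbitrary choices) is to sweep several values of t and compare the gradient norm reached after 50 iterations:

# effect of the step size t on the convergence of the gradient method
for t in [0.1, 0.5, 1.0, 2.0]:
    E = np.array([10.0, 10.0])
    for iter in range(50):
        E = E - t*grad_f(E)
    print("t =", t, "|gradient| =", np.linalg.norm(grad_f(E)))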

      Example 2.7

The convergence of the algorithm can be visualized by using the module Matplotlib as follows:

import matplotlib.pyplot as plt
t = 0.5
conv = []
E = np.array([10.0, 10.0])   # must be a float array: += on a plain list would extend it, not update it
for iter in range(50):
    E += -t*grad_f(E)
    conv += [np.linalg.norm(grad_f(E))]
plt.semilogy(conv)
plt.grid()
plt.xlabel("Iteration")
plt.ylabel("|Gradient|")
plt.show()

The resulting plot shows the norm of the gradient decreasing until $\|\nabla f\| \leq 10^{-4}$.

      Figure 2.7 Convergence of the gradient method.

Notice that the update was simplified by the operator +=. In general, a statement such as x = x + 1 is equivalent to x += 1, as the short example below illustrates. More details about this aspect are presented in Appendix C.
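A short illustration, added here for clarity (the array case anticipates the in-place behavior used above):

import numpy as np
x = 1
x = x + 1      # x is now 2
x += 1         # equivalent shorthand; x is now 3
v = np.array([1.0, 2.0])
v += 1.0       # for NumPy arrays, += also updates v in place
print(x, v)    # 3 [2. 3.]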

Reality imposes physical constraints on the problems, and these constraints must be considered in the model. For example, an optimization model may include equality constraints, as presented below:

\begin{aligned} \min\; & f(x) \\ & g(x) = a \end{aligned} \qquad (2.39)

To solve this problem, a function called the Lagrangian is defined as follows:

\mathcal{L}(x, \lambda) = f(x) + \lambda\,(a - g(x)) \qquad (2.40)

This new function depends on the original decision variables x and on a new variable λ, known as the Lagrange multiplier or dual variable. By means of the Lagrangian function, the constrained optimization problem is transformed into an unconstrained optimization problem that can be solved numerically, namely:

\frac{\partial \mathcal{L}}{\partial x} = \frac{\partial f}{\partial x} - \lambda \frac{\partial g}{\partial x} = 0 \qquad (2.41)

\frac{\partial \mathcal{L}}{\partial \lambda} = a - g(x) = 0 \qquad (2.42)

By a small abuse of notation, ∂f/∂x is used instead of ∇f, which is the formal representation for the n-dimensional case (the same applies to the Lagrangian and to g). Notice that the optimality conditions of the Lagrangian imply an optimal solution of f but also feasibility in terms of the constraint. A small worked example is sketched below.
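As an illustration (the particular problem is chosen here and does not appear in the book), conditions (2.41) and (2.42) can be solved symbolically for min f(x) = x_0^2 + x_1^2 subject to g(x) = x_0 + x_1 = 1:

import sympy as sp

# hypothetical example: f = x0^2 + x1^2, g = x0 + x1, a = 1
x0, x1, lam = sp.symbols('x0 x1 lambda')
f = x0**2 + x1**2
g = x0 + x1
a = 1
L = f + lam*(a - g)              # Lagrangian, Equation (2.40)
equations = [sp.diff(L, x0),     # stationarity in x, Equation (2.41)
             sp.diff(L, x1),
             sp.diff(L, lam)]    # feasibility, Equation (2.42)
print(sp.solve(equations, [x0, x1, lam]))   # {x0: 1/2, x1: 1/2, lambda: 1}

The solution x_0 = x_1 = 1/2 is both optimal and feasible, and the multiplier λ = 1 equals the sensitivity of the optimal value with respect to a (here f* = a²/2, so df*/da = a = 1), anticipating the interpretation discussed next.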

The first condition implies that the gradient of the objective function must be parallel to the gradient of the constraint, and the Lagrange multiplier is the proportionality constant. Besides this geometrical interpretation, Lagrange multipliers have another interesting interpretation. Suppose a local optimum x̃ is found for a constrained