Simplex Method for Canonical LP Linear Programs and Simplex Method
Equations (5.10) and (5.11) show that the Simplex tableau contains all the information needed to perform Algorithm 5.1. If the reduced cost ¯cj = cj − cTBB−1A·j ≥ 0 for every nonbasic variable, the current BFS ¯x is optimal.
Otherwise, choose an arbitrary xs with ¯cs < 0. If ¯A·s ≤ 0, then the objective is unbounded below; terminate. Otherwise, compute the ratios ¯bi/¯ais for the i's with ¯ais > 0. Choose any row, say i0, attaining the minimum ratio ∆, and let xr be the corresponding basic variable. Pivot on ¯ai0s to enter xs and drop xr, obtaining a new BFS ˆx. This completes a single iteration of Algorithm 5.1.
Exercise 5.6
The objective will decrease by ∆ × |¯cs|.
The new basis B′ is obtained by replacing A·r of B with A·s. Of course, pivoting on ¯ai0s of (5.12) and multiplying (5.7) by the inverse of B′ result in the same Simplex tableau for the new BFS ˆx.
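The iteration just described can be sketched in plain Python. This is a sketch of my own, assuming a dense tableau stored as lists of rows (row 0 the objective row, rows 1..m the constraints); the function name and layout are illustrative, not the text's code.

```python
def simplex_iteration(tableau, basis):
    """One iteration of the canonical (min) Simplex method on a tableau.

    tableau: row 0 is the objective row [reduced costs..., -z value];
             rows 1..m are constraint rows [coefficients..., RHS].
    basis:   basis[i] is the index of the basic variable of row i+1.
    Returns "optimal", "unbounded", or "pivoted".
    """
    n = len(tableau[0]) - 1
    # Entering variable: any column with negative reduced cost.
    s = next((j for j in range(n) if tableau[0][j] < -1e-12), None)
    if s is None:
        return "optimal"
    # Ratio test over rows with a positive entry in column s.
    rows = [i for i in range(1, len(tableau)) if tableau[i][s] > 1e-12]
    if not rows:
        return "unbounded"  # whole column <= 0: objective unbounded below
    i0 = min(rows, key=lambda i: tableau[i][-1] / tableau[i][s])
    # Pivot: scale row i0, then eliminate column s from every other row.
    piv = tableau[i0][s]
    tableau[i0] = [a / piv for a in tableau[i0]]
    for i in range(len(tableau)):
        if i != i0:
            factor = tableau[i][s]
            tableau[i] = [a - factor * p
                          for a, p in zip(tableau[i], tableau[i0])]
    basis[i0 - 1] = s
    return "pivoted"
```

Each call performs one pass of the entering-variable choice, the ratio test, and the pivot.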
Example 5.7 Consider LP
min z = −2x1 +x2 −x3
sub.to −x1 +x2 +x3 = 1
2x1 +x2 −x3 +x4 = 2
x1 ≥ 0 x2 ≥ 0 x3 ≥ 0 x4≥ 0
(5.13)
whose objective is transformed into the form of a constraint:
min −z −2x1 +x2 −x3 = 0
sub.to −x1 +x2 +x3 = 1
2x1 +x2 −x3 +x4 = 2
x1 ≥ 0 x2 ≥ 0 x3≥ 0 x4 ≥ 0.
(5.14)
Suppose we know B = [A·3, A·4] is a feasible basis. Then by pivot operations on A13 and ¯A24 in turn we get
min −z −3x1 +2x2 = 1
sub.to −x1 +x2 +x3 = 1
x1 +2x2 +x4 = 3
x1 ≥ 0 x2 ≥ 0 x3≥ 0 x4 ≥ 0.
(5.15)
The objective function is now z = −3x1 + 2x2 −1. This is the same as the original objective function z = −2x1 +x2 −x3
on the feasible solution set. Since x1 = x2 = 0 at the current BFS, the current objective value is −1.
We can bring about the same effects by multiplying the initial tableau by the inverse of the matrix

    ˜B = [ 1  c3  c4 ]   [ 1  −1  0 ]
         [ 0 A·3 A·4 ] = [ 0   1  0 ] ,
                         [ 0  −1  1 ]

whose inverse is

    ˜B−1 = [ 1  1  0 ]
           [ 0  1  0 ] .
           [ 0  1  1 ]

Multiplying ˜B−1 by the initial tableau

Basis −z x1 x2 x3 x4 RHS
−z 1 −2 1 −1 0 0
− 0 −1 1 1 0 1
− 0 2 1 −1 1 2

produces the tableau of the new BFS:

Basis −z x1 x2 x3 x4 RHS
−z 1 −3 2 0 0 1
x3 0 −1 1 1 0 1
x4 0 1 2 0 1 3
(5.16)
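The claim that multiplying by ˜B−1 yields tableau (5.16) is easy to check numerically. The helper `matmul` below is a plain-Python stand-in for matrix multiplication, introduced only for this verification:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

B_tilde_inv = [[1, 1, 0],
               [0, 1, 0],
               [0, 1, 1]]

# Initial tableau of (5.14): columns -z, x1, x2, x3, x4, RHS.
T0 = [[1, -2, 1, -1, 0, 0],
      [0, -1, 1,  1, 0, 1],
      [0,  2, 1, -1, 1, 2]]

T1 = matmul(B_tilde_inv, T0)
# T1 reproduces tableau (5.16) row by row.
```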
Then we can apply the Simplex method to (5.16). Only x1 is eligible to enter the basis. The minimum ratio 3 is attained in row 2; hence x4 is the leaving variable. Pivot on ¯a21 = 1 to get
Basis −z x1 x2 x3 x4 RHS Ratio
−z 1 −3 2 0 0 1
x3 0 −1 1 1 0 1 ∞
x4 0 1 2 0 1 3 3
−z 1 0 8 0 3 10
x3 0 0 3 1 1 4
x1 0 1 2 0 1 3
In the second tableau, the reduced costs are all nonnegative. Hence the new BFS ˆx = [3, 0, 4, 0]T is optimal.
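A quick arithmetic check of the claimed optimum (a verification of my own, not part of the original development):

```python
# Optimal BFS read from the final tableau: x = (3, 0, 4, 0).
x1, x2, x3, x4 = 3, 0, 4, 0

c1 = -x1 + x2 + x3            # first constraint of (5.13); should equal 1
c2 = 2*x1 + x2 - x3 + x4      # second constraint; should equal 2
z = -2*x1 + x2 - x3           # objective; the -z row of the tableau has RHS 10
```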
Example 5.8
Let’s try a little larger LP, a maximization problem. The only difference is to choose, as the entering variable, a variable with a positive reduced cost.
max z = 3x1 +2x2
sub.to 2x1 +x2 ≤ 100
x1 +x2 ≤ 80
x1 ≤ 40
x1≥ 0 x2≥ 0.
Standardization.
max z = 3x1 +2x2
sub.to 2x1 +x2 +x3 = 100
x1 +x2 +x4 = 80
x1 +x5 = 40
Basis −z x1 x2 x3 x4 x5 RHS
−z 1 3 2 0 0 0 0
x3 0 2 1 1 0 0 100
x4 0 1 1 0 1 0 80
x5 0 1 0 0 0 1 40
−z 1 0 2 0 0 -3 -120
x3 0 0 1 1 0 -2 20
x4 0 0 1 0 1 -1 40
x1 0 1 0 0 0 1 40
−z 1 0 0 -2 0 1 -160
x2 0 0 1 1 0 -2 20
x4 0 0 0 -1 1 1 20
x1 0 1 0 0 0 1 40
−z 1 0 0 -1 -1 0 -180
x2 0 0 1 -1 +2 0 60
x5 0 0 0 -1 1 1 20
x1 0 1 0 1 -1 0 20
The BFSs correspond to the vertices of the polyhedron in the original x1-x2 space.
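Since the BFSs correspond to vertices, the optimum of Example 5.8 can be cross-checked by brute-force vertex enumeration in the x1-x2 plane. This is a sketch written for this small instance; the constraint list simply restates the original inequalities (including nonnegativity):

```python
from itertools import combinations

# Constraints a*x1 + b*x2 <= rhs of Example 5.8, including x1, x2 >= 0.
cons = [(2, 1, 100), (1, 1, 80), (1, 0, 40), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection of the two constraint boundary lines, or None if parallel."""
    (a, b, e), (c, d, f) = c1, c2
    det = a * d - b * c
    if det == 0:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Keep only intersection points satisfying every constraint: the vertices.
vertices = []
for c1, c2 in combinations(cons, 2):
    p = intersect(c1, c2)
    if p and all(a * p[0] + b * p[1] <= rhs + 1e-9 for a, b, rhs in cons):
        vertices.append(p)

best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
# best is the vertex (20, 60), matching the final tableau's optimum z = 180.
```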
Exercise 5.9
Possible cases in tableau
An optimal solution exists
This is the case when all the reduced costs ¯cj ≥ 0.
Example 5.10
Unique optimal solution: As in Example 5.8, if the reduced costs of the nonbasic variables are all positive, the current BFS is the unique optimal solution. It is left to the students to show this by the CS theorem.
Example 5.11
Multiple optimal solutions: If an optimal BFS ¯x has a nonbasic variable with reduced cost 0, that variable can be entered into the basis to obtain a BFS ˆx with the same objective value as the optimal solution. Hence, if the minimum ratio is positive, ˆx is an alternative optimal solution. In fact, every point on the edge [¯x, ˆx] is an optimal solution.
Compute an optimal edge of the LP
Basis −z x1 x2 x3 x4 x5 x6 x7 RHS
−z 1 60 35 20 0 0 0 0 0
x4 0 8 6 1 1 0 0 0 48
x5 0 4 2 3/2 0 1 0 0 20
x6 0 2 3/2 1/2 0 0 1 0 8
x7 0 0 1 0 0 0 0 1 5
−z 1 0 -10 5 0 0 -30 0 -240
x4 0 0 0 -1 1 0 -4 0 16
x5 0 0 -1 1/2 0 1 -2 0 4
x1 0 1 3/4 1/4 0 0 1/2 0 4
x7 0 0 1 0 0 0 0 1 5
−z 1 0 0 0 0 -10 -10 0 -280
x4 0 0 -2 0 1 2 -8 0 24
x3 0 0 -2 1 0 2 -4 0 8
x1 0 1 5/4 0 0 −1/2 3/2 0 2
x7 0 0 1 0 0 0 0 1 3
In the last tableau all reduced costs are nonpositive, so (for this maximization problem) the current BFS ¯x = [2, 0, 8, 24, 0, 0, 3]T is optimal with objective value 280. The nonbasic variable x2 has reduced cost 0; entering x2 (minimum ratio 8/5, attained in the x1 row) yields an alternative optimal BFS, and the segment between the two optimal vertices is an optimal edge.
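The alternative optimal vertex obtained by entering x2 (a pivot I carried out on the 5/4 entry of the x1 row, so the values below are my own computation) can be verified against the original constraints, read off the first tableau where x4, x5, x6, x7 are the slacks:

```python
# Original constraints: 8x1+6x2+x3 <= 48, 4x1+2x2+1.5x3 <= 20,
#                       2x1+1.5x2+0.5x3 <= 8, x2 <= 5, x >= 0.
def feasible(x1, x2, x3):
    return (8*x1 + 6*x2 + x3 <= 48 + 1e-9 and
            4*x1 + 2*x2 + 1.5*x3 <= 20 + 1e-9 and
            2*x1 + 1.5*x2 + 0.5*x3 <= 8 + 1e-9 and
            x2 <= 5 + 1e-9 and min(x1, x2, x3) >= -1e-9)

def z(x1, x2, x3):
    return 60*x1 + 35*x2 + 20*x3

v1 = (2, 0, 8)           # optimal BFS of the last tableau
v2 = (0, 8/5, 56/5)      # BFS after entering x2 (my pivot computation)
mid = tuple((a + b) / 2 for a, b in zip(v1, v2))

# Both vertices, and any convex combination, attain the optimal value 280.
```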
Unbounded objective value
Basis x1 x2 x3 x4 x5 x6 RHS
−z 36 30 -3 -4 0 0 0
x5 1 1 -1 0 1 0 5
x6 6 5 0 -1 0 1 10
−z 0 0 -3 2 0 -6 -60
x5 0 1/6 -1 1/6 1 −1/6 10/3
x1 1 5/6 0 −1/6 0 1/6 5/3
−z 0 -2 9 0 -12 -4 -100
x4 0 1 -6 1 6 -1 20
x1 1 1 -1 0 1 0 5
In the last tableau, entering x3 improves the objective function. Since ¯ai3 ≤ 0 ∀ i, x3 can increase arbitrarily, so that the objective diverges to +∞:
x(λ) = (5, 0, 0, 20, 0, 0)T + λ (1, 0, 1, 6, 0, 0)T, z(λ) = 100 + 9λ, λ ≥ 0.
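The feasibility of this ray and the divergence of the objective are easy to confirm with the equality-form constraints (x5, x6 are the slacks visible in the first tableau); a check of my own:

```python
def point_on_ray(lam):
    """x(lam) = (5,0,0,20,0,0) + lam * (1,0,1,6,0,0)."""
    base = (5, 0, 0, 20, 0, 0)
    direction = (1, 0, 1, 6, 0, 0)
    return tuple(b + lam * d for b, d in zip(base, direction))

def check(lam):
    x1, x2, x3, x4, x5, x6 = point_on_ray(lam)
    assert x1 + x2 - x3 + x5 == 5           # first constraint row
    assert 6*x1 + 5*x2 - x4 + x6 == 10      # second constraint row
    assert min(x1, x2, x3, x4, x5, x6) >= 0  # nonnegativity along the ray
    return 36*x1 + 30*x2 - 3*x3 - 4*x4      # objective z(lam) = 100 + 9*lam
```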
Exercise 5.12
Consider the following linear system.
x1 +x2 −x3 ≤ 5
6x1 +5x2 −x4 ≤ 10
x1 ≥ 0 x2≥ 0 x3 ≥ 0 x4 ≥ 0
(1) Compute an optimal solution when the objective is max z = 3x1 + (5/2)x2 − 3x3 − (1/3)x4.
(2) Repeat for the objective max z = 36x1+ 30x2− 3x3− 4x4.
Exercise 5.13
Consider a minimization LP with initial basic variables xB = [x2, x3, x1]T.
Basis −z x1 x2 x3 x4 x5 x6 x7 RHS
−z 1 0 0 0 δ 3 γ ξ 0
x2 0 0 1 0 α 1 0 3 β
x3 0 0 0 1 −2 2 θ −1 2
x1 0 1 0 0 0 −1 2 1 3
Compute the ranges of parameters satisfying each of the following conditions.
(1) We can apply the Simplex method to the current tableau.
(2) The current basic solution is feasible but not optimal.
(3) The objective value z is unbounded below.
(4) The current solution is a BFS; x6 is eligible to enter and, if it enters, x3 will be dropped from the basis.
(5) The current solution is a BFS; x7 is an eligible entering variable but entering it does not change the solution.
(6) The row with RHS β makes the problem infeasible.
Canonical Simplex method: Summary
Two phase Simplex method
Now we extend the Simplex method, Algorithm 5.1, to solve a standard LP in general. It amounts to adding a preliminary phase that secures a BFS for Algorithm 5.1. The idea will be explained through an example.
Example 6.1
max z = 2x1 +3x2
sub. to 1/2x1 +1/4x2 ≤ 4
x1 +3x2 ≥ 20
x1 +x2 = 10
x1≥ 0 x2 ≥ 0.
Standardization
max z = 2x1 +3x2
sub. to 1/2x1 +1/4x2 +x3 = 4
x1 +3x2 −x4 = 20
x1 +x2 = 10
x1≥ 0 x2≥ 0 x3 ≥ 0 x4≥ 0.
Not canonical; it does not have a full set of basic variables.
To convert it into a canonical form, we simply insert as many new variables, called artificial variables, as necessary; one for each of the last two constraints in this case.
max z = 2x1 +3x2
sub. to 1/2x1 +1/4x2 +x3 = 4
x1 +3x2 −x4 +x5 = 20
x1 +x2 +x6 = 10
x1≥ 0 x2≥ 0 x3≥ 0 x4≥ 0 x5≥ 0 x6≥ 0.
Unlike standardization, this conversion is not an equivalence. Its utility lies in the very fact that the Simplex method is applicable to it. If we can make the artificial variables nonbasic during an application of the Simplex method, we will have a full set of basic variables from the original variables.
Naturally, we consider an LP, called the phase I problem, of minimizing the sum of the artificial variables subject to the constraints.
min w = x5 +x6
sub. to 1/2x1 +1/4x2 +x3 = 4
x1 +3x2 −x4 +x5 = 20
x1 +x2 +x6 = 10
x1≥ 0 x2≥ 0 x3≥ 0 x4≥ 0 x5≥ 0 x6≥ 0.
Suppose the original problem is feasible. Then the phase I problem also has a feasible solution in which the artificial variables are all zero. Hence its optimal objective value is 0.
Conversely, if the optimal objective value of the phase I problem is 0, each artificial variable should be 0 and the values of the remaining variables form a feasible solution of the original problem.
Theorem 6.2
A linear program is feasible if and only if the optimal objective value of the phase I problem is 0.
Example 6.3
We continue with the example. To complete the canonical form, we eliminate the artificial variables from the objective row to get
Basis x1 x2 x3 x4 x5 x6 RHS
−w 0 0 0 0 1 1 0
x3 1/2 1/4 1 0 0 0 4
x5 1 3 0 -1 1 0 20
x6 1 1 0 0 0 1 10
−w -2 -4 0 1 0 0 -30
x3 1/2 1/4 1 0 0 0 4
x5 1 3 0 -1 1 0 20
x6 1 1 0 0 0 1 10
−w 0 -3 4 1 0 0 -14
x1 1 1/2 2 0 0 0 8
x5 0 5/2 -2 -1 1 0 12
x6 0 1/2 -2 0 0 1 2
−w 0 0 -8 1 0 6 -2
x1 1 0 4 0 0 -1 6
x5 0 0 8 -1 1 -5 2
x2 0 1 -4 0 0 2 4
−w 0 0 0 0 1 1 0
x1 1 0 0 1/2 −1/2 3/2 5
x3 0 0 1 −1/8 1/8 −5/8 1/4
x2 0 1 0 −1/2 1/2 −1/2 5
Since the optimal value of the phase I problem is 0, the original problem is feasible. As the artificial variables are all zero in the optimal tableau, we have obtained a BFS of the original problem.
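We can confirm arithmetically that the phase I optimum yields a feasible point of the standardized problem; the values below are read from the final phase I tableau:

```python
# BFS from the final phase I tableau: x1 = 5, x2 = 5, x3 = 1/4,
# x4 = 0 (nonbasic); artificials x5 = x6 = 0, so w = 0.
x1, x2, x3, x4 = 5, 5, 0.25, 0

lhs1 = 0.5 * x1 + 0.25 * x2 + x3   # (1/2)x1 + (1/4)x2 + x3, should be 4
lhs2 = x1 + 3 * x2 - x4            # x1 + 3x2 - x4, should be 20
lhs3 = x1 + x2                     # x1 + x2, should be 10
```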
Since we have secured a BFS, we can start phase II: solving the original problem. To obtain a complete canonical form, we need to eliminate the basic variables from the objective row. (We call this “pricing out.”)
Basis x1 x2 x3 x4 RHS
−z 2 3 0 0 0
x1 1 0 0 1/2 5
x3 0 0 1 −1/8 1/4
x2 0 1 0 −1/2 5
−z 0 0 0 1/2 -25
x1 1 0 0 1/2 5
x3 0 0 1 −1/8 1/4
x2 0 1 0 −1/2 5
−z -1 0 0 0 -30
x4 2 0 0 1 10
x3 1/4 0 1 0 3/2
x2 1 1 0 0 10
All reduced costs are now nonpositive, so the BFS (x1, x2, x3, x4) = (0, 10, 3/2, 10) is optimal with objective value z = 30.
Remark 6.4
Even when phase I terminates with a zero optimal value, artificial variables may remain basic. If the row of a basic artificial variable has no nonzero coefficient in the columns of the original variables, the original constraint is redundant and the row can be eliminated. Otherwise, we can pivot on such a nonzero element to replace the artificial basic variable with an original variable; since the artificial variable has value 0, this does not change the phase I objective value. Repeating the procedure if necessary, we obtain a BFS in the original variables.
Exercise 6.5
Determine if the following LP is feasible.
x1 −x2 +x3 = 4
2x1 +x2 +4x3 = 7
x1≥ 0 x2≥ 0 x3≥ 0.
Two-phase Simplex method: Summary
Principles of interior point method*
Duality offers design principles for algorithms solving LPs. For instance, the Simplex method maintains the feasibility of basic solutions while pursuing dual feasibility.
Now we discuss the principle of the interior point method using duality. Interior point method is a collective term for algorithms that generate a sequence of interior feasible points of the polyhedron converging to a point satisfying the complementary slackness condition.
In 1984, Karmarkar invented an interior point method that is theoretically efficient, namely, one whose computation time is bounded by a polynomial function of the problem size. Subsequently, many variants have been proposed to accelerate it both theoretically and practically. Here we briefly discuss a logarithmic-barrier interpretation of the interior point method.
Recall the primal-dual pair of standard LP.
min cTx (7.17)
sub.to Ax = b x ≥ 0,
max yTb (7.18)
sub.to yTA + sT = cT s ≥ 0.
We assume (7.17) and (7.18) have interior feasible solutions x and (y, s) such that x > 0 and s > 0. (We can transform an LP to satisfy the assumption.)
The complementary slackness condition is
Ax = b, (7.19)
x ≥ 0, ATy + s = c,
s ≥ 0, xTs = 0.
Thus every algorithm for solving LP needs to pursue x and (y, s) satisfying (7.19).
We now consider the following barrier problems associated with (7.17) and (7.18).
min cTx − µ Σnj=1 log xj (7.20)
sub.to Ax = b,

max yTb + µ Σnj=1 log sj (7.21)
sub.to yTA + sT = cT.
For a fixed µ > 0, the second term of (7.20) grows fast as any xj approaches 0 from the interior. During minimization of (7.20), this term, called a logarithmic barrier, forces the solution to stay in the positive orthant. Also, we can see that as µ ↓ 0 the problem gets closer to the original LP.
By standard nonlinear programming optimality conditions, for each µ > 0, x(µ) and (y(µ), s(µ)) are optimal solutions of (7.20) and (7.21), respectively, if and only if the following conditions hold, where X(µ) and S(µ) denote the diagonal matrices with diagonal entries x(µ) and s(µ), respectively, and e is the vector of 1’s.
Ax(µ) = b, (7.22)
x(µ) ≥ 0, ATy(µ) + s(µ) = c,
s(µ) ≥ 0, X(µ)S(µ)e = µe.
It is worth noting that (7.22) reduces to (7.19) when µ = 0. Indeed, we can show that x(µ) is unique for each µ > 0 and

lim µ→0+ x(µ) = x∗.

Intuitively, this is because the weight of the log-barrier term gets smaller as µ ↓ 0.
Similarly,

lim µ→0+ y(µ) = y∗, lim µ→0+ s(µ) = s∗.
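For a toy LP the central path can be traced explicitly. Consider min x1 + 2x2 subject to x1 + x2 = 1, x ≥ 0 — an example of my own choosing, with optimal solution x∗ = (1, 0). Eliminating x2 = 1 − x1, the barrier problem (7.20) becomes a one-dimensional minimization whose stationarity condition can be solved by bisection:

```python
def x_of_mu(mu, tol=1e-12):
    """Barrier minimizer x1(mu) of x1 + 2(1-x1) - mu*ln(x1) - mu*ln(1-x1).

    The derivative phi'(t) = -1 - mu/t + mu/(1-t) is strictly increasing
    on (0, 1), so its unique root is found by bisection.
    """
    def dphi(t):
        return -1 - mu / t + mu / (1 - t)

    lo, hi = tol, 1 - tol
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if dphi(mid) < 0:
            lo = mid          # left of the root: derivative still negative
        else:
            hi = mid
    return lo

# As mu decreases to 0, x1(mu) increases toward x1* = 1
# (hence x2(mu) = 1 - x1(mu) tends to x2* = 0).
path = [x_of_mu(mu) for mu in (1.0, 0.1, 0.01, 0.001)]
```

The sequence `path` illustrates the limit lim µ→0+ x(µ) = x∗ stated above.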
The parameterized set {(x(µ)T, y(µ)T, s(µ)T)T | µ ≥ 0} is called the central path.
Proposed algorithms vary in how to trace the central path to reach an optimal solution.
Dantzig’s story
G. B. Dantzig, the inventor of the Simplex method, was a graduate student at UC Berkeley. Arriving late to a class of Prof. Jerzy Neyman, he saw two problems written on the blackboard, which he took to be homework. He solved them and turned them in at the next class. A surprised Prof. Neyman rushed to Dantzig’s place on the Saturday. The problems Dantzig had mistaken for homework were open problems of statistics.
A similar story concerns the mathematician Delbert Fulkerson, who had failed to prove the so-called Perfect Graph Conjecture despite some years’ endeavor. One day a colleague phoned him that the conjecture had been established very recently by a young mathematician named László Lovász. Just that. But after hanging up the phone, he soon found his own proof of the conjecture.
Nonlinear Programs
Sung-Pil Hong
Management Science/Optimization Lab IE department
Seoul National University
4th June 2018
Optimization
Example 1.1
We connect five pieces of metal wire into a chain with free joints and hang it on a wall so that the two ends are positioned at two points of the same height. What shape will the chain form?
An optimization problem
Decision variables : the coordinates of the six nodes, (x1, y1), . . . , (x6, y6).
Constraints : the positions of the two ends; the distance between two consecutive nodes equals the length of the corresponding piece.
Objective : minimization of the potential energy of the chain.
Input data : the lengths of the 5 wire pieces, 26.5, 50.5, 34, 19.5, 41.5 cm; the horizontal distance between the two end points, 120 cm.
Optimization model
min 26.5 × (y1+y2)/2 + 50.5 × (y2+y3)/2 + 34 × (y3+y4)/2 + 19.5 × (y4+y5)/2 + 41.5 × (y5+y6)/2
sub. to (x1 − x2)² + (y1 − y2)² ≤ 26.5²
(x2 − x3)² + (y2 − y3)² ≤ 50.5²
(x3 − x4)² + (y3 − y4)² ≤ 34²
(x4 − x5)² + (y4 − y5)² ≤ 19.5²
(x5 − x6)² + (y5 − y6)² ≤ 41.5²
(x1, y1) = (0, 0)
(x6, y6) = (120, 0)
(1.1)
Using ‘≤’ instead of ‘=’ in the distance constraints is a convexification.
Optimal solution.
Optimization
max (sup) or min (inf) f (x)
s.t. gi(x) ≤ or ≥ 0, i = 1, . . . , m,
hj(x) = 0, j = 1, . . . , p.
(1.2)
Feasible solution; Optimal solution.
Local optima: optimal in a neighborhood.
max f (x) = x1x2
sub. to g1(x) = x1 + 2x2 − 10 ≤ 0, g2(x) = x1² − x2 ≤ 0
x ≥ 0
(1.3)
Gradient vectors
Definition 2.1
The gradient of a function f at x = ¯x is defined as
∇f (¯x) = [∂f/∂x1(¯x), . . . , ∂f/∂xn(¯x)]T. (2.4)
The gradient ∇f (x) = [1, 1]T of the linear functional f (x1, x2) = x1 + x2 is the direction in which f increases fastest. It is normal to the contours of f.
In general, the gradient ∇f (¯x) of a real-valued function f (x) at x = ¯x is the same as the gradient of the linear function whose graph is the plane tangent to the graph of f (x) at (¯x, f (¯x)). The instantaneous rate of increase of the function at x = ¯x is largest in the direction of ∇f (¯x) and is equal to ‖∇f (¯x)‖2.
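The “fastest increase” claim can be illustrated numerically: among unit directions y, the directional derivative ∇f (¯x)Ty is maximized at y = ∇f (¯x)/‖∇f (¯x)‖, with maximum ‖∇f (¯x)‖2. A sketch of my own for f (x) = x1² + 2x2²:

```python
import math

def grad_f(x):
    """Gradient of f(x) = x1^2 + 2*x2^2."""
    return (2 * x[0], 4 * x[1])

x_bar = (1.0, 1.0)
g = grad_f(x_bar)            # (2, 4)
norm_g = math.hypot(*g)      # ||grad f||_2 = sqrt(20)

# Sample unit directions (cos t, sin t) and record the largest
# directional derivative grad_f(x_bar) . y over the sample.
best = max(g[0] * math.cos(th) + g[1] * math.sin(th)
           for th in (2 * math.pi * k / 3600 for k in range(3600)))
# best is (up to the grid resolution) the norm of the gradient.
```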
Level sets of f (x) = x1 + 2x2 and f (x) = x1² + 2x2².
Suppose the inner product of y and ∇f (¯x) is negative. If f is linear, it decreases at a constant rate along the ray from ¯x in the direction y. A nonlinear function does not necessarily decrease at a constant rate, but there is an open interval of step sizes on which it decreases:
Descent and ascent directions
Definition 3.1
A vector y is said to be a descent direction from ¯x if ∃ ¯λ > 0 : f (¯x + λy) < f (¯x) ∀ 0 < λ < ¯λ.
In the figure, we can see that every direction from ¯x having a negative inner product with ∇f (¯x) is a descent direction.
A little bit of calculus - Chain rule
The derivative Df (¯x) of a function f : R3 → R2, x = [x1, x2, x3]T ↦ f (x) = [f1(x), f2(x)]T, at x = ¯x is defined as

Df (¯x) = [ ∂f1/∂x1(¯x)  ∂f1/∂x2(¯x)  ∂f1/∂x3(¯x) ]
          [ ∂f2/∂x1(¯x)  ∂f2/∂x2(¯x)  ∂f2/∂x3(¯x) ],  (4.5)

a linear transformation R3 → R2.
The derivative of linear functional cTx is cT. The derivative of a general linear function f (x) = Ax is A.
In general, Df (¯x) gives the linear approximation of f around x = ¯x whose error decreases faster than the distance from ¯x: ‖f (¯x + y) − f (¯x) − Df (¯x)y‖2 = o(‖y‖2).
Hessian: The derivative D(∇f )(¯x) of the map x ↦ ∇f (x) at x = ¯x is called the Hessian of f at x = ¯x:

∇2f (¯x) = [ ∂2f/∂x1∂x1(¯x)  · · ·  ∂2f/∂xn∂x1(¯x) ]
           [      ...          ...        ...      ]
           [ ∂2f/∂x1∂xn(¯x)  · · ·  ∂2f/∂xn∂xn(¯x) ].  (4.6)
Proposition 4.1
If h : Rn → Rm and g : Rm → Rp are differentiable, their composition f := g ◦ h : Rn → Rp, x ↦ g(h(x)), is also differentiable and
Df (x) = D(g ◦ h)(x) = Dg(h(x))Dh(x).
For example, let f : R3 → R, x = [x1, x2, x3]T, so Df (x) = [∂f/∂x1, ∂f/∂x2, ∂f/∂x3] ∈ R1×3, and let g : R → R3, t ↦ [g1(t), g2(t), g3(t)]T, so Dg(t) = [g1′(t), g2′(t), g3′(t)]T ∈ R3×1. Put h := f ◦ g, t ↦ (f ◦ g)(t) = f (g1(t), g2(t), g3(t)). Then

h′(t) = Df (g(t))Dg(t)
      = [∂f/∂x1(g(t)), ∂f/∂x2(g(t)), ∂f/∂x3(g(t))] [g1′(t), g2′(t), g3′(t)]T
      = ∇f (g(t))TDg(t).  (4.7)
In the case g(t) = x + ty (x, y ∈ R3), Dg(t) = y and h′(t) = ∇f (x + ty)Ty. We call h′(0) = ∇f (x)Ty the directional derivative of f at x in the direction y.
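A finite-difference sanity check of h′(0) = ∇f (x)Ty, written by me with f (x) = x1² + 2x2² as a concrete instance:

```python
def f(x):
    return x[0]**2 + 2 * x[1]**2

def grad_f(x):
    return (2 * x[0], 4 * x[1])

x, y = (1.0, 2.0), (3.0, -1.0)

# h(t) = f(x + t*y); a central difference approximates h'(0).
h = lambda s: f((x[0] + s * y[0], x[1] + s * y[1]))
t = 1e-6
numeric = (h(t) - h(-t)) / (2 * t)

# The chain-rule value: grad_f(x) . y = 2*3 + 8*(-1) = -2.
analytic = grad_f(x)[0] * y[0] + grad_f(x)[1] * y[1]
```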
Chain rule extends to any finite number of functions:
D(f ◦ g ◦ h)(x) = Df (g(h(x)))Dg(h(x))Dh(x).
Since h′(t) = ∇f (x + ty)Ty is the composition of the three maps

t ↦ x + ty =: z ↦ ∇f (z) =: w ↦ yTw,

the chain rule implies

h′′(t) = yT∇2f (x + ty)y.
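For a quadratic such as f (x) = x1² + 2x2², h′′(t) = yT∇2f y is a constant (here ∇2f = diag(2, 4)), which a second-difference check confirms; again a verification of my own:

```python
def f(x):
    return x[0]**2 + 2 * x[1]**2

x, y = (1.0, 2.0), (3.0, -1.0)
h = lambda s: f((x[0] + s * y[0], x[1] + s * y[1]))

# Second central difference approximates h''(0).
t = 1e-4
numeric = (h(t) - 2 * h(0) + h(-t)) / t**2

# y' Hess(f) y with Hess(f) = diag(2, 4): 2*3^2 + 4*(-1)^2 = 22.
analytic = 2 * y[0]**2 + 4 * y[1]**2
```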
Descent direction
Proposition 4.2
Every y such that ∇f (¯x)Ty < 0 is a descent direction.
Proof: We take it for granted for a function of a single variable. For a function f of x ∈ Rn, we consider g(λ) := f (¯x + λy), a function of λ ∈ R. Then by the chain rule,

g′(0) = ∇f (¯x)Ty < 0. (4.8)

By the single-variable case, there is ¯λ > 0 such that

∀ 0 < λ ≤ ¯λ, f (¯x + λy) < f (¯x).
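Proposition 4.2 can be illustrated numerically with f (x) = (x1 − 2x2)² at ¯x = (1, 1): there ∇f (¯x) = (−2, 4), so y = (2, −4) has ∇f (¯x)Ty = −20 < 0 and should be a descent direction. A check of my own:

```python
def f(x):
    return (x[0] - 2 * x[1])**2

def grad_f(x):
    d = x[0] - 2 * x[1]
    return (2 * d, -4 * d)    # gradient of (x1 - 2x2)^2

x_bar = (1.0, 1.0)
g = grad_f(x_bar)             # (-2, 4)
y = (2.0, -4.0)
inner = g[0] * y[0] + g[1] * y[1]   # -20 < 0: y is a descent direction

# f decreases along x_bar + lam*y for small lam > 0.
f0 = f(x_bar)                 # 1
values = [f((x_bar[0] + lam * y[0], x_bar[1] + lam * y[1]))
          for lam in (0.01, 0.05, 0.1)]
```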
Exercise 4.3
(1) Define an ascent direction. Restate the proposition in ascent direction.
(2) Sketch the ascent directions of f (x) = (x1− 2x2)2 at x = (1, 1).
Exercise 4.4
Compute the descent directions of the objective function from x0.
max (3/4)x1² + x2² sub.to 2x1 −x2 ≥ 2,
2x1 +x2 ≥ 2, x1 +4x2 ≤ 19,
x1 ≤ 7,
x1 +5x2 ≥ −8.
Feasible direction
Definition 5.1
If we can move from x ∈ F in the direction y for a positive distance while maintaining feasibility, i.e., ∃ ¯λ > 0 such that x + λy ∈ F ∀ 0 < λ < ¯λ, then y is called a feasible direction at x.
If ¯x satisfies a constraint g(x) ≤ 0, where g is differentiable, with equality, then any y such that ∇g(¯x)Ty < 0 is a feasible direction at ¯x.
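A small numeric illustration of this last claim, with a constraint of my own choosing: for g(x) = x1² + x2² − 1 ≤ 0 active at ¯x = (1, 0), we have ∇g(¯x) = (2, 0), and a direction y with ∇g(¯x)Ty < 0 keeps the constraint satisfied for small steps:

```python
def g(x):
    return x[0]**2 + x[1]**2 - 1   # feasible set: the unit disk

x_bar = (1.0, 0.0)                  # g is active: g(x_bar) = 0
grad_g = (2.0, 0.0)                 # gradient of g at x_bar

y = (-1.0, 0.5)                     # grad_g . y = -2 < 0
inner = grad_g[0] * y[0] + grad_g[1] * y[1]

# x_bar + lam*y stays inside the disk for these step sizes.
feasible = [g((x_bar[0] + lam * y[0], x_bar[1] + lam * y[1])) <= 0
            for lam in (0.01, 0.1, 0.5)]
```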