Cath -
The reduced gradient for a nonlinear optimization problem is similar to
the reduced cost for a linear problem. At the optimal solution of a linear
problem, for a variable not in the solution, i.e., for a changing cell equal
to zero, the reduced cost measures how much the target cell would change if
that currently-zero changing cell were forced to be one instead of zero. For
a nonlinear problem, the reduced gradient provides a similar measure at the
stopping point of the algorithm. More technically, the reduced gradient is
the partial derivative of the objective function with respect to a decision
variable, evaluated at the stopping point of the algorithm.
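
If it helps to see that definition in code, here's a rough sketch using
Python's scipy (the toy objective and the bound on x1 are my own invented
example, not anything Solver-specific):

import numpy as np
from scipy.optimize import minimize

# Toy objective: f(x) = (x0 - 1)^2 + (x1 + 2)^2
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def grad(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])

# Bound x1 >= 0. The unconstrained minimum would put x1 at -2,
# so the solver stops with x1 pinned at its lower bound of 0.
res = minimize(f, x0=[0.0, 1.0], jac=grad, method="L-BFGS-B",
               bounds=[(None, None), (0.0, None)])

# The reduced gradient for x1 is the partial derivative df/dx1
# at the stopping point: 2*(0 + 2) = 4. Forcing x1 from 0 up
# toward 1 would increase the objective at roughly that rate.
print(res.x)           # approximately [1.0, 0.0]
print(grad(res.x)[1])  # approximately 4.0
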
The Lagrange multiplier for a nonlinear optimization problem is similar to
the shadow price for a linear problem. At the optimal solution of a linear
problem, the shadow price measures how much the target cell would change if
the right-hand side of a constraint were increased by one unit, i.e., it's a
kind of marginal value. For a nonlinear problem, at the stopping point of
the algorithm, the Lagrange multiplier provides a similar measure.
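
The marginal-value interpretation is easy to see by re-solving with the
right-hand side relaxed a little. Again a rough sketch in Python's scipy
(a toy problem of my own; the sign convention of the reported multiplier
may differ from your textbook's):

import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy problem: minimize (x0 - 2)^2 + (x1 - 2)^2
# subject to x0 + x1 <= rhs (binding at the solution).
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

def solve(rhs):
    con = NonlinearConstraint(lambda x: x[0] + x[1], -np.inf, rhs)
    return minimize(f, x0=[0.0, 0.0], method="trust-constr",
                    constraints=[con])

base = solve(1.0)        # optimum at x = (0.5, 0.5), f = 4.5
eps = 1e-4
bumped = solve(1.0 + eps)

# Relaxing the right-hand side by one (small) unit lowers the
# objective at a rate equal to the Lagrange multiplier, 3 here.
print((base.fun - bumped.fun) / eps)  # approximately 3.0
print(base.v)  # multipliers reported by the solver
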
- Mike Middleton
http://www.DecisionToolworks.com
Decision Analysis Add-ins for Excel