The use of min and max in a model makes some derivatives discontinuous; the model type DNLP must then be used, and solvers can get stuck at the points where the derivatives are discontinuous. How can one find a smooth approximation for max(x,0) and min(x,0)?
Here is the answer from Prof. Ignacio Grossmann (Carnegie Mellon University):
Use the approximation

f(x) := ( sqrt( sqr(x) + sqr(epsilon) ) + x ) / 2

for max(x,0), where sqrt is the square root, sqr is the square, and epsilon is a small positive constant.
The error err(x) = abs(f(x) - max(x,0)) of this approximation is maximized at x = 0 (the point of nondifferentiability), where err(0) = epsilon/2. As x goes to +/- infinity, err(x) goes to 0. One can shift the function so that the error at 0 becomes 0 but approaches epsilon/2 as x goes to +/- infinity:
g(x) := ( sqrt( sqr(x) + sqr(epsilon) ) + x - epsilon ) / 2
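
As a quick numerical check of these error properties, the following GAMS sketch (the identifiers i, xval, f, g, errf, and errg are illustrative, not part of the original answer) evaluates both approximations on a few sample points and compares them against the exact max(x,0):

* Sketch: error of the smooth approximations f and g versus exact max(x,0).
* All identifiers here are illustrative.
set i 'sample points' / p1*p9 /;
scalar epsilon 'smoothing parameter' / 1e-2 /;
parameters xval(i), f(i), g(i), errf(i), errg(i);

* sample x from -2 to 2 in steps of 0.5
xval(i) = -2 + 0.5*(ord(i) - 1);

f(i) = ( sqrt( sqr(xval(i)) + sqr(epsilon) ) + xval(i) ) / 2;
g(i) = ( sqrt( sqr(xval(i)) + sqr(epsilon) ) + xval(i) - epsilon ) / 2;

errf(i) = abs( f(i) - max(xval(i), 0) );
errg(i) = abs( g(i) - max(xval(i), 0) );

* errf peaks at epsilon/2 at x = 0 and falls toward 0 for large |x|;
* errg is 0 at x = 0 and approaches epsilon/2 as |x| grows.
display xval, f, g, errf, errg;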
Because min(x,0) = -max(-x,0), the above approximations can be used for min(x,0) as well.
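
In a model, the approximation can be written directly into an equation so that the problem solves as an NLP rather than a DNLP. Here is a minimal sketch, assuming a hypothetical toy model; the names demo, y, z, ydef, and objdef are illustrative and not from the original answer:

* Sketch: replacing max(x,0) in an objective with the smooth approximation,
* so the model can be declared and solved as NLP instead of DNLP.
scalar epsilon 'smoothing parameter' / 1e-4 /;

variables x, y 'smooth stand-in for max(x,0)', z 'objective';

equations ydef, objdef;

ydef..   y =e= ( sqrt( sqr(x) + sqr(epsilon) ) + x ) / 2;

* the analogous smooth min(x,0) = -max(-x,0) would be
* ( x - sqrt( sqr(x) + sqr(epsilon) ) ) / 2

objdef.. z =e= y + sqr(x + 1);

model demo / all /;
solve demo using nlp minimizing z;
display x.l, y.l;

Since sqr(x) + sqr(epsilon) is strictly positive for epsilon > 0, the square root is differentiable everywhere, which is exactly what lets the model pass as an NLP.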