# GAMS Support Wiki

## Smooth approximations for MAX(X,0) and MIN(X,0)

Using `min` and `max` in a model makes some derivatives discontinuous, so the model type `DNLP` must be used, and solvers often get stuck at the points where the derivatives are discontinuous. How can one find a smooth approximation for max(x,0) and min(x,0)?

Here is the answer from Prof. Ignacio Grossmann (Carnegie Mellon University): use the approximation

    f(x) := ( sqrt( sqr(x) + sqr(epsilon) ) + x ) / 2

for `max(x,0)`, where `sqrt` is the square root and `sqr` is the square.

The error `err(x) = abs(f(x) - max(x,0))` of this approximation is maximized at 0 (the point of non-differentiability), where `err(0) = epsilon/2`. As x goes to +/- infinity, `err(x)` goes to 0. One can shift the function so that the error at 0 becomes 0 but instead approaches epsilon/2 as x goes to +/- infinity:

    g(x) := ( sqrt( sqr(x) + sqr(epsilon) ) + x - epsilon ) / 2

Because min(x,0) = -max(-x,0), the same approximations can be used for min(x,0). Epsilon is a small positive constant.
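The error properties above are easy to verify numerically. The following is a small Python sketch (the function names are my own, not part of GAMS) that implements f, the shifted variant g, and the min(x,0) identity:

```python
import math

def smooth_max0(x, eps=1e-4):
    # f(x) = (sqrt(sqr(x) + sqr(eps)) + x) / 2, smooth stand-in for max(x, 0)
    return (math.sqrt(x * x + eps * eps) + x) / 2

def smooth_max0_shifted(x, eps=1e-4):
    # g(x) = (sqrt(sqr(x) + sqr(eps)) + x - eps) / 2:
    # zero error at x = 0, error approaching eps/2 as x -> +/- infinity
    return (math.sqrt(x * x + eps * eps) + x - eps) / 2

def smooth_min0(x, eps=1e-4):
    # min(x, 0) = -max(-x, 0), so reuse the smooth max approximation
    return -smooth_max0(-x, eps)

if __name__ == "__main__":
    eps = 1e-4
    # f has its maximum error eps/2 at x = 0; g has zero error there
    print(smooth_max0(0.0, eps))          # close to eps/2 = 5e-05
    print(smooth_max0_shifted(0.0, eps))  # close to 0
    # far from 0 the error of f vanishes
    print(smooth_max0(10.0, eps) - 10.0)  # tiny positive number
```

Note that epsilon trades smoothness against accuracy: a larger epsilon gives better-conditioned derivatives near 0 but a larger approximation error there.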