Reliability of model if changing Rtmaxv

Q: We use CONOPT3 to solve our model. Although we tried to scale it properly, we failed, always getting the following message:

    
 ** Infeasible solution. A variable has reached 'infinity'.
 Largest legal value (Rtmaxv) is 3.16E+07 

Setting Rtmaxv = 5.e+08 in the option file resulted in a feasible solution, together with a warning that the variance of the derivatives in the initial point is large (= 6.1) and should be reduced.
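For reference, this is roughly how such a solver option is passed to CONOPT3 from GAMS. A minimal sketch: the model name m, the objective variable z, and the option file name conopt3.opt (named after the selected solver) are placeholder assumptions, not taken from the question:

  * conopt3.opt (the solver option file) would contain the single line:
  *    rtmaxv = 5.e8
  
  * In the GAMS model, select CONOPT3 and tell it to read the option file:
  option nlp = conopt3;
  m.optfile = 1;
  solve m using nlp minimizing z;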

Since we consider our model not to be small, and it now works successfully in our Java framework, we are happy with the current solution. However, we read in the GAMS manual that increasing Rtmaxv reduces reliability (apart from increasing solution time). Our question is therefore: are the results still reliable after changing Rtmaxv to 5.e+08?

It is useful to remember that the three most common causes of NLP failures are:

  • bad starting points
  • poor bounds
  • poor scaling

(These are somewhat related: setting a positive lower bound moves the default initial value away from zero. See the sketch below.)
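In GAMS terms, the first two items amount to supplying bounds and level values before the solve. A hedged sketch with a hypothetical variable x(i); the numbers are placeholders for actual problem knowledge:

  * Replace the defaults (level 0, infinite bounds) with problem knowledge:
  x.lo(i) = 0.1;
  x.up(i) = 1.e4;
  * A starting point near the expected solution helps the solver most:
  x.l(i) = 100;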

A note from Arne Drud: To my mind, there are two parts to your question: how to do the scaling so that you avoid the 'infinity' message, and how to think about reliability when you change 'infinity'. Let me try to answer both:

Scaling: You mention that you have tried to scale the model and still get the message about a variable reaching the internal 'infinity'. The variable that actually reaches this value should be marked, and you can use this information to scale this particular variable or block of variables. Note that the bound on large variables also applies to the objective function and intermediate terms in the objective – this seems to be the case for your model.
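In GAMS, scaling an individual variable or equation is done through the .scale attribute once user scaling is switched on for the model. A sketch under assumed names (model m, variable x(i), objective variable z):

  * Enable user-supplied scaling for model m:
  m.scaleopt = 1;
  * x is expected to be of order 1.e6, so divide it internally by 1.e6:
  x.scale(i) = 1.e6;
  * The objective variable (and hence the objective row) can be scaled too:
  z.scale = 1.e7;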

Reliability: CONOPT uses numerical procedures to solve the model and the solution procedure stops when certain stopping criteria are satisfied. The stopping criteria are based on many tolerances, the main ones defining feasibility and optimality. The numerical values of the tolerances are based on the assumption that the model is 'well scaled' which more or less means that the terms involved in the computation of feasibility and optimality all are around 1 and round-off errors therefore are around the relative machine precision.

When a variable, x1, is very large, we cannot satisfy a constraint like x1 = f(x2) as accurately as when x1 is around 1, simply because the value of the last bit in computing f(x2) is very different. Similarly, when deciding whether the reduced cost of a variable is small enough: a reduced cost of 0.001 on a variable around 1 is clearly not good, but what about a reduced cost of 0.001 on a variable around 1.e10? The large variable is probably optimal.
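To put numbers on the first point: double precision arithmetic has a relative precision of about 1.1e-16, so a term of size 1.e10 carries an unavoidable absolute round-off of roughly 1.e10 * 1.1e-16 ≈ 1.e-6. A feasibility tolerance of, say, 1.e-8 (the exact value depends on the solver settings) can then never be met in absolute terms by a constraint involving such a term, while for terms around 1 the same tolerance is easily within reach.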

CONOPT3 will try to scale your model and apply sensible tolerances to the scaled model. However, the scaling factors derived from a mixture of very small and very large values may mean that the stopping criteria are satisfied earlier than they would be for a well scaled model, and the result you get is not as feasible or as optimal as you expect. And the distance between an almost optimal and a completely optimal solution can sometimes be fairly large, especially for poorly scaled models. In most cases the scaling will be sensible and the solution will be what you expect, but scaling is tricky, and your definition of a sensible scaling may not be the same as mine.

If you find the above too technical, then just think about it as a legal warning: If you do not give me a nice model (using my definition of nice) then I cannot promise to give you a nice solution (again using my own definition). But I will try.
