====== Feasible solution. The tolerances are minimal and there is no change in objective although the reduced gradient is greater than the tolerance. ======

Q: //While optimizing a non-linear minimization problem with CONOPT, I occasionally (i.e., under conditions that are not clear to me) see the objective value increase again before the solver exits with//

<code>
** Feasible solution. The tolerances are minimal and
there is no change in objective although the reduced
gradient is greater than the tolerance.
</code>

//Why does this "worsening" of the objective happen? Why does the solver stop even though better solutions were already found? And in particular: is there a way to prevent the model from behaving like this?//


Submitted by Arne Drud: The problem really comes down to what a feasible solution means. CONOPT is a feasible-path method, which means it attempts to proceed through a sequence of feasible points. Making a point feasible takes work (constraints are checked and variables adjusted iteratively), and this work becomes more expensive the more accurate the solution needs to be. So initially CONOPT uses a loose tolerance that allows it to find almost-feasible solutions cheaply, and as it gets closer to the optimum the tolerances are reduced and the solutions become more and more accurate.
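As a rough sketch of why tighter feasibility costs more work (plain Python, not CONOPT's actual algorithm; the constraint g(x) = x^2 - 2 = 0, the starting point, and Newton's method are all illustrative assumptions):

```python
def newton_feasibility(g, dg, x0, tol):
    """Drive |g(x)| below tol with Newton steps; return (x, iterations)."""
    x, iters = x0, 0
    while abs(g(x)) > tol:
        x -= g(x) / dg(x)   # one Newton correction toward feasibility
        iters += 1
    return x, iters

g = lambda x: x * x - 2.0   # constraint residual
dg = lambda x: 2.0 * x      # its derivative

# tightening the feasibility tolerance raises the iteration count
counts = [newton_feasibility(g, dg, 4.0, tol)[1]
          for tol in (1e-2, 1e-6, 1e-10)]
```

Each extra digit of feasibility accuracy costs additional iterations, which is why starting loose and tightening gradually is cheaper overall.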


The fact that feasibility only holds within some tolerance also means that the objective is only accurate to within some tolerance. The numbers that are printed are therefore approximations, but CONOPT tries to keep the tolerance much smaller than the average change in objective during an iteration, so the errors should be harmless.


Usually this gradual reduction of tolerances works fine and you do not notice anything. Sometimes, however, the change in objective becomes very small quickly and the tolerance on the objective is then too large. CONOPT will reduce the tolerance, and you may see that the new, more accurate solution has a poorer objective value. (You may also sometimes see that CONOPT has trouble maintaining feasibility: the new, much smaller tolerances are difficult to satisfy.)
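A toy illustration of this effect (plain Python; the model min x^2 subject to x >= 1 and the stop-at-the-tolerance-boundary behaviour are assumptions made for the sketch, not CONOPT's actual logic):

```python
def approx_solve(feas_tol):
    # minimize x**2 subject to x >= 1; a tolerance-based solver may
    # stop at x = 1 - feas_tol, which counts as feasible within feas_tol
    x = 1.0 - feas_tol
    return x * x

obj_loose = approx_solve(1e-3)   # almost feasible, objective looks better
obj_tight = approx_solve(1e-8)   # more accurate, objective looks worse
```

The tight solution is the better answer, yet its printed objective is larger; this is exactly the "worsening" observed in the question.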


CONOPT should not get stuck in the more accurate point; it should continue improving the (more accurate) objective until the reduced gradient becomes sufficiently small. If this does not happen, as in your case, there are several possible causes:

- The model has discontinuous derivatives, for example from a MAX, MIN, or ABS function. The gradients used to decide in which direction to move are unreliable around the discontinuity, and CONOPT gets stuck. There is nothing CONOPT can do about this; you can change the model by introducing some kind of smooth approximation.

- The model has problems with scaling. If a constraint has very large terms, for example larger than 1.e6, it can be difficult or impossible to satisfy the constraint to within 1.e-10: that would require a relative accuracy of 1.e-16, beyond the computational accuracy of most computers. However, even with a constraint accuracy of 1.e-10, a dual variable of, say, 1000 means the objective is only accurate to about 1.e-7, and that may not be enough.
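For the discontinuous-derivative case, a common modeling trick is to replace ABS(x) with sqrt(x^2 + eps^2). A minimal sketch in Python (the eps value is an assumption chosen for illustration; in a GAMS model the same idea would be written with sqrt and sqr):

```python
import math

def smooth_abs(x, eps=1e-4):
    # smooth stand-in for |x|: differentiable everywhere,
    # and never more than eps away from the true |x|
    return math.sqrt(x * x + eps * eps)

def smooth_abs_grad(x, eps=1e-4):
    # the derivative passes continuously through 0 at x = 0,
    # unlike |x|, whose gradient jumps from -1 to +1 there
    return x / math.sqrt(x * x + eps * eps)
```

A smaller eps gives a closer approximation but a more sharply curved function near zero, so eps trades model fidelity against how well the solver's gradients behave.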

solver/feasible_solution._the_tolerances_are_minimal_and_there_is_no_change_in_objective.txt · Last modified: 2007/09/21 14:31 (external edit)