Q: While optimizing a non-linear minimization problem with CONOPT, I occasionally (under conditions that are not clear to me) run into the problem that the objective value appears to increase again before the solver exits with
** Feasible solution. The tolerances are minimal and there is no change in objective although the reduced gradient is greater than the tolerance.
Why does this “worsening” of the objective happen? And why does the solver stop even though better solutions were already found? In particular: is there a way to prevent the model from behaving like that?
Submitted by Arne Drud: The problem is really related to what a feasible solution means. CONOPT is a feasible-path method, which means that it attempts to proceed through a sequence of feasible points. Making a point feasible requires some work (checking constraints and adjusting variables iteratively), and this becomes more expensive the more accurate the solution has to be. So initially CONOPT uses a loose tolerance that allows it to find almost-feasible solutions cheaply, and as it gets closer to the optimum the tolerances are reduced and the solutions become more and more accurate.
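The tolerances involved can be adjusted through a CONOPT option file. A minimal sketch; the option names Rtnwmi, Rtnwma and Rtredg are taken from the CONOPT documentation, and the values shown are illustrative, so please verify both against the documentation for your CONOPT version:

```gams
* conopt.opt -- CONOPT option file (sketch; verify names and defaults
* against the documentation for your CONOPT version)
* minimum (final) feasibility tolerance
Rtnwmi 1.e-9
* maximum (initial) feasibility tolerance
Rtnwma 1.e-6
* optimality tolerance on the reduced gradient
Rtredg 1.e-8
```

In the GAMS model you then set `mymodel.optfile = 1;` before the solve statement so the option file is read.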
The fact that feasibility is only within some tolerance also means that the objective is only accurate to within some tolerance. The numbers that are printed are therefore approximations, but CONOPT tries to keep the tolerance much smaller than the average change in objective during an iteration, so these errors should normally not cause problems.
Usually this gradual reduction of tolerances works fine and you do not notice anything. However, sometimes the change in objective becomes very small quickly while the tolerance on the objective is still too large. CONOPT will then reduce the tolerance, and you may see that the new, more accurate solution has a poorer objective value. (You may also sometimes see that CONOPT has problems maintaining feasibility: the new, much smaller tolerances are difficult to satisfy.)
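This effect can be reproduced with a toy problem (a hand-written Python sketch, not CONOPT output): minimize x² subject to x = 1. At the optimum the Lagrange multiplier is 2, so a point that violates the constraint by eps reports an objective roughly 2*eps below the true value; tightening the tolerance then makes the printed objective "worsen" toward the correct optimum:

```python
# Toy illustration (not CONOPT itself): minimize f(x) = x**2 subject to x = 1.
# True optimum: x* = 1, f* = 1; the Lagrange multiplier is lambda = 2,
# so a feasibility error of eps shifts the objective by about lambda * eps.
def f(x):
    return x * x

for eps in (1e-2, 1e-4, 1e-6):    # progressively tighter feasibility tolerance
    x = 1.0 - eps                 # "feasible" point violating x = 1 by eps
    print(f"eps = {eps:g}:  f = {f(x):.8f}  (true optimum 1.00000000)")
```

As eps shrinks, the reported objective climbs toward 1.0, which is exactly the "worsening" described above for a minimization problem.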
CONOPT should not get stuck in the more accurate point but it should continue improving the (more accurate) objective until the reduced gradient becomes sufficiently small. If this does not happen, as in your case, there can be different causes: