Starting with Gurobi 6.0.2 (GAMS 24.4.2), GAMS/Gurobi can utilize the distributed parallel algorithms offered by Gurobi.
These distributed algorithms are designed to be nearly transparent to the user. The user simply modifies a few parameters, and the work of distributing the computation among multiple machines is handled behind the scenes by the Gurobi library.
Before your program can perform a distributed optimization task, you'll need to identify a set of machines to use as your distributed workers. Ideally these machines should give very similar performance. Identical performance is best, especially for distributed tuning, but small variations in performance won't hurt your overall results too much. Once you've identified your distributed worker machines, you'll need to start Gurobi Remote Services on each of them. The setup requires some administrative knowledge and is subject to frequent change, so we refer to the Gurobi web site: http://www.gurobi.com/documentation/, see "Remote Services". You will need some extra software to run Gurobi Remote Services; contact support.gams.com to get access to this software.
Once the server side is set up, the GAMS/Gurobi client needs to know how to reach your workers. You'll use the WorkerPool parameter to tell GAMS/Gurobi how to access the pool of workers. To use the distributed MIP solver you specify the option DistributedMIPJobs, for the distributed concurrent solver you use the option ConcurrentJobs, and for distributed tuning you use the option TuneJobs. Details can be found in the GAMS/Gurobi solver manual.
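The following is a minimal sketch of how these parameters might be passed through a Gurobi option file from within a GAMS model. The worker addresses match the log below; the model name m, the objective variable z, and the surrounding model are placeholders for illustration, not part of the original example:

* write a Gurobi option file with the distributed settings
$onecho > gurobi.opt
workerpool 192.168.178.86,192.168.178.32,192.168.178.37
distributedmipjobs 3
$offecho

* instruct GAMS/Gurobi to read the option file and solve
model m /all/;
m.optfile = 1;
option mip = gurobi;
solve m using mip minimizing z;

For the distributed concurrent solver or for distributed tuning, replace the distributedmipjobs line with concurrentjobs or tunejobs, respectively.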
Here is a log of a successful distributed MIP run with three workers:
Gurobi 24.4.2 r51223 Released Mar 14, 2015 VS8 x86 32bit/MS Windows

Gurobi full + distributed license.
Gurobi library version 6.0.2
Reading parameter(s) from "C:\tmp\gurobi.opt"
>>  workerpool 192.168.178.86,192.168.178.32,192.168.178.37
>>  distributedmipjobs 3
Finished reading from "C:\tmp\gurobi.opt"
Starting Gurobi...
Optimize a model with 126 rows, 127 columns and 465 nonzeros
Coefficient statistics:
  Matrix range     [1e+00, 2e+01]
  Objective range  [1e+00, 1e+00]
  Bounds range     [1e+00, 2e+01]
  RHS range        [1e+00, 2e+01]
Started distributed worker on 192.168.178.86
Started distributed worker on 192.168.178.32
Started distributed worker on 192.168.178.37
Distributed MIP job count: 3
Job count limited by machine availability

    Nodes    |   Utilization    |       Objective Bounds        |     Work
 Expl Unexpl | Active Sync Comm | Incumbent    BestBd      Gap  | It/Node Time

*    0               -            -0.0000000     -         -         -    21s
     0     0      -     -     -       -          -         -         -    21s
*    0               -            16.0000000     -         -         -    21s
*    0               -            17.0000000     -         -         -    21s
*    0               -            18.0000000     -         -         -    21s
*    0               -            19.0000000     -         -         -    21s
*    0               -            21.0000000     -         -         -    21s

Explored 140 nodes (2495 simplex iterations) in 21.39 seconds
Distributed MIP job count: 3

Time limit reached
Best objective 2.100000000000e+01, best bound 2.600000000000e+01, gap 23.8095%
MIP status(9): Optimization terminated due to time limit.
Solving fixed MIP.
Optimize a model with 126 rows, 127 columns and 465 nonzeros
Coefficient statistics:
  Matrix range     [1e+00, 2e+01]
  Objective range  [1e+00, 1e+00]
  Bounds range     [1e+00, 2e+01]
  RHS range        [1e+00, 2e+01]
Presolve removed 126 rows and 127 columns
Presolve time: 0.00s
Presolve: All rows and columns removed

Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0    2.1000000e+01   0.000000e+00   0.000000e+00      0s

Solved in 0 iterations and 0.01 seconds
Optimal objective  2.100000000e+01
Fixed MIP status(2): Model was solved to optimality (subject to tolerances).

MIP Solution:    21.000000    (2495 iterations, 140 nodes)
Final Solve:     21.000000    (0 iterations)

Best possible:   26.000000
Absolute gap:     5.000000
Relative gap:     0.192308