solver:distributed_mip_with_gams_cplex

Starting with CPLEX 12.6 (GAMS 24.2), GAMS/CPLEX can solve mixed integer problems in a distributed fashion using multiple machines. This feature requires extra CPLEX licenses and additional software; please contact your GAMS distributor to request a quote. Note that only the GAMS solver CplexD provides the distributed MIP facility.

The CPLEX distributed MIP feature is not described in the regular GAMS/CPLEX manual. This document describes the setup steps necessary to enable it, preceded by some background information about CPLEX distributed MIP.

CPLEX supports solving mixed integer programs (MIPs) in parallel in a distributed computing environment. This feature, known as distributed parallel MIP optimization, is a mode of running CPLEX that harnesses the power of multiple computers, or of multiple nodes inside a single computer, to achieve better performance on some MIP problems.

As the most recent CPLEX offering for solving certain difficult mixed integer programs, distributed parallel MIP optimization operates in two phases: first, a ramp-up phase, in which each worker applies different parameter settings to the same problem as the other workers; then, in the remainder of the solve (the second phase), each worker works in one part of a common MIP tree. Each worker communicates what it finds to the (unique) master node, which acts as the conductor or coordinator for the whole process.

Distributed MIP is based on the CPLEX remote object for distributed parallel optimization introduced in CPLEX 12.5.

Distributed parallel mixed integer programming uses a variation of the well-known branch and bound algorithm to solve a MIP in parallel. In contrast to conventional branch and bound implemented on platforms with shared memory, distributed parallel MIP implements a branch and bound algorithm in an environment of distributed memory, possibly across multiple machines. The implementation can use more than a single machine to solve a given MIP, thus making it possible to solve more difficult problems than a single shared-memory machine could solve.

This topic outlines an algorithm that implements a variation of branch and bound suitable for application across multiple machines (or multiple nodes of a single machine) to solve a difficult mixed integer program (MIP) in parallel.

This distributed parallel MIP algorithm runs on a single master associated with multiple workers. The master and the workers can be physical or virtual machines. Indeed, in this context, a virtual machine may simply be a process in the operating system of a machine. Throughout the runtime of this algorithm, the master coordinates the workers, and the workers perform the “heavy lifting” (that is, the actual solving of the MIP).

The algorithm begins by presolving the MIP on the master. After presolving, the algorithm sends the reduced model to each of the workers.

Each of the workers then starts to solve the reduced model. Each worker has its own parameter settings, possibly different from the parameter settings of other workers. Each worker solves the reduced model with its own parameter settings for a limited period of time. This phase is known as ramp up. During ramp up, each worker conducts its own search, according to its own parameter settings. Ramp up stops when the master concludes that at least one of the workers has created a sufficiently large search tree.

At that point, when ramp up stops, the master decides which of the workers performed best. In other words, the master selects a winner. The parameter settings used by the winning worker during ramp up are the basis for the master to determine which parameter settings to use in the ensuing distributed branch and bound search.

The search tree on each of the non-winning workers is deleted. The search tree of the winning worker is distributed over all workers, so that authentic distributed parallel branch and bound starts from this point. In other words, all workers now work on the same search tree, with the master coordinating the search in the distributed tree.
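The two-phase flow described above (independent searches during ramp-up, then a winner-take-all hand-off) can be sketched in a few lines of Python. The worker "searches", the scoring rule, and the threshold below are illustrative assumptions, not CPLEX internals:

```python
# Toy simulation of the ramp-up phase: each worker attacks the same
# (reduced) problem with different parameter settings, and the master
# picks a winner once at least one tree is "large enough". All names
# and the scoring rule are invented for illustration.

def ramp_up(settings_per_worker, grow_tree, tree_size_threshold):
    """Run each worker's search, then return (winner_index, winner_tree)."""
    trees = [grow_tree(s) for s in settings_per_worker]
    # The master selects the worker whose settings produced the largest tree.
    winner = max(range(len(trees)), key=lambda i: len(trees[i]))
    if len(trees[winner]) < tree_size_threshold:
        raise RuntimeError("no worker built a sufficiently large tree")
    # Non-winning trees are deleted; only the winner's tree survives.
    return winner, trees[winner]

# Hypothetical "search": tree size depends on a per-worker setting.
def fake_search(setting):
    return list(range(setting["nodes_explored"]))

settings = [{"nodes_explored": 10}, {"nodes_explored": 40}, {"nodes_explored": 25}]
winner, tree = ramp_up(settings, fake_search, tree_size_threshold=20)
print(winner, len(tree))  # worker 1 built the largest tree
```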

Distributed parallel branch and bound is similar to conventional, shared-memory branch and bound. They differ greatly, however, in their management of the search tree. In a conventional, shared-memory branch and bound, the search tree resides on a single machine, on disk or in shared memory. In contrast, distributed parallel branch and bound literally distributes the search tree across a cluster of machines.

In the CPLEX implementation of distributed parallel branch and bound, the master keeps a number of nodes of the global search tree. If a worker becomes idle, the master sends some of those nodes to that worker. The worker then starts branch and bound on those nodes. However, the worker does not simply solve a node, create some new nodes in doing so, and send them all back to the master. Instead, the worker considers the search tree node received from the master as a new MIP. The worker presolves that MIP and finds an optimal solution for that node using branch and bound. In other words, a worker not only solves a single node; in fact, the worker solves an entire subtree rooted at that node.

While this distributed parallel branch and bound algorithm is running, the master can ask a worker to provide some open nodes (that is, unsolved nodes). The master can then use these nodes to assign work to idle workers. To satisfy such a request from the master, a worker picks a few open nodes from its current tree. Because the current tree in a worker is a subtree of the global tree (indeed, it is the subtree rooted at the node sent to the worker), every node in that subtree is also a node in the global tree.

The distributed parallel branch and bound algorithm stops if all the nodes of the global search tree have been processed or if it reaches a limit set by the user. Such limits include a time limit, a limit on the number of nodes processed, a limit on the number of solutions found, or other similar criteria.

While CPLEX supports multiple transport protocols (e.g., MPI), GAMS/CPLEX uses exclusively the TCP/IP transport protocol.

- You need the additional software package from GAMS to run GAMS/CPLEX distributed MIP.
- You need machines that act as workers. Upload the ZIP file with the additional software to these machines.

On each worker machine, create a directory and unzip the ZIP file into it:

mkdir c:\tmp\gamscplexdistmip
cd c:\tmp\gamscplexdistmip
unzip from\some\where\windows_x86_32_cpxdistmip.zip

Find out the worker's IP address, e.g. via `ipconfig` on Windows-based systems.

Start the worker:

cplex -worker=tcpip -libpath=c:\tmp\gamscplexdistmip -address=myip:myport

where myip is the host name or IP address of the worker and myport is the number of the port on which the worker will listen for incoming connections. (You are free to choose a different port number here.)

That command starts a TCP/IP server that waits for connections from the master and spawns worker processes as requested. The server does not terminate on its own; you must terminate it explicitly, for example by pressing CTRL-C when your optimization completes.

On the master machine, which has a regular GAMS installation, create a cplexd.opt file with the following content, specifying the IP addresses or host names and ports of the workers:

computeserver myip1:myport1 myip2:myport2 ...

The host names and port numbers in the cplexd option file must be the same as those used to start the TCP/IP worker on the corresponding host. Please note that if you specify only a single machine, you get the CPLEX remote object solving sequentially on a remote machine instead of a distributed MIP run.

Run GAMS, solving a mixed integer model with CplexD and option file reading enabled.
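A minimal GAMS snippet for such a run might look as follows (a sketch, assuming a MIP model m with objective variable z is already declared; setting the model attribute `optfile` to 1 makes the solver read cplexd.opt):

```gams
* Solve an existing MIP model with CplexD, reading cplexd.opt.
* "m" and "z" are placeholder names for your model and objective.
option mip = cplexd;
m.optfile = 1;
solve m using mip minimizing z;
```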

The log of such a run will look as follows (note the line `Starting ramp-up.`):

--- Generating MIP model william
--- magic.gms(81) 4 Mb
---   56 rows  46 columns  181 non-zeroes
---   15 discrete-columns
--- Executing CPLEXD: elapsed 0:00:00.019

IBM ILOG CPLEX   24.2.0 ....
Reading parameter(s) from "C:\tmp\cplexd.opt"
>>  computeserver 192.168.178.37:12345 192.168.178.24:54321
Finished reading from "C:\tmp\cplexd.opt"
--- GMO memory 0.51 Mb (peak 0.51 Mb)
--- Dictionary memory 0.00 Mb
--- Cplex 12.6.0.0 link memory 0.00 Mb (peak 0.01 Mb)
--- Starting Cplex...
Tried aggregator 1 time.
MIP Presolve eliminated 0 rows and 1 columns.
MIP Presolve modified 6 coefficients.
Reduced MIP has 55 rows, 45 columns, and 135 nonzeros.
Reduced MIP has 0 binaries, 15 generals, 0 SOSs, and 0 indicators.
Presolve time = 0.02 sec. (0.06 ticks)
Running distributed MIP on 2 solvers.
Setting up 2 distributed solvers.
Setup time = 0.00 sec. (0.00 ticks)
Starting ramp-up.
Found incumbent of value 2942400.000000 after 0.00 sec. (0.01 ticks)
MIP emphasis: balance optimality and feasibility.
MIP search method: dynamic search.
Parallel mode: none, using 1 thread.
Root relaxation solution time = 0.00 sec. (0.09 ticks)

        Nodes                                         Cuts/
   Node  Left     Objective  IInf  Best Integer     Best Bound    ItCnt     Gap

*     0+    0                      2942400.0000    -313500.0000       28  110.65%
Found incumbent of value 2942400.000000 after 0.00 sec. (0.26 ticks)
      0     0   985514.2857     7  2942400.0000    985514.2857       28   66.51%
*     0+    0                       991970.0000    985514.2857       28    0.65%
Found incumbent of value 991970.000000 after 0.00 sec. (0.33 ticks)
*     0     0      integral     0   988540.0000        Cuts: 8       31    0.00%
Found incumbent of value 988540.000000 after 0.00 sec. (0.54 ticks)
      0     0        cutoff         988540.0000    988540.0000       31    0.00%
Elapsed time = 0.00 sec. (0.54 ticks, tree = 0.01 MB, solutions = 3)

Mixed integer rounding cuts applied:  3
Gomory fractional cuts applied:  4

Root node processing (before b&c):
  Real time             =    0.00 sec. (0.54 ticks)
Sequential b&c:
  Real time             =    0.00 sec. (0.00 ticks)
                          ------------
Total (root+branch&cut) =    0.00 sec. (0.54 ticks)
in callback
Ramp-up finished (winner: 1).
Ramp-up time = 0.50 sec. (0.54 ticks)
MIP status(101): integer optimal solution.
Cplex Time: 4.20sec (det. 0.61 ticks)
Fixing integer variables, and solving final LP...
Tried aggregator 1 time.
LP Presolve eliminated 54 rows and 44 columns.
Reduced LP has 1 rows, 2 columns, and 2 nonzeros.
Presolve time = 0.00 sec. (0.02 ticks)
Iteration log . . .
Iteration:     1   Dual objective     =        988540.000000
Fixed MIP status(1): optimal.
Cplex Time: 0.00sec (det. 0.04 ticks)
Proven optimal solution.
MIP Solution:       988540.000000   (30 iterations, 0 nodes)
Final Solve:        988540.000000   (1 iterations)
Best possible:      988540.000000
Absolute gap:            0.000000
Relative gap:            0.000000

There are a few parameters that affect distributed MIP. These parameters are recognized by GAMS/CPLEX but are not listed in the usual parameter documentation.

The following new parameters enable you to customize this ramp-up phase for your model.

- To set the duration of the ramp-up phase, use the ramp-up duration parameter, `RampupDuration`.
- To set a deterministic time limit on the ramp-up phase, use the parameter `RampupDetTimeLimit` (deterministic time spent in ramp-up during distributed parallel optimization).
- To set a wall-clock time limit in seconds on the ramp-up phase, use the parameter `RampupTimeLimit` (time spent in ramp-up during distributed parallel optimization).

During the ramp up phase of distributed parallel optimization, each worker applies different parameter settings to the same problem as the other workers. In other words, there is a competition among the workers to process the greatest number of nodes in parallel in the search tree of the distributed problem. At any given time, each worker is a candidate to be the winner of this competition.

The `RampupDuration` parameter enables you to customize the ramp-up phase for your model. Its value determines how the two timing parameters, `RampupTimeLimit` and `RampupDetTimeLimit`, are interpreted.

When the value of this parameter is -1, CPLEX turns off ramp-up, ignores both `RampupTimeLimit` and `RampupDetTimeLimit`, and directly begins distributed parallel tree search.

When the value of this parameter is 2, CPLEX runs ramp-up with an infinite horizon: it ignores both `RampupTimeLimit` and `RampupDetTimeLimit` and never switches to distributed parallel tree search. This mode is also known as concurrent mixed integer programming (concurrent MIP).

When the value of this parameter is 1 (one), CPLEX considers the values of both `RampupTimeLimit` and `RampupDetTimeLimit`.

- If both ramp-up timing parameters are at their default values (effectively, an infinite amount of time), then CPLEX terminates ramp-up according to internal criteria before switching to distributed parallel tree search.
- If one or both of the ramp-up timing parameters is set to a non-default finite value, CPLEX observes that time limit, executing ramp-up for the given amount of time. If the two limits differ, CPLEX observes the smaller one before terminating ramp-up and switching to distributed parallel tree search.

When the value of this parameter remains at its default, 0 (zero), CPLEX likewise considers the values of both `RampupTimeLimit` and `RampupDetTimeLimit`.

- If at least one of the ramp-up timing parameters is set to a finite value, then CPLEX behaves as it does when the value of this parameter is 1 (one): first ramping up, then switching to distributed parallel tree search.
- If both of the ramp-up timing parameters are at their default values (effectively an infinite amount of time), then CPLEX behaves as it does when the value of this parameter is 2: concurrent MIP.

Note: CPLEX behavior at default values is subject to change in future releases.
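The rules above can be condensed into a small decision table. The sketch below just encodes the four `RampupDuration` cases against finite vs. infinite time limits; the return strings are invented labels, not CPLEX output.

```python
# Summary of the RampupDuration rules described above. INF stands for the
# "effectively infinite" default value of both ramp-up time limits.

INF = 1e75  # documented default of RampupTimeLimit / RampupDetTimeLimit

def rampup_behavior(duration, time_limit=INF, det_time_limit=INF):
    finite = time_limit < INF or det_time_limit < INF
    if duration == -1:
        return "no ramp-up: distributed tree search starts immediately"
    if duration == 2:
        return "concurrent MIP: ramp-up never ends"
    if duration == 1:
        return ("ramp up until the smaller time limit" if finite
                else "ramp up until internal criteria are met")
    if duration == 0:  # default
        return ("ramp up until the smaller time limit" if finite
                else "concurrent MIP: ramp-up never ends")
    raise ValueError("unknown RampupDuration value")

print(rampup_behavior(0))                 # both limits at default
print(rampup_behavior(0, time_limit=60))  # one finite limit set
```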

This parameter, `RampupDetTimeLimit`, specifies a limit, measured in deterministic ticks, on the amount of time to spend in the ramp-up phase of distributed parallel optimization. It is effective only when the ramp-up duration parameter has a value of 0 (zero, the default automatic setting, in which CPLEX decides the ramp-up duration) or 1 (one, dynamic ramp-up). See the ramp-up duration parameter for more detail about the conditions for time limits in ramp-up.


The value 0 (zero) specifies that no time should be spent in ramp up.

Any positive number strictly greater than zero specifies a time limit in deterministic ticks.

The default value is 1e+75 deterministic ticks.

This parameter, `RampupTimeLimit`, specifies a limit, in wall-clock seconds, on the amount of time to spend in the ramp-up phase of distributed parallel optimization. It is effective only when the ramp-up duration parameter has a value of 0 (zero, the default automatic setting, in which CPLEX decides the ramp-up duration) or 1 (one, dynamic ramp-up). See the ramp-up duration parameter for more detail about the conditions for time limits in ramp-up.

The value 0 (zero) specifies that no time should be spent in ramp up.

Any positive number strictly greater than zero specifies a time limit in seconds.

The default value is 1e+75 seconds.
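Putting the pieces together, a cplexd.opt that both names the workers and customizes ramp-up could look like this (the addresses, ports, and the 600-second limit are placeholders, not recommended values):

```text
computeserver 10.0.0.1:12345 10.0.0.2:12345
RampupDuration 1
RampupTimeLimit 600
```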

solver/distributed_mip_with_gams_cplex.txt · Last modified: 2017/05/22 07:48 by admin