Approaches to solve mathematical programming problems


Classes of mathematical programming problems

We can now propose a classification of the MP problems formulated in the very general form (1.1). Recall that the set X is defined by the bounds and types (integer, continuous, or discrete) of the variables, and by the constraints of the problem, which are usually of the form g(x) ≤ 0 or g(x) = 0.
• Linear Programming (LP): the objective function and the constraints are linear, and the variables are continuous;
• Mixed Integer Linear Programming (MILP or MIP): the objective function and the constraints are linear, and at least one variable is integer. (Strictly speaking, MILP is a special case of NLP, since the integrality of a variable xj can be expressed by the nonlinear constraint sin(πxj) = 0. Nevertheless, MILP is treated separately from NLP because there exist specific techniques to solve integer problems, as shown in the next section.) If all the variables are integer, the term ILP (Integer Linear Programming) is sometimes used in place of MILP;
• convex Nonlinear Programming (cNLP): the objective function and the constraints are convex, with at least one of them nonlinear, and the variables are continuous;
• Nonlinear Programming (NLP): at least one of the objective function and the constraints is nonlinear, and the variables are continuous;
• convex Mixed Integer Nonlinear Programming (cMINLP): the objective function and the constraints are convex, with at least one of them nonlinear, and at least one variable is integer;
• Mixed Integer Nonlinear Programming (MINLP): at least one of the objective function and the constraints is nonlinear, and at least one variable is integer.
We can further write the following relationships: LP ⊂ MILP ⊂ cMINLP ⊂ MINLP and LP ⊂ cNLP ⊂ NLP ⊂ MINLP. The meaning is that if a solver can be employed for a given class of problems C, then it can also be employed for problems of every class D ⊂ C. For instance, a MINLP solver can be employed to solve an NLP problem, but an NLP solver working on a MINLP instance will ignore the integrality constraints on the variables. These relationships also give an intuitive idea of the complexity of the problems in the different categories: in general, D ⊂ C means that the problems in D are easier to solve than those in C. Hence, LP problems are usually the easiest to solve, whereas MINLPs are the most difficult. Note that convex problems are usually easier to solve than nonconvex ones, since, in the former, local optima are also global optima, as explained earlier.
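As a small illustration of this point (not part of the thesis; a sketch assuming SciPy ≥ 1.9 with its `linprog` interface, and a toy instance chosen only for the example), the snippet below shows how a continuous solver applied to an integer problem simply ignores integrality, returning a fractional optimum, whereas declaring the variables integer recovers the true MILP optimum:

```python
from scipy.optimize import linprog

# Toy MILP:  max x1 + x2   s.t.   2*x1 + 2*x2 <= 3,   x1, x2 >= 0 integer.
# linprog minimises, so the objective coefficients are negated.
c = [-1, -1]
A_ub = [[2, 2]]
b_ub = [3]

# A continuous (LP) solver simply drops the integrality requirement:
relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(relaxed.x.sum())      # 1.5: a fractional optimum

# Declaring the variables integer recovers the MILP optimum:
integral = linprog(c, A_ub=A_ub, b_ub=b_ub, integrality=[1, 1])
print(integral.x.sum())     # 1.0: the best integer solution
```

The gap between the two values (1.5 versus 1.0) is exactly the information an NLP solver loses when run on a MILP instance.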
It is possible to go further into detail with the categorization of MP problems, but for this thesis the previous classification suffices. The clustering problems presented in Chapter 2 are MINLPs and cMINLPs, and we reformulate them as MILPs. In Chapter 3 the Packing Equal Circles in a Square (PECS) problem is an example of a nonconvex NLP problem, and it is reformulated into another nonconvex NLP problem. Finally, the problems presented in Chapter 4 can be either MINLPs or NLPs, and they are reformulated as MILPs and LPs, respectively. At this point, the most natural questions are the following: what techniques are used to solve these MP problems, and how does the fact that a problem falls into one of the categories presented above affect the choice of the solution method? This is the subject of the next section.

Approaches to solve mathematical programming problems

In this section we present a brief summary of the techniques employed to solve MP problems belonging to the different classes presented in the previous section. Unless otherwise specified, the variables are assumed to belong to R.

Linear programming

In an LP problem the constraints and the objective function are linear. In its standard form, an LP problem can be expressed as:

min  cT x
s.t. Ax = b
     x ≥ 0,
where cT is the n-dimensional row vector of objective function coefficients, A is the m × n constraint matrix, b is the m-dimensional column vector representing the right-hand side of the constraints, and x is the n-dimensional column vector of the nonnegative variables of the problem. The feasible region of such a problem is a convex set called a convex polyhedron, having a finite number of vertices (i.e., points which cannot be expressed as a strict convex combination of any two other points of the polyhedron). If the polyhedron is bounded, it is called a polytope. The importance of these concepts in LP is that the optimal solution of an LP problem corresponds to a vertex of the polytope representing the feasible region. This is the key observation at the base of the simplex algorithm, which starts from a vertex of the polyhedron and moves to an adjacent vertex as long as the objective function improves; the procedure stops when the vertex representing the optimal solution is reached. This is the main idea, but many details are omitted here (e.g., how to perform the move from a vertex to a better one, how to recognize that the optimal vertex has been found). For more information, see [56, 70].
Although the simplex algorithm has exponential worst-case complexity, it is efficient in practice. However, in 1979 Khachiyan proved that LP can be solved in polynomial time by proposing the ellipsoid method. In 1984 Karmarkar proposed a more efficient polynomial-time interior point method to solve LP problems [139]. Interior point methods find the optimal solution by moving through the interior of the polytope representing the feasible region, rather than along its vertices as the simplex method does. Regarding efficiency, it is not clear whether the simplex or the interior point algorithm performs better in general, since this depends on the problem itself.
As a consequence, LP solvers like CPLEX implement both methods. LPs are important because many real-world problems can be modeled in this way. Moreover, LPs arise during the solution process of other categories of MP problems, such as MILPs.
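As a concrete sketch of the standard form above (not from the thesis; it assumes SciPy's `linprog` with its HiGHS backend, and toy data chosen only for illustration), the snippet below solves min cT x s.t. Ax = b, x ≥ 0, once with a simplex-type algorithm and once with an interior point method, mirroring the two families of algorithms just discussed:

```python
from scipy.optimize import linprog

# Standard-form LP:  min c^T x  s.t.  A x = b,  x >= 0.
# Toy instance:  min x1 + 2*x2   s.t.   x1 + x2 = 1,   x1, x2 >= 0.
c = [1.0, 2.0]          # objective coefficients (row vector c^T)
A_eq = [[1.0, 1.0]]     # m x n constraint matrix (m=1, n=2)
b_eq = [1.0]            # right-hand side

# linprog's default variable bounds are already x >= 0.
# The HiGHS backend exposes both families of algorithms:
simplex = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs-ds")    # dual simplex
interior = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs-ipm")  # interior point
print(simplex.x, simplex.fun)   # optimal vertex (1, 0), objective value 1.0
```

Both runs reach the same optimal value, found at the vertex (1, 0) of the feasible segment, in line with the observation that the optimum of an LP lies at a vertex of the polytope.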


Table of contents:

1 Introduction 
1.1 Motivations
1.2 Mathematical programming
1.2.1 Classification of mathematical programming problems
Convexity
Classes of mathematical programming problems
1.2.2 Approaches to solve mathematical programming problems
Linear programming
Mixed integer linear programming
Nonlinear and convex nonlinear programming
Convex mixed integer nonlinear programming
Mixed integer nonlinear programming
1.3 Reformulations
1.3.1 Classification of reformulations
Exact reformulations
Narrowings
Relaxations
1.4 Contributions
2 Clustering in general and bipartite graphs 
2.1 Definitions and notation
2.2 Clustering based on modularity maximization
2.2.1 Hierarchical divisive heuristic
Reduction of number of variables and constraints
Binary decompositions
Symmetry breaking constraint
Numerical results
2.2.2 Extension to bipartite graphs
Fortet linearization
Square reformulation
Binary decomposition
Numerical results
2.3 Clustering based on strong and almost-strong conditions
2.3.1 Strong communities detection
2.3.2 Almost-strong communities detection
2.3.3 Comparison between SC and ASC
2.4 Conclusions
3 Circle packing in a square 
3.1 Mathematical programming formulations
3.2 Detection of symmetries for circle packing
3.2.1 Definitions and notation
3.2.2 Automatic symmetry detection
3.2.3 Symmetric structure of circle packing
3.3 Order symmetry breaking constraints
3.3.1 Weak constraints
3.3.2 Strong constraints
3.3.3 Mixed constraints
3.3.4 Numerical results
3.4 Other constraints
3.4.1 Fixing points symmetry breaking constraints
3.4.2 Bounds symmetry breaking constraints
3.4.3 Triangular inequality constraints
3.4.4 Numerical results
3.5 A conjecture about the reduction of the search space
3.6 Conclusions
4 Primal and dual convex relaxations for multilinear terms 
4.1 Definitions and notation
4.2 Primal relaxation
4.2.1 Bilinear terms
McCormick’s inequalities
Fortet inequalities
4.2.2 Trilinear terms: Meyer-Floudas inequalities
4.2.3 Quadrilinear terms
4.3 Dual relaxation
4.3.1 Example
4.4 Comparison and numerical results
4.5 Conclusions
5 Conclusions 

