$\begingroup$

I have an encoding of my problem as a linear integer program, potentially very big, in which all integer variables are binary; the binary variables make up roughly one third of the total number of variables. I am interested in feasibility only (there is no objective function).

It is a straightforward encoding (later referred to as basic), so it does not result in a strong formulation (if I'm not wrong with the terminology). I am trying to come up with some more optimised formulations (other than precomputing the bounds of all real variables). One idea was to add constraints on pairs of binary variables that rule out certain assignments to them, for example constraints of the form $b_1 + b_2 \leq 1$ or $b_1 + b_2 \geq 1$, so that the new formulation is equivalent to the original one (I could infer such constraints by inspecting the problem).
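To make "equivalent to the original" concrete, here is a minimal sketch in plain Python with a made-up constraint (not my actual encoding): the pairwise cut rules out no binary assignment that the basic constraint already allows, so over integer points the two formulations have the same feasible set.

```python
from itertools import product

# Hypothetical "basic" constraint: 3*b1 + 3*b2 <= 4 (it forbids b1 = b2 = 1).
basic = lambda b1, b2: 3 * b1 + 3 * b2 <= 4
# Inferred pairwise cut: b1 + b2 <= 1.
cut = lambda b1, b2: b1 + b2 <= 1

basic_feasible = {p for p in product((0, 1), repeat=2) if basic(*p)}
augmented_feasible = {p for p in basic_feasible if cut(*p)}

# The cut is redundant over binary points: same feasible assignments.
assert basic_feasible == augmented_feasible
print(sorted(basic_feasible))  # [(0, 0), (0, 1), (1, 0)]
```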

Now, when I compare the augmented formulation with the basic one, I get all possible kinds of results: sometimes an improvement, sometimes no significant difference, but also cases where the augmented formulation is considerably slower than the basic one (up to 3 times slower).

So my question is: how can adding constraints that prune the search space make the solving time worse? This happens even in cases where the formulation is infeasible, so that, as I understand it, the solver has to explore the whole search space anyway.

For illustration, I post a table with results, where "size" is half the number of binary variables in the encoding.

[Table of results omitted: solve times for the basic and augmented formulations at various sizes.]

It seems that there is a correlation with the number of added constraints. But in any case, looking at the second line with size 70, I don't understand why the added constraints don't help to solve the problem faster when the basic formulation already takes more than half an hour to solve. Perhaps somebody could explain how solvers make use of such constraints. (I was using Gurobi.)

$\endgroup$

2 Answers

$\begingroup$

First, adding constraints means that computing a basic feasible solution (to the LP relaxation) at each node will likely take longer, as the matrix being factored is larger. Any time you add constraints, the question is always whether they do something sufficiently useful to compensate for the extra drag.

Second, as Larry said, the extra constraints may change the way the search tree is constructed, by moving the LP solution at an arbitrary node to a different BFS, from which a different binary variable may be rounded up or down ("different" meaning different from the corresponding node in the search tree of the original model). Even though you are only checking feasibility, that may shift the solver from branching on an "important" variable early (one where fixing it to an integer value quickly leads to a node being pruned) to a "less important" variable (one where fixing it to either 0 or 1 still does not say much about whether a feasible solution can be constructed). This is simply a matter of luck.
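To illustrate the branching-order effect, here is a contrived toy search (nothing like Gurobi's actual branch and bound, and with made-up constraints): the system is infeasible purely because of one "important" variable, and the tree size depends heavily on when that variable is branched on.

```python
def search(order, check_partial):
    """Tiny DFS feasibility search over binary variables (a toy stand-in
    for branch and bound); returns (found_feasible, nodes_explored)."""
    nodes = 0

    def dfs(assignment):
        nonlocal nodes
        nodes += 1
        if not check_partial(assignment):
            return False                      # prune this subtree
        if len(assignment) == len(order):
            return True                       # complete feasible assignment
        var = order[len(assignment)]
        for val in (0, 1):
            assignment[var] = val
            if dfs(assignment):
                return True
            del assignment[var]
        return False

    return dfs({}), nodes

# Jointly infeasible toy constraints: x3 == 1 and x3 == 0.  Only variable
# x3 matters; the others are "unimportant".
def check_partial(a):
    if 3 in a:
        return a[3] == 1 and a[3] == 0        # always False once x3 is fixed
    return True

print(search([3, 0, 1, 2], check_partial))    # branch on x3 first: (False, 3)
print(search([0, 1, 2, 3], check_partial))    # branch on x3 last:  (False, 31)
```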

Redundant constraints frequently help a constraint solver make domain reductions, but I don't think there is any guarantee that they will help there either (and, again, the extra constraints eat some CPU cycles).

$\endgroup$
  • $\begingroup$ Thank you! Now, after having read the two answers, I think I understand! $\endgroup$ Commented May 29, 2019 at 8:36
$\begingroup$

If I'm understanding correctly, your constraints are redundant in the MIP formulation, meaning they do not change the feasible region -- any feasible solution to the "basic" problem is also feasible for the "augmented" problem, and vice versa. Unfortunately it can be hard to predict whether adding constraints like these will help or hurt the solution time. In some cases they will help guide the solver toward optimality more quickly, but in others they will just be "noise".

The key question to think about (in my opinion; maybe others will have different perspectives) is whether the new constraints change the feasible region of the LP relaxation. That is, are there solutions that are feasible for the LP relaxation of the basic problem but infeasible for the LP relaxation of the augmented problem? (Of course, all integer solutions that were feasible for the basic problem should still be feasible for the augmented problem.) If so, then you have managed to bring the LP feasible region closer to the convex hull of the IP feasible region, which is generally going to help the solver.
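A quick numerical sketch of this distinction, with a hypothetical constraint (not from the asker's model): a cut can be redundant over the integer points yet still cut off fractional points of the LP relaxation, which is exactly the case where it helps.

```python
# Hypothetical basic constraint: 2*b1 + 2*b2 <= 3.  Over binary points it
# already forbids b1 = b2 = 1, so the cut b1 + b2 <= 1 is redundant for
# the integer program -- but it does tighten the LP relaxation.
basic = lambda b1, b2: 2 * b1 + 2 * b2 <= 3
cut = lambda b1, b2: b1 + b2 <= 1

# Every binary assignment satisfying the basic constraint also satisfies
# the cut, so the IP feasible set is unchanged:
for b1 in (0, 1):
    for b2 in (0, 1):
        assert (not basic(b1, b2)) or cut(b1, b2)

# But the fractional point (0.75, 0.75) is feasible for the basic LP
# relaxation and is cut off by the added constraint:
assert basic(0.75, 0.75)
assert not cut(0.75, 0.75)
```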

$\endgroup$
  • $\begingroup$ Thanks for the answer! You said that "In some cases they will help guide the solver towards optimality more quickly". In my case I am only interested in checking (in)feasibility. Does what you said still hold? Also, I don't understand whether a solver is able to make good use of such constraints. If I were using a SAT solver to guide the search, I believe such clauses would be very useful, right? Whereas I don't know what kind of branching a MIP solver would do. $\endgroup$ Commented May 28, 2019 at 15:25
  • $\begingroup$ Yes, sorry -- same applies if you are only checking feasibility. I don't know much about SAT solvers. For MIP solvers, it's doing branch and bound, so it is solving LP relaxations of the MIP. If you can reduce the feasible region of the LP so that it more closely matches that of the MIP, that's helpful. $\endgroup$ Commented May 28, 2019 at 15:35
