I have an encoding of my problem as a mixed-integer linear program in which all integer variables are binary; the instances can be very large, and the binary variables make up roughly one third of the total number of variables. I am interested only in feasibility (there is no objective function).
It is a straightforward encoding (later referred to as the *basic* one), so it does not yield a strong formulation (if I have the terminology right). I am trying to come up with tighter formulations (other than precomputing bounds for all the continuous variables). One idea was to add constraints on pairs of binary variables that rule out certain assignments, for example constraints of the form $b_1 + b_2 \leq 1$ or $b_1 + b_2 \geq 1$, such that the new formulation is equivalent to the original one (I can infer these constraints by inspecting the problem).
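To make concrete which assignments each kind of pairwise constraint excludes, here is a minimal sketch (plain Python enumeration, not solver code; the helper name is my own):

```python
from itertools import product

def feasible_pairs(constraint):
    """Enumerate the (b1, b2) assignments satisfying a pairwise constraint."""
    return [(b1, b2) for b1, b2 in product((0, 1), repeat=2) if constraint(b1, b2)]

# b1 + b2 <= 1 ("at most one") rules out only the assignment (1, 1)
at_most_one = feasible_pairs(lambda b1, b2: b1 + b2 <= 1)

# b1 + b2 >= 1 ("at least one") rules out only the assignment (0, 0)
at_least_one = feasible_pairs(lambda b1, b2: b1 + b2 >= 1)
```

So each added constraint removes exactly one of the four possible assignments to the pair, which is why I expected them to only shrink the search space.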
Now, when I compare the augmented formulation with the basic one, I get every possible kind of result: sometimes an improvement, sometimes no significant difference, but also cases where the augmented formulation is considerably slower than the basic one (up to 3 times slower).
So my question is: how can adding constraints that prune the search space make the solving time worse? This happens even for infeasible instances, where, as I understand it, the solver has to explore the whole search space anyway.
For illustration, I post a table with results below, where *size* is half the number of binary variables in the encoding.
There seems to be a correlation with the number of added constraints. But in any case, I don't understand the second row with size 70: why don't the added constraints help when the basic formulation already takes more than half an hour to solve? Perhaps somebody could explain how solvers make use of such constraints. (I am using Gurobi.)
