First of all: your question was phrased rather badly (and your usage of linprog is uncommon; variable A is typically the 2d one).
Usually I'm hesitant to work on this kind of question (very incomplete), but I think I understand the problem now.
What's the big problem with your question/formulation?
- You did not tell us where the 4th column of the left table came from! Are these values fixed (independent of the remaining table), or are they somehow correlated?
- After checking the values, I understood that these values (let's call them z) are a linear function of the decision variables, of the following form:
z = 1*x0 + 1200*x1 + 800*x2 + 800
- This is nowhere explained in your question, yet it is important here!
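A quick sanity check of this inferred relation, using the decision vector x = [0, 1, 1] (which will turn out to be the optimum):

```python
import numpy as np

# evaluate z = 1*x0 + 1200*x1 + 800*x2 + 800 for x = [0, 1, 1]
coeffs = np.array([1, 1200, 800])
x = np.array([0, 1, 1])
z = coeffs @ x + 800   # constant offset of 800
print(z)               # 2800
```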
How to handle the problem then?
Correct approach: ignore the constant offset of 800 & correct the objective later
The following code (using the variable names in the common way) ignores the offset; a constant offset does not change the optimal decision variables. Therefore, you will want to add the offset to your obtained solution later, or ignore the returned objective and compute it yourself using the formula above!
from scipy.optimize import linprog
import numpy as np

a_ub = [[200, 100, 400], [200, 200, 300], [400, 400, 200]]
b_ub = [600, 600, 600]
c = [-1, -1200, -800]  # negated, since linprog minimizes

result = linprog(c, a_ub, b_ub, bounds=(0, 1))
print('*** result: (negated, offset of 800 missing) ***\n')
print(result)
Result:
*** result: (negated, offset of 800 missing) ***
fun: -2000.0
message: 'Optimization terminated successfully.'
nit: 3
slack: array([ 100., 100., 0., 1., 0., 0.])
status: 0
success: True
x: array([ 0., 1., 1.])
After negating the objective, just add the offset and you obtain your desired result of 2800!
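A minimal sketch of that post-processing step (re-solving the LP from above, then undoing the negation and re-adding the constant offset):

```python
from scipy.optimize import linprog

a_ub = [[200, 100, 400], [200, 200, 300], [400, 400, 200]]
b_ub = [600, 600, 600]
c = [-1, -1200, -800]

result = linprog(c, a_ub, b_ub, bounds=(0, 1))
true_obj = -result.fun + 800   # undo negation, re-add the offset of 800
print(true_obj)                # 2800.0
```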
Silly approach: add an auxiliary variable, fixed to 1, to describe the offset
""" Alternative formulation
We introduce a 4th variable W, which is set to 1 and introduce the offset
into the objective
"""
from scipy.optimize import linprog
import numpy as np
a_ub = [[200, 100, 400, 0], [200, 200, 300, 0], [400, 400, 200, 0]] # no upper bound for W
b_ub = [600, 600, 600]
c = [-1, -1200, -800, -800] # W*800 added to objective
a_eq = [[0, 0, 0, 1]] # W=1 fixed
b_eq = [1] # "" ""
result = linprog(c,a_ub,b_ub, a_eq, b_eq, bounds=(0,1))
print('*** alternative result (negated, including offset): ***\n')
print(result)
Result:
*** alternative result (negated, including offset): ***
fun: -2800.0
message: 'Optimization terminated successfully.'
nit: 4
slack: array([ 100., 100., 0., 1., 0., 0., 0.])
status: 0
success: True
x: array([ 0., 1., 1., 1.])
But will this work in general?
The linear-programming approach will not work in general. As ayhan mentioned in the comments, a (totally) unimodular constraint matrix would guarantee that an LP solver returns an optimal integral solution. But without some rules about your data, this unimodularity property cannot be assumed in general!
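To make that concrete: total unimodularity means every square submatrix has a determinant of -1, 0 or 1. A brute-force check (my own helper, only feasible for tiny matrices) shows that the constraint matrix of this question is far from totally unimodular:

```python
from itertools import combinations
import numpy as np

def is_totally_unimodular(A):
    """Brute-force check: every square submatrix must have
    determinant in {-1, 0, 1}. Exponential, so tiny matrices only."""
    A = np.asarray(A)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

print(is_totally_unimodular(np.eye(2)))                # True
print(is_totally_unimodular([[200, 100, 400],
                             [200, 200, 300],
                             [400, 400, 200]]))        # False
```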
Look at the following example code, which generates a random instance and compares the results of an LP solver and a MIP solver. The very first random instance already fails, because the LP solution is fractional!
Of course you could then switch to a MIP solver, but:
- MIP problems are hard in general
- There is no MIP solver within numpy/scipy (at the time of writing)
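As an aside: the second bullet was true when written, but SciPy >= 1.9 does ship a MIP solver, scipy.optimize.milp. A sketch of solving the original instance as a true 0/1 program with it:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

a_ub = np.array([[200, 100, 400], [200, 200, 300], [400, 400, 200]])
b_ub = np.array([600, 600, 600])
c = np.array([-1, -1200, -800])

res = milp(c,
           constraints=LinearConstraint(a_ub, ub=b_ub),
           integrality=np.ones(3),    # all variables integer ...
           bounds=Bounds(0, 1))       # ... and restricted to [0, 1] -> binary
print(-res.fun + 800)                 # objective incl. offset: 2800.0
```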
Code:
from scipy.optimize import linprog
import numpy as np
from cylp.cy import CyCbcModel, CyClpSimplex
from cylp.py.modeling.CyLPModel import CyLPModel, CyLPArray
np.random.seed(1)
def solve_lp(a_ub, b_ub, c):
result = linprog(c,a_ub,b_ub, bounds=(0,1))
print(result)
return result.fun
def solve_mip(a_ub, b_ub, c):
a_ub, b_ub, c = np.matrix(a_ub), np.array(b_ub), np.array(c)
n = b_ub.shape[0]
model = CyLPModel()
x = model.addVariable('x', n, isInt=True)
model += a_ub*x <= b_ub
for i in range(n):
model += 0 <= x[i] <= 1
c = CyLPArray(c)
model.objective = c*x
s = CyClpSimplex(model)
cbcModel = s.getCbcModel()
cbcModel.branchAndBound()
print('sol: ', cbcModel.primalVariableSolution['x'])
print('obj: ', cbcModel.objectiveValue)
return cbcModel.objectiveValue
def solve_random_until_unequal():
while True:
a_ub = np.random.randint(0, 1000, size=(3,3))
b_ub = np.random.randint(0, 1000, size=3)
c = [-1, -1200, -800]
lp_obj = solve_lp(a_ub, b_ub, c)
mip_obj = solve_mip(a_ub, b_ub, c)
print('A_ub: ', a_ub)
print('b_ub: ', b_ub)
assert np.isclose(lp_obj, mip_obj)
solve_random_until_unequal()
Output:
fun: -225.29335071707953
message: 'Optimization terminated successfully.'
nit: 1
slack: array([ 9.15880052e+02, 0.00000000e+00, 7.90482399e+00,
1.00000000e+00, 8.12255541e-01, 1.00000000e+00])
status: 0
success: True
x: array([ 0. , 0.18774446, 0. ])
Clp0000I Optimal - objective value 0
Clp0000I Optimal - objective value 0
Node 0 depth 0 unsatisfied 0 sum 0 obj 0 guess 0 branching on -1
Clp0000I Optimal - objective value 0
Cbc0004I Integer solution of -0 found after 0 iterations and 0 nodes (0.00 seconds)
Cbc0001I Search completed - best objective -0, took 0 iterations and 0 nodes (0.00 seconds)
Cbc0035I Maximum depth 0, 0 variables fixed on reduced cost
Clp0000I Optimal - objective value 0
Clp0000I Optimal - objective value 0
('sol: ', array([ 0., 0., 0.]))
('obj: ', -0.0)
('A_ub: ', array([[ 37, 235, 908],
[ 72, 767, 905],
[715, 645, 847]]))
('b_ub: ', array([960, 144, 129]))
Traceback (most recent call last):
File "so_linprog_v3.py", line 45, in <module>
solve_random_until_unequal()
File "so_linprog_v3.py", line 43, in solve_random_until_unequal
assert np.isclose(lp_obj, mip_obj)
AssertionError
As a final remark: your usage of linprog is at least weird. A should be 2d, b should be 1d! So (without too much analysis on my side), your LP formulation does not make any sense to me, especially when seeing the tables in your post.