
I have a function in python which looks like this:

import numpy as np

def fun(Gp, Ra, Mr, Pot, Sp, Mc, Keep):
    if Keep:
        return Pot * np.tanh((Gp + Ra + Mr + Mc) * Sp)

Assuming the following data:

import pandas as pd

dt_org = pd.DataFrame({"RA": [0.5, 0.8, 0.9],
                       "MR": [0.97, 0.95, 0.99],
                       "POT": [0.25, 0.12, 0.05],
                       "SP": [0.25, 0.12, 0.15],
                       "MC": [50, 75, 100],
                       "COUNTRY": ["GB", "IR", "GR"]})
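For later use with scipy, the per-row parameters can be pulled out of dt_org into a plain dict (a sketch; the row labels 'A'/'B'/'C' and the 'keep' flag, set to True for every row, are assumptions, since they do not appear in the DataFrame):

```python
import pandas as pd

dt_org = pd.DataFrame({"RA": [0.5, 0.8, 0.9],
                       "MR": [0.97, 0.95, 0.99],
                       "POT": [0.25, 0.12, 0.05],
                       "SP": [0.25, 0.12, 0.15],
                       "MC": [50, 75, 100],
                       "COUNTRY": ["GB", "IR", "GR"]})

# Build a parameter dict keyed by an assumed row label, dropping the
# non-numeric COUNTRY column and adding an assumed keep=True flag.
y = {label: {**dt_org.loc[i].drop("COUNTRY").to_dict(), "keep": True}
     for label, i in zip("ABC", dt_org.index)}

print(y["A"])
```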

I have 100 GP in total and I want to allocate all of them across the three rows in order to maximize the objective_function:

under the restriction that all three allocations are positive.

According to this post, scipy.optimize would be the way to go, but I am confused about how to write the problem down.

Update: my attempt

from scipy.optimize import minimize

y = {'A': {'RA': 0.5, 'MR': 0.97, 'POT': 0.25, 'SP': 0.25, 'MC': MC_1, 'keep': True},
     'B': {'RA': 0.8, 'MR': 0.95, 'POT': 0.12, 'SP': 0.12, 'MC': MC_2, 'keep': True},
     'C': {'RA': 0.9, 'MR': 0.99, 'POT': 0.05, 'SP': 0.15, 'MC': MC_3, 'keep': True}}

def objective_function(x):
    return -(fun(x[0], Ra=y['A']['RA'], Mr=y['A']['MR'], Pot=y['A']['POT'],
                 Sp=y['A']['SP'], Mc=y['A']['MC'], Keep=y['A']['keep']) +
             fun(x[1], Ra=y['B']['RA'], Mr=y['B']['MR'], Pot=y['B']['POT'],
                 Sp=y['B']['SP'], Mc=y['B']['MC'], Keep=y['B']['keep']) +
             fun(x[2], Ra=y['C']['RA'], Mr=y['C']['MR'], Pot=y['C']['POT'],
                 Sp=y['C']['SP'], Mc=y['C']['MC'], Keep=y['C']['keep']))

cons = ({'type': 'ineq', 'fun': lambda x:  x[0] + x[1] + x[2] - 100})

bnds = ((0, None), (0, None), (0, None))

minimize(objective_function, x0=[1, 1, 1], args=y, method='SLSQP', bounds=bnds,
         constraints=cons)

The problem now is that I get the error ValueError: Objective function must return a scalar, although the output of the fun function is a scalar.

UPDATE 2 (after @Cleb's comment): I changed the function to:

def objective_function(x, y):
    temp = -(fun(x[0], Ra=y['A']['RA'], Mr=y['A']['MR'], Pot=y['A']['POT'],
                 Sp=y['A']['SP'], Mc=y['A']['MC'], Keep=y['A']['keep']) +
             fun(x[1], Ra=y['B']['RA'], Mr=y['B']['MR'], Pot=y['B']['POT'],
                 Sp=y['B']['SP'], Mc=y['B']['MC'], Keep=y['B']['keep']) +
             fun(x[2], Ra=y['C']['RA'], Mr=y['C']['MR'], Pot=y['C']['POT'],
                 Sp=y['C']['SP'], Mc=y['C']['MC'], Keep=y['C']['keep']))

    print("GP for the 1st: " + str(x[0]))
    print("GP for the 2nd: " + str(x[1]))
    print("GP for the 3rd: " + str(x[2]))
    return temp

cons = ({'type': 'ineq', 'fun': lambda x:  x[0] + x[1] + x[2] - 100})

bnds = ((0, None), (0, None), (0, None))

Now there are two problems:

  1. the values of x[0], x[1], x[2] are really close to each other
  2. the sum of x[0], x[1], x[2] is over 100
  • Read their tutorial (scipy.optimize; tutorial != API docs; it's quite good) and try something. You may be confused, but not trying and showing anything always looks like: write that code for me. Commented Jan 17, 2018 at 14:02
  • Please see the update. Commented Jan 17, 2018 at 15:42
  • That's code, but there is no description of the potential problems it has. Commented Jan 17, 2018 at 15:52
  • I might miss something, but in your objective_function you only pass x but not y; in fun you only pass parameters but no x (this function seems to be independent of x, which seems funky). And I also agree with sascha: an actual question would help to help you :) Commented Jan 17, 2018 at 19:14
  • @Cleb I pass y via the args argument of the minimize function. The problem now is that I get the error ValueError: Objective function must return a scalar, although the output of the fun function is a scalar. Commented Jan 18, 2018 at 15:02

1 Answer


There is a general issue regarding your objective function that explains why the values you obtain are very close to each other; it is discussed below.

If we first look at the technical aspect, the following works fine for me:

import numpy as np
from scipy.optimize import minimize


def func(Gp, Ra, Mr, Pot, Sp, Mc, Keep):
    if Keep:
        return Pot * np.tanh((Gp + Ra + Mr + Mc) * Sp)


def objective_function(x, y):

    temp = -(func(x[0], Ra=y['A']['RA'], Mr=y['A']['MR'], Pot=y['A']['POT'], Sp=y['A']['SP'], Mc=y['A']['MC'], Keep=y['A']['keep']) +
             func(x[1], Ra=y['B']['RA'], Mr=y['B']['MR'], Pot=y['B']['POT'], Sp=y['B']['SP'], Mc=y['B']['MC'], Keep=y['B']['keep']) +
             func(x[2], Ra=y['C']['RA'], Mr=y['C']['MR'], Pot=y['C']['POT'], Sp=y['C']['SP'], Mc=y['C']['MC'], Keep=y['C']['keep']))

    return temp


y = {'A': {'RA': 0.5, 'MR': 0.97, 'POT': 0.25, 'SP': 0.25, 'MC': 50., 'keep': True},
     'B': {'RA': 0.8, 'MR': 0.95, 'POT': 0.12, 'SP': 0.12, 'MC': 75., 'keep': True},
     'C': {'RA': 0.9, 'MR': 0.99, 'POT': 0.05, 'SP': 0.15, 'MC': 100., 'keep': True}}

cons = ({'type': 'ineq', 'fun': lambda x:  x[0] + x[1] + x[2] - 100.})

bnds = ((0., None), (0., None), (0., None))

print(minimize(objective_function, x0=np.array([1., 1., 1.]), args=y, method='SLSQP', bounds=bnds, constraints=cons))

This will print

    fun: -0.4199999999991943
     jac: array([ 0.,  0.,  0.])
 message: 'Optimization terminated successfully.'
    nfev: 6
     nit: 1
    njev: 1
  status: 0
 success: True
       x: array([ 33.33333333,  33.33333333,  33.33333333])

As you can see, x nicely sums up to 100.
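Note that an 'ineq' constraint only enforces x[0] + x[1] + x[2] - 100 >= 0, i.e. the sum may also exceed 100. If the allocations must add up to exactly 100, an 'eq' constraint is the safer choice. A minimal sketch with a toy objective (not the one above; -sum(sqrt(x)) is chosen only because it is not flat, so the constraint actually binds):

```python
import numpy as np
from scipy.optimize import minimize

# Equality constraint: the allocations must sum to exactly 100.
cons_eq = ({'type': 'eq', 'fun': lambda x: x[0] + x[1] + x[2] - 100.},)

# Toy objective: maximize sum(sqrt(x)), i.e. minimize its negative.
res = minimize(lambda x: -np.sum(np.sqrt(x)),
               x0=np.array([30., 30., 40.]),   # feasible starting point
               method='SLSQP',
               bounds=((0., None),) * 3,
               constraints=cons_eq)

print(res.x, res.x.sum())  # the solution sums to 100 (within tolerance)
```

For this concave objective the optimum is the equal split, roughly [33.33, 33.33, 33.33].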

If you now change bnds to e.g.

bnds = ((40., 50), (0., None), (0., None))

then the result will be

     fun: -0.419999999998207
     jac: array([ 0.,  0.,  0.])
 message: 'Optimization terminated successfully.'
    nfev: 6
     nit: 1
    njev: 1
  status: 0
 success: True
       x: array([ 40.,  30.,  30.])

Again, the constraint is met.

One can also see that the objective value is the same in both cases. That is because Mc is large, so the argument of np.tanh is large and np.tanh effectively always returns 1.0. This implies that func always just returns the value Pot for each of the three dictionaries in y. If you sum up the three corresponding values

0.25 + 0.12 + 0.05

you indeed get the value 0.42, which is exactly the value the optimization returns.
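The saturation is easy to verify with the numbers from entry 'A' above: regardless of the Gp allocation, the argument of np.tanh is already far past the point where the function flattens out.

```python
import numpy as np

# For arguments above ~13, tanh is within about 1e-11 of 1.0 in float64.
for gp in (0., 33.3, 100.):
    arg = (gp + 0.5 + 0.97 + 50.) * 0.25   # entry 'A': Gp + Ra + Mr + Mc, times Sp
    print(gp, arg, np.tanh(arg))           # tanh is essentially 1.0 every time

# Hence the objective is flat in Gp and the optimizer cannot distinguish
# between allocations: the total is always 0.25 + 0.12 + 0.05 = 0.42.
```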


7 Comments

For these inputs: y = {'A': {'RA': 0.5, 'MR': 0.97, 'POT': 0.99, 'SP': 0.5, 'MC': 50., 'keep': True}, 'B': {'RA': 0.8, 'MR': 0.95, 'POT': 0, 'SP': 0.0000001, 'MC': 7., 'keep': True}, 'C': {'RA': 0.9, 'MR': 0.99, 'POT': 0.05, 'SP': 0.0000001, 'MC': 1., 'keep': True}} with cons = ({'type': 'ineq', 'fun': lambda x: x[0] + x[1] + x[2] - 100.}) and bnds = ((0., None), (0., None), (0., None)), I get x: array([ 33.33333333, 33.33333333, 33.33333333]), which cannot be correct because y['B']['POT'] is 0, so it should get 0 GP, right?
@quant: This result is totally fine: with Pot=0, func returns 0, so only the other two entries in y contribute to your objective. Apparently, the constraint you apply is not limiting, so it does not matter whether you set them to 33 or 5, i.e. you don't "waste" resources by allocating Gp to a component that contributes 0. You can easily test this by setting your constraint to 10 instead of 100; then your objective value does not change, as your constraint is not limiting. If you decrease it further, change your initial conditions in such a way that they meet your constraint!
But I want to maximize the sum of these three quantities. So allocating GP to a quantity that will give 0 in any case is a waste of resources, since it will not add anything to the sum, right? Which means the objective_function would have a larger (in absolute) value if it allocated the 100 GP to the other two quantities instead of all three, no?
@quant: No, as explained above. Your constraint is not limiting. When you set Gp to 100 you receive the same objective value as for Gp=10, so you don't lose anything if you distribute your resource to a component that does not contribute to your objective. The tanh saturates quite quickly, so func will often just return the same value independent of your input.
I saw what my misunderstanding was. The maximum of the tanh function is reached at around 2, and I gave big values to y. Now I get it. Thanks!
