
I have a 1-D function that takes a long time to compute over a big 2-D array of x values, so it is much easier to build an interpolating function with SciPy and then evaluate y with it, which is much faster. However, I cannot use the interpolating function on arrays that have more than one dimension.

Example:

# First, I create the interpolation function on the domain I want to work in
import numpy as np
import scipy as sp
import scipy.interpolate  # makes sp.interpolate available

x = np.arange(1, 100, 0.1)
f = np.exp(x)  # a complicated function
f_int = sp.interpolate.InterpolatedUnivariateSpline(x, f, k=2)

# Now, elsewhere in the code, I do this
x = [[13, ..., 1], [99, ..., 45], [33, ..., 98], ..., [15, ..., 65]]
y = f_int(x)
# I want this to return y = [[f_int(13), ..., f_int(1)], ..., [f_int(15), ..., f_int(65)]]

But it returns:

ValueError: object too deep for desired array

I know I could loop over all the elements of x, but I don't know if that is the best option...
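
The loop version would be something like this (just a sketch, assuming x is first converted to a NumPy array):

x = np.asarray(x, dtype=float)
y = np.empty_like(x)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        y[i, j] = f_int(x[i, j])  # evaluate the spline one element at a time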

Thanks!

EDIT:

A function like this would also do the job:

def vector_op(function, values):
    # Flatten, apply the 1-D function, then restore the original shape
    orig_shape = values.shape
    values = np.reshape(values, values.size)
    return np.reshape(function(values), orig_shape)
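
For example, with the spline from above (assuming x is a NumPy array rather than a nested list):

x = np.array([[13.0, 1.0], [99.0, 45.0]])
y = vector_op(f_int, x)  # y has the same shape as x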

I've tried np.vectorize, but it is too slow...

3 Answers


If f_int wants one-dimensional data, you should flatten your input, feed it to the interpolator, then reconstruct your original shape:

>>> import numpy as np
>>> import scipy.interpolate
>>> x = np.arange(1, 100, 0.1)
>>> f = 2 * x  # a simple function to check that the results are good
>>> f_int = scipy.interpolate.InterpolatedUnivariateSpline(x, f, k=2)

>>> x = np.arange(25).reshape(5, 5) + 1
>>> x
array([[ 1,  2,  3,  4,  5],
       [ 6,  7,  8,  9, 10],
       [11, 12, 13, 14, 15],
       [16, 17, 18, 19, 20],
       [21, 22, 23, 24, 25]])
>>> x_int = f_int(x.reshape(-1)).reshape(x.shape)
>>> x_int
array([[  2.,   4.,   6.,   8.,  10.],
       [ 12.,  14.,  16.,  18.,  20.],
       [ 22.,  24.,  26.,  28.,  30.],
       [ 32.,  34.,  36.,  38.,  40.],
       [ 42.,  44.,  46.,  48.,  50.]])

x.reshape(-1) does the flattening, and .reshape(x.shape) restores the original form.
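
As a quick sanity check, continuing the session above, the flattened evaluation agrees with applying the underlying function directly:

>>> np.allclose(f_int(x.reshape(-1)).reshape(x.shape), 2 * x)
True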


3 Comments

that will probably give weirdness around the edges
I was thinking that the interpolation would change the values at the edges of the original array once it is flattened, interpolated, and reshaped. Basically, the left edges of the original array would get interpolated together with the right edges of the next row, which may or may not cause problems.
@reptilicus That's not how interpolation works here. f_int uses interpolation internally, but when applied to an array it only considers one value at a time, not its neighbours.

I think you want a vectorized function in numpy:

import numpy as np

# create some random test data
test = np.random.random((100, 100))

# a normal Python function that you want to apply
def myFunc(i):
    return np.exp(i)

# now vectorize the function so that it will work on numpy arrays
myVecFunc = np.vectorize(myFunc)

result = myVecFunc(test)
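
Note that np.vectorize is essentially a for loop under the hood (the NumPy docs say it is provided for convenience, not performance), so it may be slow on large arrays. For a ufunc like np.exp you can skip the wrapper entirely:

result = np.exp(test)  # ufuncs already apply element-wise to whole arrays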



I would use a combination of a list comprehension and map (there might be a way to use two nested maps that I'm missing)

In [24]: x
Out[24]: [[1, 2, 3], [1, 2, 3], [1, 2, 3]]

In [25]: [map(lambda a: a*0.1, x_val) for x_val in x]
Out[25]: 
[[0.1, 0.2, 0.30000000000000004],
 [0.1, 0.2, 0.30000000000000004],
 [0.1, 0.2, 0.30000000000000004]]

This is just for illustration purposes; replace lambda a: a*0.1 with your function, f_int.
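
Note that the session above is Python 2; in Python 3, map returns a lazy iterator, so wrap it in list():

y = [list(map(lambda a: a * 0.1, x_val)) for x_val in x]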

