I was using TensorFlow 2.1.0 to build a model, and while writing a custom layer that needs to convert a tensor to a NumPy array, a problem occurred. I distinctly remember that Tensor.numpy() is a real thing, so I must be doing something wrong. Could anyone tell me how to fix it? I'm still a noob at TensorFlow.

Here is the code (it is for a single layer, not the whole model):

import tensorflow as tf
import numpy as np

class CIN_Layer(tf.keras.layers.Layer):
    def __init__(self, in_shape):
        super(CIN_Layer, self).__init__()
        self.in_shape = in_shape

    #this is the custom part
    def get3DTensor(self, inputs, lastLayerOutput=None):
        print(type(inputs))
        inputs = inputs.numpy()  # FIX HERE: problem occurs here

        interaction = []
        if lastLayerOutput is None:
            lastLayerOutput = inputs.copy()
        else:
            lastLayerOutput = lastLayerOutput.numpy()

        for i in range(inputs.shape[0]):
            interaction.append(np.dot(inputs[i].reshape([-1,1]), lastLayerOutput[i].reshape([1,-1])))

        return tf.convert_to_tensor(np.array(interaction))

    def build(self, input_shape):
        self.kernel = self.add_weight('CIN_kernel', shape=[self.in_shape[-1] for i in range(3)])

    def call(self, inputs, lastLayerOutput=None):
        interaction = self.get3DTensor(inputs, lastLayerOutput)

        return tf.reduce_sum(tf.matmul(inputs, self.kernel))

inputs = tf.keras.layers.Input(shape=(5,10))
cin_layer = CIN_Layer(in_shape=(5,10))
lastLayerOutput = cin_layer(inputs) 
output = tf.keras.layers.Dense(1)(lastLayerOutput)  

model = tf.keras.Model(inputs=inputs, outputs=output)  
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()

If there is another way to insert some NumPy code into a TensorFlow model, please do tell.

1 Answer

In TensorFlow 2.x there are two kinds of tensor objects: symbolic (graph) tensors and EagerTensors. Only an EagerTensor has a numpy() method. EagerTensors are those whose values are readily available at runtime: for example, tf.ones((2, 3)) creates an EagerTensor because its value is known immediately, and any operation performed on it yields another EagerTensor. In your code, the inputs parameter of call is a symbolic tensor whose value is known only during graph execution (the forward pass), so you cannot call numpy() on it. During the forward pass you must do your operations using tensor ops only; you cannot alternate between tensors and NumPy arrays, because that would make it impossible to trace the graph for backpropagation.
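As a sketch of the difference, and of one way to keep your interaction computation inside the graph: the per-sample np.dot outer product in get3DTensor can be expressed with pure tensor ops such as tf.einsum, so nothing ever needs to leave tensor land. The helper name interaction_3d below is illustrative, not from the original code.

```python
import tensorflow as tf

# EagerTensor: the value exists right now, so .numpy() works.
eager = tf.ones((2, 3))
print(type(eager).__name__)   # an eager tensor type
print(eager.numpy().shape)    # (2, 3)

# A graph-friendly replacement for the NumPy loop in get3DTensor:
# flatten each sample and take a batched outer product with einsum.
# Everything stays a tensor, so the op remains differentiable.
def interaction_3d(inputs, last_layer_output=None):
    if last_layer_output is None:
        last_layer_output = inputs
    batch = tf.shape(inputs)[0]
    a = tf.reshape(inputs, [batch, -1])             # (batch, d)
    b = tf.reshape(last_layer_output, [batch, -1])  # (batch, d)
    return tf.einsum('bi,bj->bij', a, b)            # (batch, d, d)

out = interaction_3d(tf.ones((4, 5, 10)))
print(out.shape)  # (4, 50, 50)
```

If you truly need to run NumPy code on concrete values, tf.py_function can wrap an eager Python function inside the graph, but it limits graph optimization and model serialization, so a pure-tensor rewrite like the one above is usually preferable.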

1 Comment

Thanks for sharing that, it really helped. It appears I now have to find another way to finish this model... wondering whether I can get my diploma with this model design ( ´Д`)y━・~~
