
I use TensorFlow 2.1 with a custom layer, as follows:

import numpy as np
import tensorflow.keras.layers as KL

class MyLayer(KL.Layer):
    def __init__(self, name=None):
        super(MyLayer, self).__init__(name=name)
        self.conv = KL.Conv2D(32, 3)  # 32 filters, 3x3 kernel

    def call(self, inputs):
        outputs = self.conv(inputs)
        np.save('outputs.npy', outputs)  # fails: cannot convert a symbolic tensor
        return outputs

However, whether or not I decorate train_step with @tf.function, np.save fails with "cannot convert a symbolic tensor to a numpy array". If I change it to np.save('outputs.txt', outputs.numpy()) without using tf.function, it says that the Tensor object has no attribute numpy. Also, call() seems to be invoked twice when not using tf.function: first with a symbolic tensor, then with an eager tensor.

How do I save the tensor value inside call()?

  • Why not just save the output from outside the call method? Commented Jul 17, 2020 at 7:42
  • Because I use pretrained weights from the PyTorch version, and I want to verify all outputs layer by layer. Commented Jul 21, 2020 at 2:15
  • I don't understand how that changes anything. You have a MyLayer; when you create the model that uses it, grab its output and add that as an output of the model. Commented Jul 21, 2020 at 5:38
  • MyLayer is just an example. The actual layer I implemented is OctConv (github.com/facebookresearch/OctConv). My model consists of a series of OctConvs. Therefore, gathering all high/low frequency outputs makes the model's outputs very large. Commented Jul 21, 2020 at 7:16
  • In practice, the call method doesn't actually get called much; it just helps create the graph. If you cannot handle so many outputs at one time, then you might separate your model into multiple models and chain the outputs. Sounds like a pain, and I am curious to see if you get a good answer to this question. You might also look into how TensorBoard handles this, because it lets you monitor tensors. Commented Jul 21, 2020 at 14:14
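For reference, the approach suggested above — exposing intermediate activations as extra model outputs — might look like the following sketch. The layer names and shapes here are invented for illustration, not taken from the actual OctConv model:

```python
import tensorflow as tf
from tensorflow.keras import layers as KL

# A small functional model with hypothetical intermediate layers.
inp = tf.keras.Input(shape=(8, 8, 3))
h1 = KL.Conv2D(16, 3, padding="same", name="block1")(inp)
h2 = KL.Conv2D(32, 3, padding="same", name="block2")(h1)
final = KL.GlobalAveragePooling2D()(h2)

# A debug model that returns every intermediate tensor alongside the final one.
debug_model = tf.keras.Model(inp, [h1, h2, final])
outs = debug_model(tf.random.normal((1, 8, 8, 3)))
print([tuple(o.shape) for o in outs])  # [(1, 8, 8, 16), (1, 8, 8, 32), (1, 32)]
```

As the last comment notes, with many intermediate tensors this output list can get unwieldy, which is why the question looks for a way to save from inside call() instead.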

1 Answer


Keras models are implicitly compiled into static graphs, whether or not you use @tf.function in the call method. Consequently, all tensors are of type tf.Tensor rather than tf.EagerTensor, and therefore lack the numpy() method.

To overcome this, simply pass dynamic=True to the constructor of the model that uses the layer. You will then be able to use the numpy() method.

But remember, doing so may significantly increase training and inference times.
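As a rough sketch of the above (the model, shapes, and layer configuration here are invented for illustration), a subclassed model constructed with dynamic=True runs call() eagerly, so the np.save call from the question succeeds:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers as KL

class MyLayer(KL.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.conv = KL.Conv2D(32, 3, padding="same")

    def call(self, inputs):
        out = self.conv(inputs)
        # out is an EagerTensor here because the model is dynamic,
        # so .numpy() is available.
        np.save("outputs.npy", out.numpy())
        return out

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__(dynamic=True)  # disable graph tracing for this model
        self.layer = MyLayer()

    def call(self, inputs):
        return self.layer(inputs)

model = MyModel()
y = model(tf.random.normal((1, 8, 8, 3)))
print(np.load("outputs.npy").shape)  # (1, 8, 8, 32)
```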


2 Comments

I only found dynamic in the layer's constructor and run_eagerly as a model property. Should I set dynamic=True for each layer, or is setting run_eagerly for the whole model enough?
run_eagerly for the whole model is enough.
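A minimal sketch of the run_eagerly route, assuming a small invented model around the custom layer: passing run_eagerly=True to compile() makes fit() execute call() eagerly, while the initial functional-API build still traces with symbolic tensors, so the save is guarded with tf.executing_eagerly():

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers as KL

class MyLayer(KL.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.conv = KL.Conv2D(32, 3, padding="same")

    def call(self, inputs):
        out = self.conv(inputs)
        if tf.executing_eagerly():  # skip during symbolic graph building
            np.save("outputs.npy", out.numpy())
        return out

inp = tf.keras.Input(shape=(8, 8, 3))
out = KL.GlobalAveragePooling2D()(MyLayer()(inp))
out = KL.Dense(1)(out)
model = tf.keras.Model(inp, out)

# run_eagerly=True makes every train step run eagerly, so .numpy() works.
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
x = np.random.rand(4, 8, 8, 3).astype("float32")
y = np.random.rand(4, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```

Equivalently, you can set model.run_eagerly = True after compiling; both disable the graph compilation for training steps, with the performance cost noted in the answer.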
