I'm trying to estimate the total number of floating-point operations (FLOPs) of a neural network.
My problem is the following: I'm using a sigmoid activation, so I need to know how to count the FLOPs of the exponential function. I'm using TensorFlow, which relies on NumPy for the exp function.
I tried to dig into the NumPy code but didn't find the implementation ... I saw some questions here about fast implementations of the exponential, but they don't really help.
My guess is that it uses a Taylor series or a Chebyshev approximation.
Do you have any clue about this? And if so, an estimate of the FLOP count? I tried to find references on Google as well, but nothing really standardized ...
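This is not what NumPy actually does internally, but as a back-of-the-envelope estimate along the lines of your Taylor-series guess: a truncated Taylor series evaluated with Horner's rule costs roughly 3 FLOPs per term (one multiply, one divide, one add). The term count 12 below is an arbitrary choice for illustration:

```python
import math

def exp_taylor(x, n_terms=12):
    """Approximate exp(x) with a truncated Taylor series evaluated by
    Horner's rule: 1 + x/1*(1 + x/2*(1 + x/3*(...))).
    Each iteration costs 1 mul + 1 div + 1 add, so the whole
    approximation is roughly 3*(n_terms - 1) FLOPs."""
    acc = 1.0
    for k in range(n_terms - 1, 0, -1):
        acc = 1.0 + acc * x / k  # 1 mul + 1 div + 1 add per iteration
    return acc

# ~33 FLOPs for 12 terms under this crude counting
flop_estimate = 3 * (12 - 1)
```

Real libm implementations use range reduction plus a much shorter polynomial, so the true cost is lower than a naive Taylor count suggests, but the order of magnitude (tens of FLOPs) is a reasonable working assumption.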
Thanks a lot for your answers.

The x87 FPU has F2XM1 and FYL2X instructions that compute 2^x and y*log2(x) respectively, and each can be considered a single floating-point operation.

exp in NumPy is taken from C's math.h: github.com/numpy/numpy/blob/… Depending on the architecture, I think they have custom SIMD versions used in loops that will be faster (and you probably want to use those).
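To make the hardware-instruction counting convention concrete, here is a sketch (in plain Python, not how any libm is written) of the identity those x87 instructions exploit: exp(x) = 2^(x * log2(e)). Under that convention the whole thing is about one multiply plus one 2^x operation, i.e. on the order of 2 "FLOPs". Real implementations also need range reduction, since F2XM1 only handles arguments in [-1, 1]:

```python
import math

LOG2_E = math.log2(math.e)  # precomputed constant, not counted as a FLOP

def exp_via_pow2(x):
    # exp(x) = 2**(x * log2(e)): one multiply, then one 2**x operation.
    # On hardware with an F2XM1-style instruction this is roughly
    # 2 "FLOPs" under the single-instruction counting convention.
    return 2.0 ** (x * LOG2_E)
```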