I'm trying to adapt the Keras VAE example into a deeper network by adding one more layer.

Original code: the original VAE example from the Keras repo.

CHANGES:

batch_size = 200
original_dim = 784
latent_dim = 2
intermediate_dim_deep = 384 # <<<<<<<
intermediate_dim = 256
nb_epoch = 20
#
x = Input(batch_shape=(batch_size, original_dim))
x = Dense(intermediate_dim_deep, activation='relu')(x) # NEW LAYER <<<<<<
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)
#
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.)
    return z_mean + K.exp(z_log_var / 2) * epsilon

# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
#
# we instantiate these layers separately so as to reuse them later
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_d = Dense(intermediate_dim_deep, activation='rely') # NEW LAYER <<<<<<
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
d_decoded = decoder_d(h_decoded) # ADDED ONE MORE STEP HERE <<<<<<<
x_decoded_mean = decoder_mean(d_decoded)
#
def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss
#
vae = Model(x, x_decoded_mean)
vae.compile(optimizer='rmsprop', loss=vae_loss)
#####

Compiling gives me this error:

/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py:1615: UserWarning: Model inputs must come from a Keras Input layer, they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "model_1" was not an Input tensor, it was generated by layer dense_1.
Note that input tensors are instantiated via `tensor = Input(shape)`.
The tensor that caused the issue was: None
  str(x.name))
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-8-c9010948cdee> in <module>()
----> 1 vae = Model(x, x_decoded_mean)
      2 vae.compile(optimizer='rmsprop', loss=vae_loss)

/usr/local/lib/python2.7/dist-packages/keras/engine/topology.pyc in __init__(self, input, output, name)
   1788                                 'The following previous layers '
   1789                                 'were accessed without issue: ' +
-> 1790                                 str(layers_with_complete_input))
   1791                     for x in node.output_tensors:
   1792                         computable_tensors.append(x)

Exception: Graph disconnected: cannot obtain value for tensor input_1 at layer "input_1". The following previous layers were accessed without issue: []

I've looked at the other examples in the repo and this seems like a valid way to do it. Am I missing something?

1 Answer

When adding the new hidden layer, you're overwriting the x variable, so you're left without an input layer. Also, is 'rely' a valid activation option?
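
For reference, a minimal corrected sketch of the snippet (assuming Keras 1.x, which matches the traceback; the variable name d for the new layer's output is my own choice, not from the original):

from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K
from keras import objectives

batch_size = 200
original_dim = 784
latent_dim = 2
intermediate_dim_deep = 384
intermediate_dim = 256

x = Input(batch_shape=(batch_size, original_dim))       # keep x bound to the Input tensor
d = Dense(intermediate_dim_deep, activation='relu')(x)  # new layer gets its own name
h = Dense(intermediate_dim, activation='relu')(d)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)

def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.)
    return z_mean + K.exp(z_log_var / 2) * epsilon

z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

decoder_h = Dense(intermediate_dim, activation='relu')
decoder_d = Dense(intermediate_dim_deep, activation='relu')  # 'rely' -> 'relu'
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
d_decoded = decoder_d(h_decoded)
x_decoded_mean = decoder_mean(d_decoded)

def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss

vae = Model(x, x_decoded_mean)  # x is still an Input tensor, so the graph connects
vae.compile(optimizer='rmsprop', loss=vae_loss)

With x left bound to the Input tensor, Model(x, x_decoded_mean) can trace the graph from the output back to an actual Input layer, which is exactly what the "Graph disconnected" error was complaining about.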


1 Comment

Thanks, I read the code a million times and couldn't see it! x was replaced. The 'rely' was a typo from copying it over here.
