
I want to implement my own autoencoder in TensorFlow by modifying the code from this example: enter link description here. I want to write the code as a class. The class I implemented is:

import tensorflow as tf

class AutoEncoder:

    def __init__(self,input,hidden,learning_rate=0.01,training_epochs=50,
                 batch_size = 100, display_step = 10):
        print('hello,world\n')
        self.X = input
        self.hidden = hidden
        self.weights = []
        self.biases = []
        self.inputfeature = input.shape[1]
        self.learning_rate = learning_rate
        self.training_epochs = training_epochs
        self.batch_size = batch_size
        self.display_step = display_step
    def initialPara(self):
        weights = {
            'encoder_h1': tf.Variable(tf.random_normal([self.inputfeature,self.hidden])),
            'decoder_h1': tf.Variable(tf.random_normal([self.hidden,self.inputfeature]))
        }
        biases = {
            'encoder_b1': tf.Variable(tf.random_normal([self.hidden])),
            'decoder_b1': tf.Variable(tf.random_normal([self.inputfeature]))
        }
        self.weights = weights
        self.biases = biases
    def encoder(self,X):
        layer = tf.nn.sigmoid(
            tf.add(
                tf.matmul(X, self.weights['encoder_h1']),self.biases['encoder_b1']
            )
        )
        return layer
    def decoder(self,X):
        layer = tf.nn.sigmoid(
            tf.add(
                tf.matmul(X, self.weights['decoder_h1']),self.biases['decoder_b1']
            )
        )
        return layer

    def train(self):

        X = self.X
        batch_size = self.batch_size

        self.initialPara()

        encoder_op = self.encoder(X)
        decoder_op = self.decoder(encoder_op)

        y_pred = decoder_op
        y_true = X

        # define loss and optimizer, minimize the squared error
        cost = tf.reduce_mean(
            tf.pow(y_true-y_pred,2)
        )
        optimizer = tf.train.RMSPropOptimizer(self.learning_rate).minimize(cost)

        init = tf.initialize_all_variables()

        # launch the graph
        with tf.Session() as sess:
            sess.run(init)
            total_batch = int( X.shape[0]/batch_size )
            # training cycle
            for epoch in range(self.training_epochs):
                # loop over all batches
                for i in range(total_batch):
                    batch_xs = X[i*batch_size:(i+1)*batch_size]
                    _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
                #display logs per epoch step
                if epoch % self.display_step == 0:
                    print("Epoch:", '%04d'%(epoch+1),
                          "cost=","{:.9f}".foramt(c))

            print("optimization finished!!")

        self.encoderOp = encoder_op
        self.decoderOp = decoder_op

And the class is called by a main function:

from AutoEncoder import *

import tensorflow as tf
import tflearn.datasets.mnist as mnist

from tensorflow.examples.tutorials.mnist import input_data

X,Y,testX,testY = mnist.load_data(one_hot=True)

autoencoder1 = AutoEncoder(X,10,learning_rate=0.01)

autoencoder1.train()

And this error occurs:

Traceback (most recent call last):
  File "/home/zhq/Desktop/AutoEncoder/main.py", line 13, in <module>
    autoencoder1.train()
  File "/home/zhq/Desktop/AutoEncoder/AutoEncoder.py", line 74, in train
    _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
TypeError: unhashable type: 'numpy.ndarray'

What's wrong with my code? Thank you in advance!

ZhQ

1 Answer


The problem is that you need to use a placeholder if you want to feed data during a session. For instance:

self.X = tf.placeholder(tf.float32, [None, input_dim])

Placeholders are parts of the graph whose values are supplied through the feed dictionary during a session.

You could read more about them here.
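A minimal sketch of how train() could look with a placeholder, keeping your attribute names and assuming the TF 1.x API used in the rest of your code (under TensorFlow 2 you would need the tf.compat.v1 equivalents):

    def train(self):
        data = self.X                  # the NumPy training array
        batch_size = self.batch_size

        self.initialPara()

        # Placeholder: a graph node whose value is supplied via feed_dict.
        # The first dimension is None so any batch size can be fed.
        X = tf.placeholder(tf.float32, [None, self.inputfeature])

        encoder_op = self.encoder(X)
        decoder_op = self.decoder(encoder_op)

        # Minimize the mean squared reconstruction error
        cost = tf.reduce_mean(tf.pow(X - decoder_op, 2))
        optimizer = tf.train.RMSPropOptimizer(self.learning_rate).minimize(cost)

        init = tf.global_variables_initializer()

        with tf.Session() as sess:
            sess.run(init)
            total_batch = int(data.shape[0] / batch_size)
            for epoch in range(self.training_epochs):
                for i in range(total_batch):
                    batch_xs = data[i*batch_size:(i+1)*batch_size]
                    # Feed the NumPy batch into the placeholder
                    _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
                if epoch % self.display_step == 0:
                    print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c))
            print("Optimization finished!")

The key change is that X is now a tf.placeholder (a graph node), while the NumPy array stays in the local variable data. In your original code, X was the NumPy array itself, and feed_dict={X: batch_xs} tried to use that array as a dictionary key, which is exactly what the "unhashable type: 'numpy.ndarray'" error is complaining about.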


3 Comments

And I have one more question: how can I stack 2 autoencoders for regression prediction? For example, encoder1 = autoencoder1.encoderOp; encoder2 = autoencoder2.encoderOp. How do I use the two autoencoders after that? Could you please give me a demo?
Could you describe the 2-encoder architecture more precisely?
In my code (the AutoEncoder class), I train one autoencoder layer. After training is finished, I can get the trained layer; each layer is layer = tf.nn.sigmoid( ... ). For example, I have two layers: layer1 and layer2. But how can I stack the two layers to form a deep network? As far as I know, I could stack them with deeplayer = tf.nn.sigmoid( tf.add( tf.matmul( layer1, layer2.weight ), layer2.bias ) ), but I don't know how to get the weight and bias of a layer. Thanks!
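A minimal sketch of the stacking idea, assuming ae1 and ae2 are two trained AutoEncoder instances that keep their parameters in self.weights and self.biases as in the class above, and that ae2 was trained on the hidden codes produced by ae1 (the names ae1, ae2, W_out, b_out are illustrative, not part of the original class):

    import tensorflow as tf

    # X: placeholder for the raw input, matching ae1's input dimension
    X = tf.placeholder(tf.float32, [None, ae1.inputfeature])

    # First encoder layer: reuse ae1's trained encoder parameters
    layer1 = tf.nn.sigmoid(
        tf.add(tf.matmul(X, ae1.weights['encoder_h1']), ae1.biases['encoder_b1']))

    # Second encoder layer stacked on top: reuse ae2's parameters
    # (requires ae2.inputfeature == ae1.hidden)
    layer2 = tf.nn.sigmoid(
        tf.add(tf.matmul(layer1, ae2.weights['encoder_h1']), ae2.biases['encoder_b1']))

    # A linear output layer on top of the stack for regression
    W_out = tf.Variable(tf.random_normal([ae2.hidden, 1]))
    b_out = tf.Variable(tf.random_normal([1]))
    y_pred = tf.add(tf.matmul(layer2, W_out), b_out)

One caveat: in TF 1.x a variable's trained value lives in the session, so you would need to either keep a single session open across pretraining and stacking, or save the trained values (for example with tf.train.Saver, or by fetching them with sess.run(variable) before the session closes) and restore them in the new graph.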
