
I want to build an autoencoder with LSTM layers, but I get an error at the first step of the encoder. Could you please help me with that? Here is the model I tried to build:

import numpy as np
import torch
import torch.nn as nn

r_input    = torch.nn.LSTM(1, 1, 28)
activation = nn.functional.relu
mu_r       = nn.Linear(22, 6)
log_var_r  = nn.Linear(22, 6)

y = torch.from_numpy(np.random.rand(1, 1, 28)).float()
def encode_r(y):
    y         = torch.reshape(y, (-1, 1, 28))  # torch.Size([batch_size, 1, 28])
    hidden    = torch.flatten(activation(r_input(y)), start_dim=1)
    z_mu      = mu_r(hidden)
    z_log_var = log_var_r(hidden)
    return z_mu, z_log_var

But I got this error in my code:

RuntimeError: input.size(-1) must be equal to input_size. Expected 1, got 28. 

1 Answer

You're not creating the layer correctly. torch.nn.LSTM takes input_size as its first argument, but you passed 1 while your input's last dimension is 28 (the third positional argument is num_layers, so LSTM(1, 1, 28) actually builds a 28-layer LSTM over 1-dimensional inputs). Since your mu_r and log_var_r layers expect 22 features, you presumably want a hidden size of 22. Finally, your batch is the first dimension of the input, so you also need to pass batch_first=True.

r_input = torch.nn.LSTM(28, 22, batch_first=True)

This should work for your specific setup. Also note that LSTM returns two items: the output sequence and a tuple of the final hidden and cell states. The first one is the one you want here.

hidden = torch.flatten(activation(r_input(y)[0]), start_dim=1)
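
Equivalently, you can unpack the return value explicitly; the two items are the output sequence and a tuple holding the final hidden and cell states (a small sketch reusing the question's names):

output, (h_n, c_n) = r_input(y)   # output: (batch, seq_len, hidden_size)
hidden = torch.flatten(activation(output), start_dim=1)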

See the official documentation for torch.nn.LSTM for more information.
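
Putting both fixes together, here is a minimal runnable sketch of the corrected encoder (keeping the dimensions from the question: 28 input features, hidden size 22, latent size 6; the toy input y is only an illustration):

import numpy as np
import torch
import torch.nn as nn

r_input    = nn.LSTM(28, 22, batch_first=True)   # input_size=28, hidden_size=22
activation = nn.functional.relu
mu_r       = nn.Linear(22, 6)
log_var_r  = nn.Linear(22, 6)

def encode_r(y):
    y = torch.reshape(y, (-1, 1, 28))            # (batch, seq_len=1, 28)
    output, _ = r_input(y)                       # output: (batch, 1, 22)
    hidden = torch.flatten(activation(output), start_dim=1)  # (batch, 22)
    z_mu      = mu_r(hidden)                     # (batch, 6)
    z_log_var = log_var_r(hidden)                # (batch, 6)
    return z_mu, z_log_var

y = torch.from_numpy(np.random.rand(1, 1, 28)).float()
z_mu, z_log_var = encode_r(y)
print(z_mu.shape, z_log_var.shape)               # torch.Size([1, 6]) torch.Size([1, 6])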


2 Comments

Thanks! Your answer helped me a lot. I just moved from TensorFlow to PyTorch, and it seems that all the details are different.
Yes, a lot of stuff changes here and there, but once you get used to it you'll be fine!
