
A similar question was asked before here: How to Merge Numerical and Embedding Sequential Models to treat categories in RNN, but I did not find a clear answer there. I will reuse the materials from that question's author (@GRS) to explain my problem.

I'm trying to solve the problem of classifying sequences of categorical features. For this task, the vector of the last hidden state of an LSTM layer is very often used. For each categorical variable in the sequence I define a learnable nn.Embedding layer.
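To make the setup concrete, here is a minimal sketch (PyTorch, with made-up sizes and names) of the variant where all per-feature embeddings are concatenated per time step and fed to a single LSTM, whose last hidden state goes to a classifier; this corresponds to the second option asked about below:

```python
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    # Hypothetical module for illustration: one nn.Embedding per categorical
    # feature, concatenated embeddings -> single LSTM -> last hidden state.
    def __init__(self, cardinalities, emb_dim=8, hidden_dim=32, n_classes=2):
        super().__init__()
        # one learnable embedding table per categorical feature
        self.embeddings = nn.ModuleList(
            [nn.Embedding(card, emb_dim) for card in cardinalities]
        )
        self.lstm = nn.LSTM(emb_dim * len(cardinalities), hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        # x: (batch, seq_len, n_features) of integer category ids
        embs = [emb(x[..., i]) for i, emb in enumerate(self.embeddings)]
        z = torch.cat(embs, dim=-1)        # (batch, seq_len, emb_dim * n_features)
        _, (h_n, _) = self.lstm(z)         # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])          # logits from the last hidden state

model = SeqClassifier(cardinalities=[10, 5, 7])
x = torch.randint(0, 5, (4, 20, 3))        # batch=4, seq_len=20, 3 categorical features
logits = model(x)                          # (4, 2)
```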

[Diagram: one nn.Embedding layer per categorical feature, each feeding its own LSTM layer]

The question is: which approach is better and more correct? Should I create a separate LSTM layer for each categorical feature's embedding layer (as in the picture), or combine all the embeddings and pass them into a single LSTM layer? I have never seen the first approach used, but I don't understand why it is not popular.
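For comparison, here is a sketch of the first approach (again PyTorch, hypothetical sizes): a separate LSTM per categorical embedding, with the per-feature last hidden states concatenated before the classifier.

```python
import torch
import torch.nn as nn

class PerFeatureLSTMClassifier(nn.Module):
    # Hypothetical module for illustration: one nn.Embedding and one LSTM per
    # categorical feature; concatenate the last hidden states of all LSTMs.
    def __init__(self, cardinalities, emb_dim=8, hidden_dim=16, n_classes=2):
        super().__init__()
        self.embeddings = nn.ModuleList(
            [nn.Embedding(card, emb_dim) for card in cardinalities]
        )
        self.lstms = nn.ModuleList(
            [nn.LSTM(emb_dim, hidden_dim, batch_first=True) for _ in cardinalities]
        )
        self.head = nn.Linear(hidden_dim * len(cardinalities), n_classes)

    def forward(self, x):
        # x: (batch, seq_len, n_features) of integer category ids
        last_states = []
        for i, (emb, lstm) in enumerate(zip(self.embeddings, self.lstms)):
            _, (h_n, _) = lstm(emb(x[..., i]))   # h_n: (1, batch, hidden_dim)
            last_states.append(h_n[-1])
        return self.head(torch.cat(last_states, dim=-1))

model = PerFeatureLSTMClassifier(cardinalities=[10, 5, 7])
logits = model(torch.randint(0, 5, (4, 20, 3)))  # (4, n_classes)
```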

Thank you very much in advance
