I want to concatenate arrays of different lengths to feed them to my neural network, whose first layer will be nn.AdaptiveAvgPool1d. My dataset is composed of several signals (1D arrays), each with a different length. For example:
import numpy as np

array1 = np.random.randn(1200, 1)
array2 = np.random.randn(950, 1)
array3 = np.random.randn(1000, 1)
I want to stack these three signals into a single 2D tensor. However, if I try
tensor = torch.Tensor([array1, array2, array3])
it fails with this error:
ValueError: expected sequence of length 1200 at dim 2 (got 950)
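For completeness, here is a minimal runnable reproduction of the problem (the lengths match my real data; the signal contents are random placeholders):

```python
import numpy as np
import torch

# Three 1D signals of different lengths, as in my dataset
array1 = np.random.randn(1200, 1)
array2 = np.random.randn(950, 1)
array3 = np.random.randn(1000, 1)

# Trying to build one tensor from signals of unequal length fails
err = None
try:
    tensor = torch.Tensor([array1, array2, array3])
except ValueError as e:
    err = e
print(err)
```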
Is there a way to obtain such a tensor?
EDIT More information about the dataset:
- Each signal window represents a heart beat in the ECG recording, taken from several patients, sampled at 1000 Hz
- The beats can have different lengths, because the duration of a beat depends on the patient's heart rate
- For each beat I need to predict the length of the QRS interval (the target of the network), which I already have as a label, expressed in milliseconds
- I have already thought of interpolating the shorter samples to the length of the longest one, but then I would also have to rescale the QRS interval labels accordingly, is that right?
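For reference, the interpolation idea I mentioned would look roughly like this (a sketch using torch.nn.functional.interpolate; the target length of 1200 is just the longest beat in the example above):

```python
import numpy as np
import torch
import torch.nn.functional as F

target_len = 1200  # length of the longest beat (illustrative choice)

# One 950-sample beat, shaped (length, 1) like my arrays
beat = torch.from_numpy(np.random.randn(950, 1)).float()

# F.interpolate expects (batch, channels, length)
x = beat.permute(1, 0).unsqueeze(0)  # (1, 1, 950)
resampled = F.interpolate(x, size=target_len, mode='linear', align_corners=False)
print(resampled.shape)  # torch.Size([1, 1, 1200])

# Note: this stretches the time axis by target_len / 950,
# which is why I suspect the QRS labels would need rescaling too.
```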
I have read about the AdaptiveAvgPool1d layer, which would allow me to feed the network samples of different sizes. But my problem is: how do I feed the network a dataset in which each sample has a different length? How do I group the samples without padding them with NaNs or zeros? I hope I explained myself.
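To illustrate why I'm looking at this layer: fed one sample at a time, it maps any input length to a fixed output size (the output size of 64 below is an arbitrary choice for the example), so my only remaining problem is the batching:

```python
import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool1d(64)  # fixed output length, chosen arbitrarily here

# Inputs of different lengths all come out with the same shape
for length in (1200, 950, 1000):
    x = torch.randn(1, 1, length)  # (batch=1, channels=1, signal length)
    out = pool(x)
    print(out.shape)  # torch.Size([1, 1, 64]) regardless of input length
```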