Using Python 3, I am trying to process a set of data stored in a four-column text file: the first column is the x index, the second is the y index, the third is the z (depth) index, and the fourth is the data value. The values in the text file look like this:
0 0 0 0.0
1 0 0 0.0
2 0 0 2.0
0 1 0 0.0
1 1 0 0.0
2 1 0 2.0
0 2 0 0.0
1 2 0 0.0
2 2 0 2.0
0 0 1 0.0
1 0 1 0.0
2 0 1 2.0
0 1 1 0.0
1 1 1 0.0
2 1 1 2.0
0 2 1 0.0
1 2 1 0.0
2 2 1 2.0
Is there a way to construct a 3D numpy array with shape (2,3,3) from these values, so that it looks like this?
[[[0 0 0]
  [0 0 0]
  [2 2 2]],

 [[0 0 0]
  [0 0 0]
  [2 2 2]]]
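To make the target concrete, the kind of index-driven construction I can picture is something like the following (a rough sketch, assuming the columns really are x, y, z, value and that every grid point appears exactly once):

import numpy as np

data = np.loadtxt("file_path.txt")
x = data[:, 0].astype(int)   # first column: x index
y = data[:, 1].astype(int)   # second column: y index
z = data[:, 2].astype(int)   # third column: z (depth) index
vals = data[:, 3]            # fourth column: the data value

# allocate the target grid and scatter each value to the position named by its indices
arr = np.zeros((z.max() + 1, x.max() + 1, y.max() + 1))
arr[z, x, y] = vals

On the sample above this produces the (2,3,3) array I wrote out, but the axis order (z, x, y) is just my guess at what matches it, and I don't know whether this approach scales sensibly to my real data.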
While this example has only 18 rows to be shaped into a (2,3,3) array, my actual data set has 512x512x49 (12845056) rows, which I'd like to shape into a (512,512,49) array. If the solution can parse that many rows efficiently, so much the better, but I understand Python has some fundamental speed limitations.
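If parsing speed ends up mattering, I gather pandas.read_csv is generally faster than np.loadtxt on whitespace-delimited text of this size, so I imagine the loading step could be swapped for something like this (a sketch, not benchmarked):

import pandas as pd

# read the whitespace-delimited file with no header row, then hand numpy the raw values
data = pd.read_csv("file_path.txt", sep=r"\s+", header=None).to_numpy()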
This is what I have tried so far:
import numpy as np
f = "file_path.txt"
data = np.loadtxt(f)
data = data.reshape((512,512,49))
but this gives the following error:
ValueError: cannot reshape array of size 51380224 into shape (512,512,49)
I was surprised by this error, since 51380224 is not equal to the number of rows in my loaded array (12845056). I also suspect numpy needs to be told that the first, second, and third columns are not values but indices along which to place the values in the fourth column. I am not sure how to do that, and I am open to solutions in either numpy or pandas.
In response to comments asking for data.shape and data.dtype, and suggesting a reshape to (512,512,49,4): data.shape returns (12845056, 4) and data.dtype returns dtype('float64'). The 4x element count may be due to numpy treating all four columns of my data set as values rather than as three indices plus one value. data.reshape((512,512,49,4)) does return an array with shape (512,512,49,4), but I still don't think the data are in the proper structure; I looked at the first "depth" slice of that reshaped array using data[:,:,0,0], and it does not look correct.
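The other approach I have been wondering about is to drop the index columns entirely and reshape only the value column, relying on the row order (x varies fastest, then y, then z, as in the sample). Roughly, and with the final axis order being my assumption:

import numpy as np

data = np.loadtxt("file_path.txt")
vals = data[:, 3]                     # keep only the value column
arr = vals.reshape((49, 512, 512))    # row order in the file: z slowest, then y, then x
arr = arr.transpose(2, 1, 0)          # reorder to shape (512,512,49), indexed [x, y, z]

I am not confident the transpose gives the orientation I actually want, which is part of what I'm asking.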