I have a 1D array of size 18874568 created from bytes as shown below
np_img = np.array(list(data), dtype=np.uint16)
How can I convert this to a 2D array of size 3072 x 3072, in 8-bit?
The byte data is acquired from an image capture device called a flat panel detector (FPD). The docs specify that the FPD captures a 16-bit image of size 3072x3072.
The 16-bit raw image is attached here: https://drive.google.com/file/d/1Kw1UeKOaBGtXNpxGsCXEk-gjGw3YaDJJ/view?usp=sharing
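Note on the size: np.array(list(data), dtype=np.uint16) produces one array element per byte, not one per 16-bit pixel, which is why the array length equals the byte count. A tiny example with made-up bytes shows the difference between that and reinterpreting the buffer as 16-bit values:

import numpy as np

data = b"\x34\x12\x78\x56"                      # made-up example: two little-endian 16-bit values
print(np.array(list(data), dtype=np.uint16))    # [ 52  18 120  86]  -> one element per byte
print(np.frombuffer(data, dtype="<u2"))         # [ 4660 22136]      -> one element per 16-bit value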
Edit: C# conversion code provided by the FPD support team:
// requires: using System.Drawing; using System.Drawing.Imaging; using System.IO; using System.Linq;
private Bitmap Load16bppGrayImage(string fileName, int offset = 0)
{
    Bitmap loadind_bmp;
    var raw = File.ReadAllBytes(fileName).Skip(offset).ToArray();
    var pwi = 3072;
    var phi = 3072;
    var dpiX = 96d;
    var dpiY = 96d;
    loadind_bmp = new Bitmap(pwi, phi, PixelFormat.Format48bppRgb);
    BitmapData bmpData = loadind_bmp.LockBits(new Rectangle(0, 0, loadind_bmp.Width, loadind_bmp.Height), ImageLockMode.ReadWrite, loadind_bmp.PixelFormat);
    int depth = Bitmap.GetPixelFormatSize(bmpData.PixelFormat) / 16; // 48 bpp -> 3 ushort channels per pixel
    IntPtr srcAdr = bmpData.Scan0;
    unsafe
    {
        for (int y = 0; y < bmpData.Height; y++)
        {
            // byte offset of row y in the raw file (Stride / 3 = 2 bytes per source pixel)
            int pos = y * bmpData.Stride / 3;
            // pointer to the start of row y in the destination bitmap
            ushort* imgPtr = (y * bmpData.Stride / 2) + (ushort*)bmpData.Scan0;
            for (int x = 0; x < bmpData.Width; x++)
            {
                // assemble the 16-bit value little-endian: low byte first, high byte second
                ushort value = (ushort)(raw[pos + (x * sizeof(ushort)) + 1] << 8 | raw[pos + x * sizeof(ushort)]);
                imgPtr[x * depth] = value;     // blue channel (16-bit)
                imgPtr[x * depth + 1] = value; // green channel (16-bit)
                imgPtr[x * depth + 2] = value; // red channel (16-bit)
            }
        }
    }
    loadind_bmp.UnlockBits(bmpData);
    return loadind_bmp;
}
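For reference, the C# loop above assembles each pixel little-endian (low byte first) and allows skipping a byte offset at the start of the file. A rough NumPy equivalent of that read might look like the sketch below (the function name and file name are placeholders, not from the FPD docs):

import numpy as np

def load_16bpp_gray(file_name, offset=0, width=3072, height=3072):
    # read width*height little-endian 16-bit pixels, skipping `offset` bytes, like the C# loader
    with open(file_name, "rb") as f:
        raw = f.read()
    pixels = np.frombuffer(raw, dtype="<u2", offset=offset, count=width * height)
    return pixels.reshape(height, width)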
sqrt(18874568/2) is > 3072, so there must be some data other than just raw image data in there (18874568 - 3072*3072*2 = 200 bytes, to be exact).
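Assuming those 200 extra bytes are a header at the start of the buffer (an assumption worth confirming with the FPD docs; they could just as well be a trailer), the reshape and the 8-bit conversion could look like this in NumPy, with data being the bytes object from the detector:

import numpy as np

HEADER_BYTES = 200                                    # assumed leading header; verify against the FPD docs
img16 = np.frombuffer(data, dtype="<u2", offset=HEADER_BYTES).reshape(3072, 3072)

img8 = (img16 >> 8).astype(np.uint8)                  # simple conversion: keep only the high byte
# or stretch the actual value range to use the full 0-255 scale
lo, hi = int(img16.min()), int(img16.max())
img8_stretched = ((img16 - lo) * (255.0 / max(hi - lo, 1))).astype(np.uint8)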