Blender returns texture images as a flat array of pixel values (RGBA, with each channel stored as a single value, so the array has size width * height * 4).
How can I transform that into a NumPy array and then load it into an OpenCV image?
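For concreteness, this is how I understand the layout: channel c of pixel (x, y) sits at index (y * width + x) * 4 + c, with y = 0 being the bottom row (row order bottom-to-top is my assumption about Blender's convention). A hypothetical helper to illustrate:

def pixel_channel(img, x, y, c):
    # channel c of pixel (x, y); y = 0 is the bottom row (assumed)
    width = img.size[0]
    return img.pixels[(y * width + x) * 4 + c]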
I am currently trying this:
import bpy
import cv2
import numpy as np

i = 1
for img in bpy.data.images:
    print(img)
    print(img.name, img.users, img.file_format)
    print('load start')
    # img.pixels is a flat sequence of floats, four per pixel (RGBA)
    img_arr = np.array(img.pixels)
    print(img_arr.shape)
    # reshape to (height, width, channels)
    img_arr = img_arr.reshape([img.size[1], img.size[0], 4])
    print(img_arr.shape)
    print('load end')
    cv2.imwrite('out_cv2_' + str(i) + '.png', img_arr)
    i = i + 1
But I get blank images of the right size.
This is similar to this question but for OpenCV in Python.
I am aware that I could save the images to file like this:
img.filepath = 'out' + str(i) + '.png'
img.file_format = 'PNG'
img.save()
but what I'm after is an intermediate step for manipulating the images in OpenCV, and I'd like to do that in memory.
I've also seen this answer but unfortunately it crashes Blender.
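For what it's worth, here is a rough sketch of the kind of in-memory conversion I imagine is needed (my assumptions: the floats are in [0, 1] and need scaling to 8-bit, OpenCV wants BGR(A) channel order, and the rows need a vertical flip; blender_to_cv2 is just a name I made up):

import cv2
import numpy as np

def blender_to_cv2(img):
    # flat RGBA floats -> (height, width, 4) array
    arr = np.array(img.pixels).reshape(img.size[1], img.size[0], 4)
    arr = np.flipud(arr)                # Blender rows run bottom-to-top (assumed), OpenCV top-to-bottom
    arr = (arr * 255).astype(np.uint8)  # scale [0, 1] floats to 8-bit
    return cv2.cvtColor(arr, cv2.COLOR_RGBA2BGRA)  # swap RGBA -> BGRA for OpenCV

Is something along these lines the right approach, or am I missing a step?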