
I'm trying to do a simple rendering: load data from TIFF images into numpy arrays and build a 3D texture from them. The TIFF images are intensity images, and I was hoping to remap each intensity image as a luminance or alpha-blended color image.

While I think I understand the GL_TEXTURE_3D process, the code runs with no errors yet I still get no visual result. I am surely doing something simple very wrong. Could anyone help me understand my errors?

This is a short version of the code [corrected]:

Edit 2, trying to cast types correctly: as suggested by Reto Koradi, I might be facing a casting issue. So I convert the numpy array to uint8, making sure the values are not all zeros, and copy it into the channel_RGBA_buffer uint8 array. It still produces a black window.
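As a sanity check on the cast itself (pure numpy, with a random array standing in for the TIFF data), this is the behavior I am worried about: casting floats in [0, 1) straight to uint8 truncates everything to 0, while scaling to the byte range first preserves the intensities.

```python
import numpy

# Random floats in [0, 1) standing in for the TIFF intensity volume.
tiff_mean = numpy.random.rand(3, 300, 300)

# Casting [0, 1) floats directly to uint8 truncates them all to 0.
direct = tiff_mean.astype(numpy.uint8)

# Scaling to the byte range first keeps the intensity information.
scaled = (tiff_mean * 255.0).astype(numpy.uint8)

print(direct.max())   # 0 -- all information lost
print(scaled.max())   # typically 254
```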

Edit 3, using the Glumpy approach: reading the "textures" function of Glumpy ( https://code.google.com/p/glumpy/source/browse/glumpy/image/texture.py?r=8cbc2aa2b277c07b59fba964edd327370a8e8091 ), I changed the internal format from GL_RGBA to GL_RGBA16 and added glBindTexture(GL_TEXTURE_3D, 0) after calling glTexImage3D. I am still unsure why it binds 0 and not the texture.

Edit 4: fixed the indentation of two lines so the example is executable.

The result is still a black window, but for an instant as the window closes I can see the data appear. It seems like I'm drawing an extra layer on top.

import sys

import numpy
from OpenGL.GL import *
from OpenGL.GL.shaders import *
from OpenGL.arrays import vbo
from OpenGL.GLU import *
from OpenGL.GLUT import *
from OpenGL.GLUT.freeglut import *




class test_TIFF_render:

    def __init__(self):
        self.something = True

    def load_texture(self):

        global texture


        #read the file as an array; for simplicity I will just make a random 3D numpy array
        tiff_mean = numpy.random.rand(3,300,300)
        tiff_mean = tiff_mean*1000
        tiff_mean = tiff_mean.astype('uint8')

        shape_array = tiff_mean.shape
        #we have the data now, let's set the texture
        tiff_mean = tiff_mean.reshape(tiff_mean.shape[1]*tiff_mean.shape[2]*tiff_mean.shape[0])
        texture = glGenTextures(1)
        glBindTexture(GL_TEXTURE_3D,texture)
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE)
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER)
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER)
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R,GL_CLAMP_TO_BORDER)
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)

        #now we need to convert the intensity data organized in 3D matrix into an RGBA buffer array (pixel1R, pixel1G, pixel1B, pixel1A, pixel2R....)
        channel_RGBA_buffer = numpy.zeros(tiff_mean.shape[0]*4, numpy.uint8)

        for i in range(0,tiff_mean.shape[0]):
            channel_RGBA_buffer[i*4] = tiff_mean[i]     #R
            channel_RGBA_buffer[i*4+1] = tiff_mean[i]   #G
            channel_RGBA_buffer[i*4+2] = tiff_mean[i]   #B
            channel_RGBA_buffer[i*4+3] = tiff_mean[i]   #A
            if numpy.mod(i,100000)==0:
                print('count %d' % i)

        
        glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA16, shape_array[1], shape_array[2], shape_array[0], 0, GL_RGBA, GL_UNSIGNED_BYTE, channel_RGBA_buffer)
        glBindTexture( GL_TEXTURE_3D, 0 )

    def display(self):

        glClear( GL_COLOR_BUFFER_BIT  | GL_DEPTH_BUFFER_BIT )

        glEnable( GL_ALPHA_TEST )
        glAlphaFunc( GL_GREATER, 0.03)

        glEnable(GL_BLEND)
        glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA )

        glMatrixMode( GL_TEXTURE )
        glLoadIdentity()

        

        glEnable(GL_TEXTURE_3D)
        glBindTexture( GL_TEXTURE_3D, texture)
        dOrthoSize = 1
        for Indx in self.my_range(-1,1,0.01):
            TexIndex = round(Indx,2)
            glBegin(GL_QUADS)
            glTexCoord3f(0.0, 0.0, (TexIndex+1.0)/2.0)
            glVertex3f(-dOrthoSize,-dOrthoSize,TexIndex)
            glTexCoord3f(1.0, 0.0, (TexIndex+1.0)/2.0)
            glVertex3f(dOrthoSize,-dOrthoSize,TexIndex)
            glTexCoord3f(1.0, 1.0, (TexIndex+1.0)/2.0)
            glVertex3f(dOrthoSize,dOrthoSize,TexIndex)
            glTexCoord3f(0.0, 1.0, (TexIndex+1.0)/2.0)
            glVertex3f(-dOrthoSize,dOrthoSize,TexIndex)
            glEnd()

        glBindTexture( GL_TEXTURE_3D, texture )

    def my_range(self,start, end, step):
        while start <= end:
            yield start
            start += step

glutInit(sys.argv)
glutInitDisplayMode(GLUT_RGBA)

glutInitWindowSize(300,300)
window = glutCreateWindow(b"I really hope this works")


test111 = test_TIFF_render()
test111.load_texture()

glutDisplayFunc(test111.display)
glutMainLoop()

I am running Windows 7 64-bit with Python 3.3.

  • Any reason in particular you are using GL_CLAMP_TO_BORDER yet have not defined a border color anywhere? Did you mean to use GL_CLAMP_TO_EDGE by chance? Commented Jun 10, 2014 at 2:00
  • I was naively following/porting this example to python ( codeproject.com/Articles/352270/… ) Commented Jun 10, 2014 at 18:04
  • Ah, okay. That is actually perfectly valid then, and you can ignore my comment :) The default border color is black, so this will produce black texels for any coordinates sampled outside the range [0,1]. Assuming empty space is supposed to be black, that behavior is acceptable. GL_CLAMP_TO_EDGE would do something entirely different: it would sort of take the edge color and stretch it out to infinity. There is a nice visual summary of the wrap modes here (border color = red in this case). Commented Jun 10, 2014 at 18:11
  • that summary is more useful than a book chapter. Commented Jun 10, 2014 at 18:29
  • however, I am still stuck with no rendering. Commented Jun 11, 2014 at 2:29

1 Answer


I haven't used numpy, but the way I read the documentation (http://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html), numpy.zeros() produces an array of float64 values by default. You then pass GL_INT as the type argument for glTexImage3D(). It looks like we have a type mismatch here.

The typical data type to use for texture data is unsigned bytes. So I think it should look something like this:

channel_RGBA_buffer = numpy.zeros(tiff_mean.shape[0]*4, numpy.uint8)
...
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, shape_array[1], shape_array[2], shape_array[0], 0,
             GL_RGBA, GL_UNSIGNED_BYTE, channel_RGBA_buffer)

Edit, updating for the new version of the code: it looks like you now generate random float values between 0.0 and 1.0, cast them to byte (uint8) values, and then use that as the texture data. I believe this will result in all your byte values being 0, or maybe partly 1 depending on how the rounding works. But the valid byte range is 0-255, so you probably want to multiply by 255.0 when converting your random floats to the texture data:

channel_RGBA_buffer[i*4] = 255.0 * tiff_mean[i]     #R
channel_RGBA_buffer[i*4+1] = 255.0 * tiff_mean[i]   #G
channel_RGBA_buffer[i*4+2] = 255.0 * tiff_mean[i]   #B
channel_RGBA_buffer[i*4+3] = 255.0 * tiff_mean[i]   #A
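As an aside, if you stay in numpy the per-pixel loop can be replaced entirely by a vectorized conversion. This is a sketch (using a random array in place of your TIFF data) that produces the same interleaved R, G, B, A, R, G, B, A... layout:

```python
import numpy

# Hypothetical intensity volume in [0, 1), standing in for the TIFF data.
tiff_mean = numpy.random.rand(3, 300, 300)

# Scale to the byte range, flatten, then replicate each intensity value
# four times so it fills the R, G, B and A channels of its pixel.
intensity = (tiff_mean * 255.0).astype(numpy.uint8).ravel()
channel_RGBA_buffer = numpy.repeat(intensity, 4)

print(channel_RGBA_buffer.shape)  # (1080000,) i.e. 3 * 300 * 300 * 4
```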

3 Comments

While I agree that is a major flaw in my code, there must be something else, because this does not solve the problem. Does it have to be uint8? I thought I could use int16. I corrected the code in the main post, but still no result.
You can have textures with 16-bit components. But you have to be careful that the format of the data you pass to glTexImage3D() matches the format parameter.
I updated the answer based on your latest code. Frankly, I'm only a very casual Python user, so I'm not always sure how it handles data types and conversions.
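For example, preparing 16-bit texture data so it matches a GL_UNSIGNED_SHORT upload might look like the sketch below (the glTexImage3D call itself needs a live GL context, so it is shown only as a comment; the array names are illustrative):

```python
import numpy

# Hypothetical float volume in [0, 1).
tiff_mean = numpy.random.rand(3, 300, 300)

# Scale to the full 16-bit range and interleave into RGBA, keeping the
# numpy dtype (uint16) consistent with the GL type parameter below.
intensity16 = (tiff_mean * 65535.0).astype(numpy.uint16).ravel()
buffer16 = numpy.repeat(intensity16, 4)

# The matching upload call would then be (requires a current GL context):
# glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA16, 300, 300, 3, 0,
#              GL_RGBA, GL_UNSIGNED_SHORT, buffer16)

print(buffer16.dtype)  # uint16
```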
