
I'm trying to use OpenGL for visibility testing for complex geometries. What I want to do is simple: assign each primitive an integer ID, and then count the number of pixels with that ID. That allows me to calculate the relative visible area of each primitive. (Ultimately, this will be expanded to some minor finite-element calculations on the visible area.)
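
For reference, a minimal sketch of that counting step, assuming the per-pixel IDs have already been read back into application memory (the ids vector and its source are hypothetical):

#include <cstddef>
#include <map>
#include <vector>
#include <GL/gl.h>   // for GLuint; header location varies by platform

// Tally how many pixels carry each primitive ID. The relative visible
// area of primitive i is then counts[i] / double( ids.size() ).
std::map< GLuint, std::size_t > countVisiblePixels( const std::vector< GLuint >& ids )
{
   std::map< GLuint, std::size_t > counts;
   for ( std::size_t i = 0; i < ids.size(); ++i )
      ++counts[ ids[ i ] ];
   return counts;
}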

My problem is this: I'm trying to read the output of a fragment shader into application memory, specifically the primitive ID output. I'm using Qt 4.7.4 and its OpenGL wrapper classes. When I bind and enable a buffer (a "PixelPack" buffer) and attempt to read from the OpenGL buffer into memory, it reports a successful read, but the values stored in the array are not what I expect: they're all 0, even though for testing purposes I've set the ID to 1 for all primitives.

Here's my fragment shader:

#version 130

in vec4 Color;
flat in uint VertId;

out vec4 FragColor;
out uint FragId;

void main()
{
   FragColor = Color;

   // Changed to simpler version for debugging.
   //   FragId = VertId;
   FragId = uint( 1 );
}

And here's my application code, with some irrelevant parts stripped off (test harness hookups, etc.):

#include <QtOpenGL/QGLShader>
#include <QtOpenGL/QGLShaderProgram>
#include <QtOpenGL/QGLBuffer>

using namespace std;

string loadSource( string file );

int
testSelfShadow::
shader( ostream& error )
{
   bool fail = false;

   // Create the OpenGL context.
   int argc = 0;
   char* argv[] = { 0 };   // QApplication expects a char**, even with argc == 0
   QApplication* app = new QApplication( argc, argv );
   QGLWidget* widget = new QGLWidget();
   widget->makeCurrent();

   // Create the shader program.
   QGLShaderProgram* prog = new QGLShaderProgram();
   bool success = false;
   success = prog->addShaderFromSourceCode( QGLShader::Vertex,
                                            loadSource( "vertex.glsl" ).c_str() );     
   if ( ! success )
   {
      ErrorOStream msg;
      msg << "Error trying to load vertex shader into a shader program.\n"
          << prog->log().toStdString();
      throw ERRORLOG( msg.str() );
   }
   success = prog->addShaderFromSourceCode( QGLShader::Fragment,
                                            loadSource( "fragment.glsl" ).c_str() );   
   if ( ! success )
   {
      ErrorOStream msg;
      msg << "Error trying to load fragment shader into a shader program.\n"
          << prog->log().toStdString();
      throw ERRORLOG( msg.str() );
   }
   success = prog->link();

   if ( ! success )
   {
      ErrorOStream msg;
      msg << "Error trying to link shaders into a shader program.\n"
          << prog->log().toStdString();
      throw ERRORLOG( msg.str() );
   }

   prog->bind();

   // Create the buffer for vertex position.
   QGLBuffer* vBuf = new QGLBuffer( QGLBuffer::VertexBuffer );
   vBuf->create();
   vBuf->setUsagePattern( QGLBuffer::DynamicDraw );
   vBuf->bind();

   GLfloat vertices[] = {
      -1.0f, -1.0f, 0.0f, 1.0f,
      -1.0f, 0.0f, 0.0f, 1.0f,
      1.0f, 0.0f, 0.0f, 1.0f,
      1.0f, -1.0f, 0.0f, 1.0f,
      -1.0f, 0.0f, 0.1f, 1.0f,
      -1.0f, 1.0f, 0.1f, 1.0f,
      1.0f, 1.0f, 0.1f, 1.0f,
      1.0f, 0.0f, 0.1f, 1.0f };

   vBuf->allocate( vertices, sizeof( vertices ) );

   prog->setAttributeBuffer( "Vertex", GL_FLOAT, 0, 4, 0 );
   prog->enableAttributeArray( "Vertex" );

   // Create the buffer for Grayscale brightness value.
   // Not important for final program, just for debugging during
   // development.
   QGLBuffer* bBuf = new QGLBuffer( QGLBuffer::VertexBuffer );
   bBuf->create();
   bBuf->bind();

   GLfloat brightness[] = {
      1.0, 0.9, 0.8, 0.7,
      0.5, 0.4, 0.3, 0.2
   };

   bBuf->allocate( brightness, sizeof( brightness ) );

   prog->setAttributeBuffer( "Brightness", GL_FLOAT, 0, 1, 0 );
   prog->enableAttributeArray( "Brightness" );

   // Create the buffer for polygon ID.
   QGLBuffer* idBuf = new QGLBuffer( QGLBuffer::VertexBuffer );
   idBuf->create();
   idBuf->bind();
   GLuint polyId[] = {
      1, 1, 1, 1, 
      2, 2, 2, 2
   };

   idBuf->allocate( polyId, sizeof( polyId ) );
   prog->setAttributeBuffer( "PolyId", GL_UNSIGNED_INT, 0, 1, 0 );
   prog->enableAttributeArray( "PolyId" );

   // Create the index buffer.  Not trying to do any optimization
   // here yet.
   QGLBuffer* iBuf = new QGLBuffer( QGLBuffer::IndexBuffer );
   iBuf->create();
   iBuf->bind();
   GLuint indices[] = {
      0, 1, 2, 3, 4, 5, 6, 7
   };
   iBuf->setUsagePattern( QGLBuffer::StaticDraw );
   iBuf->allocate( indices, sizeof( indices ) );

   // Create the buffer for reading back polygon id per fragment.
   QGLBuffer* fBuf = new QGLBuffer( QGLBuffer::PixelPackBuffer );
   fBuf->create();
   fBuf->setUsagePattern( QGLBuffer::DynamicRead );
   fBuf->bind();
   fBuf->allocate( 640 * 480 * sizeof( GLuint ) );

   prog->setAttributeBuffer( "FragId", GL_UNSIGNED_INT, 0, 1, 0 );
   prog->enableAttributeArray( "FragId" );

   GLuint* fBufData = new GLuint[ 640 * 480 ];   // one GLuint per pixel

   glDrawElements( GL_QUADS, 8, GL_UNSIGNED_INT, 0 );
   widget->show();
   widget->updateGL();

   // Trying this two different ways; neither way works.
   bool readSuccess = fBuf->read( 0, fBufData, 640 * 480 * sizeof( GLuint ) );
   GLuint* fBufMap = 
      static_cast< GLuint* >( fBuf->map( QGLBuffer::ReadOnly ) );

   cout << "\n"
        << "Read Successful: " << readSuccess << endl;
   cout << "Buffer map location and sample data: " 
        << fBufMap << ":" << fBufMap[640] << endl;
   cout << "Read data pointer: " << fBufData << endl;
   cout << "Sample fragment ID: " << fBufData[ 640 * 480 / 2 ] << endl;

   app->exec();

   return fail;
}

Here are sample outputs for a program run:

Read Successful: true
Buffer map location and sample data: 0x5a5d9000:0
Read data pointer: 0x59e48008
Sample fragment ID: 0

That's not what I would expect. I would expect all fragment IDs to be 1, because I explicitly set FragId = uint( 1 ) in the fragment shader. Am I setting up my reads wrong? Am I doing something wrong in my binding of buffers, or enabling the names?

I would prefer to use Qt code if possible, for reasons beyond the scope of this question.

3 Comments
  • You know, there's the occlusion query, which does exactly what you're trying to do manually. After rendering a primitive, it tells you how many of the produced fragments were occluded / drawn to the framebuffer. I think you'd be better off using that instead of manually counting pixels. Commented Feb 2, 2013 at 13:57
  • Interesting! Are you referring to opengl.org/registry/specs/ARB/occlusion_query.txt? That might do in a pinch. The plan was to do some math with a normal vector in the shader, so it'd still be ideal to be able to read outputs from the fragment shader. But if I just had occlusion queries, I suppose I could do the vector math on the CPU, though it'd be slower. Commented Feb 2, 2013 at 17:42
  • Yes, I'm referring to that, but it has become core functionality in later versions of OpenGL; see opengl.org/sdk/docs/man/xhtml/glGenQueries.xml and the links that follow. For an occlusion query to work you still draw to a framebuffer, so you can do the math in the shader; the occlusion query just collects statistics and won't change the rendering outcome. It can save you some significant CPU cycles, though. Commented Feb 2, 2013 at 17:48
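
A minimal sketch of the occlusion-query approach described in these comments, assuming an OpenGL 1.5+ context with the geometry already bound (error handling omitted):

GLuint query;
glGenQueries( 1, &query );

// Count the samples that pass the depth test while drawing one primitive.
glBeginQuery( GL_SAMPLES_PASSED, query );
glDrawElements( GL_QUADS, 4, GL_UNSIGNED_INT, 0 );
glEndQuery( GL_SAMPLES_PASSED );

// This blocks until the GPU has the result; poll GL_QUERY_RESULT_AVAILABLE
// instead if the stall matters.
GLuint visibleSamples = 0;
glGetQueryObjectuiv( query, GL_QUERY_RESULT, &visibleSamples );
glDeleteQueries( 1, &query );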

1 Answer


There's so much Qt stuff in here that it's almost impossible to find actual OpenGL calls. But you seem to have two problems:

  1. You're rendering to the screen. Your screen uses some kind of normalized integer image format, which basically means "float, but stored in 8 bits". You're writing integers from your shader. These don't match, so your rendering yields undefined behavior.

    What you need to do is render to an FBO that contains a GL_R8UI texture. Then your uint fragment shader output type will match your render target. You will probably want a depth buffer too.

  2. You never actually read the pixel data. QGLBuffer::read reads from the buffer object, but you haven't put anything into the buffer object yet: you never told OpenGL to copy the framebuffer data you rendered into the buffer object. You need to do that first; only then can you read from it.

    After you render to your FBO, you need to call glReadPixels with the correct pixel transfer parameters for what you've rendered: GL_RED_INTEGER for the format and GL_UNSIGNED_BYTE for the type. And since you're reading into a pixel buffer, you need to make sure it is bound before the read (see the sketch after this list).
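
Putting the two points together, a raw-OpenGL sketch of the FBO setup and readback. Names like fBufId are illustrative, and GL_R32UI / GL_UNSIGNED_INT is used here instead of the 8-bit variant so the readback matches the GLuint-sized pixel pack buffer allocated in the question:

// Assumes a current GL 3.x context. Before linking the program, route the
// shader's uint output to color attachment 0, e.g.:
//    glBindFragDataLocation( progId, 0, "FragId" );

GLuint fbo, idTex, depthRb;
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );

// Unsigned-integer color attachment to match the shader's uint output.
glGenTextures( 1, &idTex );
glBindTexture( GL_TEXTURE_2D, idTex );
glTexImage2D( GL_TEXTURE_2D, 0, GL_R32UI, 640, 480, 0,
              GL_RED_INTEGER, GL_UNSIGNED_INT, 0 );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                        GL_TEXTURE_2D, idTex, 0 );

// Depth buffer so occluded fragments are rejected.
glGenRenderbuffers( 1, &depthRb );
glBindRenderbuffer( GL_RENDERBUFFER, depthRb );
glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 640, 480 );
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_RENDERBUFFER, depthRb );

// ... draw the scene into the FBO here ...

// Copy the rendered IDs into the pixel pack buffer, then read from it.
glBindBuffer( GL_PIXEL_PACK_BUFFER, fBufId );   // fBuf->bufferId() in Qt
glReadBuffer( GL_COLOR_ATTACHMENT0 );
glReadPixels( 0, 0, 640, 480, GL_RED_INTEGER, GL_UNSIGNED_INT, 0 );
// Only now will QGLBuffer::read or QGLBuffer::map see the rendered IDs.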


2 Comments

Thanks for the help, but this confuses me. 1) The FragColor output is the color output (a float vec4), and this renders correctly on screen. 2) Why don't I want to read from the buffer object? The buffer object is on the GPU. I've already attempted to connect the output variable "FragId" in the shader to this buffer. Does this not do what I think it does? It's worked fine for the input variables.
I guess I ran out of time to edit my comment. Addendum to part 1: ... this renders correctly on screen. GLSL automatically renders the first float vec4 to screen, as I understand it; it doesn't try to render later outputs. Can't I access multiple output variables from a fragment shader?
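
On that last question: yes, a fragment shader can have several outputs, but each one must be routed to its own color attachment (draw buffer); nothing connects a fragment shader output variable to a vertex-style buffer object. A sketch, assuming the raw program id is available (prog->programId() in Qt):

// Route each shader output to a color attachment; must happen before linking.
glBindFragDataLocation( progId, 0, "FragColor" );   // -> GL_COLOR_ATTACHMENT0
glBindFragDataLocation( progId, 1, "FragId" );      // -> GL_COLOR_ATTACHMENT1

// With an FBO that has a normalized color texture on attachment 0 and an
// integer texture on attachment 1, enable both as draw buffers:
GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers( 2, bufs );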
