
I'm trying to get my head around binding multiple textures that are sampled with different UV coordinate scales, specifically for map tiling (repeating), in order to mix an albedo map with a normal map that uses a different scale. Dynamically assigning multiple sets of texture coordinates to one mesh would be beneficial in many scenarios; one being that normal map details would really stand out for things like skin, rusty metal, etc. This would simply be accomplished by scaling the normal map's UV coordinates to something relatively high while leaving the albedo map's scale set to 1.

How would one go about communicating these texture coordinate sets from C++ to a GLSL shader, and what would be the most efficient and flexible pipeline for real-time rendering?
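To illustrate the effect I'm after, the fragment shader would do something like this (a rough sketch; the uniform name is just a placeholder):

    #version 330 core

    in vec2 uv;                      // the mesh's single UV set

    uniform sampler2D albedoMap;
    uniform sampler2D normalMap;
    uniform vec2      uvScaleNormal; // placeholder; e.g. vec2(8.0) for heavy tiling

    out vec4 fragColor;

    void main()
    {
        // Requires GL_REPEAT wrap mode on normalMap for the tiling to work.
        vec3 albedo = texture(albedoMap, uv).rgb;                 // scale 1
        vec3 n      = texture(normalMap, uv * uvScaleNormal).rgb; // tiled detail
        fragColor   = vec4(albedo * n, 1.0);                      // stand-in combine
    }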

I've already tried passing multiple attribute arrays through a number of layout locations into GLSL; however, this becomes too static for dynamic situations.

This is my vertex struct; however, I currently only pass one set of vertex positions and one set of texture coordinates:

    #include <cstdint>
    #include <vector>

    // One vertex attribute (e.g. positions or a UV set) stored as raw floats.
    struct VertexElement {
        uint16_t           componentSize;  // floats per vertex (3 = position, 2 = UV)
        std::vector<float> data;

        VertexElement(uint16_t comp_size, std::vector<float> vertex_data)
            : componentSize(comp_size), data(std::move(vertex_data)) {}
    };

    // A full vertex layout: byte stride plus the list of attributes.
    struct Vertex {
        uint16_t                   stride;
        std::vector<VertexElement> vertexElements;

        Vertex() = default;
        Vertex(uint16_t _stride, std::vector<VertexElement> _vertex_elements)
            : stride(_stride), vertexElements(std::move(_vertex_elements)) {}
    };
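For context, each VertexElement currently maps onto a vertex attribute roughly like this (a simplified sketch; it assumes one interleaved VBO is already bound and that stride is in bytes):

    // Sketch: one generic attribute per element, packed back to back.
    GLuint location = 0;
    size_t offset   = 0;
    for (const VertexElement& e : vertex.vertexElements) {
        glEnableVertexAttribArray(location);
        glVertexAttribPointer(location, e.componentSize, GL_FLOAT, GL_FALSE,
                              vertex.stride, reinterpret_cast<void*>(offset));
        offset += e.componentSize * sizeof(float);
        ++location;
    }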

Essentially, I'm looking to combine multiple textures with completely different coordinate values without having to statically declare every set in GLSL like this:

    layout (location = 0) in vec2 texCoord0;
    layout (location = 1) in vec2 texCoord1;        <----- BAD!!
    layout (location = 2) in vec2 texCoord2;
    ...

I've tried this; however, I need to readjust the texcoord set size dynamically for real-time editing. Is this possible in OpenGL?

Thanks.

3 Comments
  • So you want to have multiple texture coordinates per vertex, but you do not want to hard-code those texture coordinates in the shader layout? What about using an array of texcoords with a size integer in your layout? Commented Oct 28, 2019 at 20:33
  • "one being that normal map details would really stand out for things like skin, rusty metal ect. This would simply be acomplished by scaling the UV coord values to something relatively high, and leaving the albedo map scale set to 1." That doesn't make sense. The value of the texture coordinate doesn't affect the value you get from the texture, except in that it's selecting a value from a different location. That is, large UV values doesn't mean that the value fetched from the texture will "stand out". Any scaling should be applied to the value fetched from the texture. Commented Oct 28, 2019 at 22:55
  • Apologies for the vague explanation. If you look at Substance Painter, for example, it's possible to adjust the UV scale for individual textures independently in order to create better realism with the help of curvature and cavity maps. I was wondering if it was possible to achieve this using OpenGL with GLSL. Commented Oct 29, 2019 at 0:32

1 Answer


The interface between shaders is always statically defined. A shader has X inputs, always.

If you want some information to be determined dynamically, then you're going to have to use existing tools (image load/store, SSBOs, UBOs, etc) to read that information yourself. This is entirely possible for vertex shaders, since the gl_VertexID index can be used as an index into an array.
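A minimal sketch of that approach (the buffer layout and names here are assumptions, not the only way to structure it):

    #version 430

    // All UV sets packed tightly: uv[vertexIndex * uvSetCount + set].
    layout(std430, binding = 0) buffer TexCoords {
        vec2 uv[];
    };

    uniform int uvSetCount;   // how many UV sets each vertex carries

    out vec2 uvAlbedo;
    out vec2 uvDetail;

    void main()
    {
        uvAlbedo = uv[gl_VertexID * uvSetCount + 0];
        // Fall back to set 0 if only one set is present.
        uvDetail = uv[gl_VertexID * uvSetCount + min(1, uvSetCount - 1)];
        // ... position fetch and transform omitted for brevity
    }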

Generally speaking however, people don't need to do this. They simply define a particular vertex format and stick with that. Indeed, your specific use case doesn't seem to need this, since you're only changing the values of the texture coordinates, not whether they are being used or not.
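For the scaling use case, a pair of uniforms is all it takes; a minimal sketch of the C++ side (the program handle and uniform names are illustrative):

    GLint albedoScaleLoc = glGetUniformLocation(program, "uvScaleAlbedo");
    GLint normalScaleLoc = glGetUniformLocation(program, "uvScaleNormal");

    glUseProgram(program);
    glUniform2f(albedoScaleLoc, 1.0f, 1.0f);  // albedo stays at scale 1
    glUniform2f(normalScaleLoc, 8.0f, 8.0f);  // tile the detail normal map 8x

Updating these every frame is cheap, and the shader's interface never has to change.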

I need to readjust the texcoord set size dynamically for real-time editing.

That's not really possible, particularly for the use case you outlined.

Consider a normal map. The difference between using a normal map and not using a normal map is substantial. Not merely in what parameters you're passing to the VS but the fundamental nature of your shader.

Normal maps usually are tangent-space. So your shader now needs a full NBT matrix. And you're probably going to have to pass that matrix to your fragment shader, which will use it to construct the normal from the normal map. Of course, if you don't use a normal map, you just get a per-vertex normal. So you're talking about substantial changes to the structure of your code, not just the inputs.
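A typical fragment-shader side of that looks something like this (a sketch, assuming the vertex shader passes an interpolated TBN matrix; names are illustrative):

    #version 330 core

    in mat3 TBN;        // tangent, bitangent, normal from the vertex shader
    in vec2 uvNormal;   // the (possibly scaled) normal map UV

    uniform sampler2D normalMap;

    out vec4 fragColor;

    void main()
    {
        // Unpack the [0,1] texel into a [-1,1] tangent-space normal,
        // then rotate it into world space.
        vec3 n           = texture(normalMap, uvNormal).rgb * 2.0 - 1.0;
        vec3 worldNormal = normalize(TBN * n);
        fragColor        = vec4(worldNormal * 0.5 + 0.5, 1.0); // visualize
    }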

Shaders are just strings, so you are free to dynamically build them as you like. But that's going to be a lot more complex than just what inputs your VS gets.
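For instance, the source can be stitched together at runtime; a rough sketch (vsBody is a hypothetical std::string holding the rest of the vertex shader source):

    std::string src = "#version 430\n"
                      "#define UV_SET_COUNT " + std::to_string(uvSetCount) + "\n"
                      + vsBody;
    const char* ptr = src.c_str();
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &ptr, nullptr);
    glCompileShader(vs);
    // ... check GL_COMPILE_STATUS, relink the program, then swap it in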


1 Comment

Thank you for clarifying what's possible here. I was hoping to essentially make a texcoord pipeline system similar to Unreal's Material Editor, where the user can adjust any texture's UV scalar using a UV node, individually editing the ST values in real time. Maybe it's only possible with Direct3D, seeing as Unreal tends to utilise that API the majority of the time. I guess I'll just have to create a work-around by optimising tiled textures in Photoshop, or creating an ID mapping system.
