While this answer does not explain what is wrong with your approach, it presents a simpler way to render skyboxes.
Traditional way (textured cube)
A straightforward way to create a skybox is to render a textured cube centered on the camera position. Each face of the cube consists of two triangles and a 2D texture (or a region of an atlas). Because the texture coordinates differ per face, each face requires its own vertices. This approach has problems at the seams of adjacent faces, where the texture values are not interpolated properly.
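For illustration, here is a minimal sketch of the data for one face (the +Z face) with an interleaved position/UV layout; the other five faces repeat the pattern, each with their own four vertices. The layout and names are my assumptions, not part of the answer above:

// One face of the traditional skybox: two triangles, four unique
// vertices, interleaved as (x, y, z, u, v). Every face needs its
// own vertices because the texture coordinates differ per face.
const float kFrontFace[] = {
    -1.0f, -1.0f,  1.0f,  0.0f, 0.0f,
     1.0f, -1.0f,  1.0f,  1.0f, 0.0f,
     1.0f,  1.0f,  1.0f,  1.0f, 1.0f,
    -1.0f,  1.0f,  1.0f,  0.0f, 1.0f,
};
const unsigned short kFrontIndices[] = { 0, 1, 2, 2, 3, 0 };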
Cube with cubemap texture
As in the traditional way, a textured cube is rendered around the camera, but instead of six 2D textures a single cubemap texture is used. Because the camera is centered inside the cube, the vertex coordinates map one to one to the cubemap sampling vectors. Texture coordinates are therefore not needed in the mesh data, and the vertices can be shared between faces by using an index buffer.
This approach also fixes the seam problem, provided that GL_TEXTURE_CUBE_MAP_SEAMLESS is enabled.
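A sketch of what the shared mesh data could look like: eight corner vertices, an index buffer describing the twelve triangles, and the seamless-filtering switch. The winding below is illustrative; flip it as needed so the faces point inward under your culling setup:

// With a cubemap the positions double as sampling vectors, so the
// eight corners can be shared between all six faces.
const float kCubeVertices[] = {
    -1.0f, -1.0f, -1.0f,   1.0f, -1.0f, -1.0f,
     1.0f,  1.0f, -1.0f,  -1.0f,  1.0f, -1.0f,
    -1.0f, -1.0f,  1.0f,   1.0f, -1.0f,  1.0f,
     1.0f,  1.0f,  1.0f,  -1.0f,  1.0f,  1.0f,
};
const unsigned short kCubeIndices[] = {
    0, 1, 2,  2, 3, 0,   // -Z
    4, 6, 5,  6, 4, 7,   // +Z
    0, 3, 7,  7, 4, 0,   // -X
    1, 5, 6,  6, 2, 1,   // +X
    0, 4, 5,  5, 1, 0,   // -Y
    3, 2, 6,  6, 7, 3,   // +Y
};

// Lets the hardware filter across cube face edges:
glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);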
Simpler (better) way
When a cube is rendered with the camera inside it, the whole viewport gets filled, and up to five faces of the skybox can be partially visible at any time. The triangles of the cube faces are projected and clipped to the viewport, and the cubemap sampling vectors are interpolated between the vertices. All of this work is unnecessary.
It's possible to render a single quad that fills the whole viewport and to calculate the cubemap sampling vectors at its corners. Since the cubemap sampling vectors match the vertex coordinates, they can be obtained by unprojecting the viewport coordinates into world space. This is the opposite of projecting world coordinates onto the viewport and is achieved by inverting the matrices. Also make sure that you either disable the z-buffer write or write a depth value that is far enough away.
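A minimal host-side sketch of the draw call, under my own assumptions (the names skyboxProgram and skyboxQuadVao are hypothetical, and the quad is drawn first with the depth test and writes disabled, which is one way to satisfy the depth requirement above):

// The four corners of the viewport in clip space, in fan order.
// aPosition is declared vec4 in the shader; OpenGL expands the 2D
// attribute to (x, y, 0, 1) automatically.
const float kQuad[] = { -1.0f, -1.0f,   1.0f, -1.0f,
                         1.0f,  1.0f,  -1.0f,  1.0f };

glDisable(GL_DEPTH_TEST);          // or keep the test and write a far depth
glDepthMask(GL_FALSE);             // the sky must not occlude later geometry
glUseProgram(skyboxProgram);       // program built from the shaders below
glBindVertexArray(skyboxQuadVao);  // VAO holding kQuad
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);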
Below is the vertex shader that accomplishes this:
#version 330

uniform mat4 uProjectionMatrix;
uniform mat4 uWorldToCameraMatrix;

in vec4 aPosition;

smooth out vec3 eyeDirection;

void main() {
    // Unproject the clip-space corner into view space, then rotate it
    // into world space with the inverse (here: transposed) camera rotation.
    mat4 inverseProjection = inverse(uProjectionMatrix);
    mat3 inverseModelview = transpose(mat3(uWorldToCameraMatrix));
    vec3 unprojected = (inverseProjection * aPosition).xyz;
    eyeDirection = inverseModelview * unprojected;

    // The quad corner is already in clip space; pass it through untouched.
    gl_Position = aPosition;
}
aPosition contains the quad's vertex coordinates {-1,-1; 1,-1; 1,1; -1,1}. The shader calculates eyeDirection with the inverse of the model-view-projection matrix. However, the inversion is split between the projection and world-to-camera matrices, because only the 3x3 part of the camera matrix should be used; this eliminates the camera's position and keeps the camera centered inside the skybox. In addition, since my camera has no scaling or shearing, the inversion can be simplified to a transposition. The inversion of the projection matrix is a costly operation and could be precalculated, but as this code is executed by the vertex shader typically only four times per frame, it's usually a non-issue.
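If you do want to precalculate it, a sketch using GLM could look like the following; the uniform location uInverseProjectionLoc and the matrix projectionMatrix are hypothetical names, and the shader would then take the inverse matrix directly instead of calling inverse():

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Compute the inverse once per projection change on the CPU
// instead of once per vertex on the GPU.
glm::mat4 inverseProjection = glm::inverse(projectionMatrix);
glUniformMatrix4fv(uInverseProjectionLoc, 1, GL_FALSE,
                   glm::value_ptr(inverseProjection));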
The fragment shader simply performs a texture lookup using eyeDirection vector:
#version 330

uniform samplerCube uTexture;

smooth in vec3 eyeDirection;

out vec4 fragmentColor;

void main() {
    fragmentColor = texture(uTexture, eyeDirection);
}
Note that to move away from the compatibility profile you need to replace textureCube with plain texture and declare the output variable yourself.
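For completeness, a sketch of the host-side texture binding, reusing the hypothetical skyboxProgram name from above and assuming skyboxTexture holds the loaded cubemap:

// Bind the cubemap to texture unit 0 and point the sampler uniform at it.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, skyboxTexture);
glUniform1i(glGetUniformLocation(skyboxProgram, "uTexture"), 0);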