DMGregory

The model matrix converts from object space to world space, so it represents the position, rotation, and scale of the object relative to the world origin.

The view matrix converts from world space to view space, so it represents the position, rotation, and scale of your world relative to the viewpoint.

Put another way, if you create a "camera model matrix" that positions your virtual camera in your world (transforming from camera-relative coordinates to world coordinates), your view matrix is just the inverse of that matrix.
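As a minimal illustrative sketch of that relationship (Python/NumPy here, not tied to any particular graphics API; `rotation` is a 3x3 orientation matrix and `position` a 3-vector, placeholders for whatever your camera actually stores):

    import numpy as np

    def camera_model_matrix(rotation, position):
        # Object-to-world transform for the camera itself:
        # columns 0..2 are the camera's world-space axes, column 3 its position.
        m = np.eye(4)
        m[:3, :3] = rotation
        m[:3, 3] = position
        return m

    def view_matrix(rotation, position):
        # World-to-view transform: the inverse of the camera's model matrix.
        # np.linalg.inv(camera_model_matrix(rotation, position)) works too,
        # but for a rigid transform (rotation + translation) the inverse is
        # just the transposed rotation and a rotated, negated translation.
        m = np.eye(4)
        m[:3, :3] = rotation.T
        m[:3, 3] = -rotation.T @ position
        return m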

The reason you barely see anything with your current view matrix is that it represents a camera placed at the origin - it's inside the cube.

Try picking a point on a hemisphere some distance away from the cube, with the camera's local z axis rotated to point back at the cube (or away from the cube, if you're using the convention that the view-space z axis points out of the screen rather than into it, which it looks like you are). Make a matrix representing that position and orientation, then invert it to get your view matrix. You should see something a little more interesting.
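Continuing the same illustrative NumPy sketch, one way to build that camera (the target, distance, and angles are arbitrary placeholders, and the construction assumes the camera never sits exactly on the world up axis, where the cross product would degenerate):

    def hemisphere_camera(target, distance, azimuth, elevation):
        # A point on a hemisphere around `target`, oriented to look back at it.
        direction = np.array([np.cos(elevation) * np.cos(azimuth),
                              np.sin(elevation),
                              np.cos(elevation) * np.sin(azimuth)])
        eye = target + distance * direction

        z = direction                            # local z points away from the cube
        x = np.cross(np.array([0.0, 1.0, 0.0]), z)
        x = x / np.linalg.norm(x)                # local x: camera right
        y = np.cross(z, x)                       # local y: camera up

        rotation = np.column_stack((x, y, z))
        return view_matrix(rotation, eye)        # invert the camera's model matrix

    # e.g. 5 units away, 30 degrees around and 20 degrees above a cube at the origin
    view = hemisphere_camera(np.zeros(3), 5.0, np.radians(30.0), np.radians(20.0))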

"this should be handled in one of the matrices no?"

No, what you have there is usually handled by another stage of the pipeline. It's conventional to arrange the projection matrix so that, after perspective divide, x and y coordinates of on-screen points are in the range -1 to +1, with 0 in the middle of the viewport. That makes it very easy to clip/cull off-screen triangles, since the clipping planes are always x = ±1, y = ±1, regardless of the size or aspect ratio of the screen/viewport. When we want to convert to pixel coordinates in the range 0 to pixelWidth and pixelHeight, we need to add one, halve, and multiply by the viewport size on that axis, as you're doing here. When using a rendering API, this part is handled automatically between clipping and rasterization, so we don't need to bake it into our matrices.
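For illustration, that last step might look like this (same NumPy style; `pixel_width` and `pixel_height` are placeholders for your viewport size, and which way y flips depends on where your pixel origin sits):

    def ndc_to_pixels(ndc_x, ndc_y, pixel_width, pixel_height):
        # Map [-1, +1] coordinates (after the perspective divide) to pixels:
        # add one, halve, then scale by the viewport size on that axis.
        px = (ndc_x + 1.0) * 0.5 * pixel_width
        py = (ndc_y + 1.0) * 0.5 * pixel_height
        # If your pixel origin is the top-left rather than the bottom-left,
        # flip y instead: py = (1.0 - (ndc_y + 1.0) * 0.5) * pixel_height
        return px, py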
