a transformation matrix representing how the camera has been transformed in world space
the camera's horizontal field of view in radians, measured as the angle from the left edge of the visible screen to the right edge; acceptable values satisfy 0 < fieldOfView < Pi
the width of the output image plane
the height of the output image plane
the supersampling factor, in each direction
Determines whether a given screen-space bounding box is at least partially contained within the camera's view frustum.
Determines whether a given screen-space bounding box is at least partially contained within the camera's view frustum. The bounding box's x- and y-components should be in screen space, whereas the z-component should be the z-depth of the box. This also culls objects behind the camera (positive z-depth).
the bounding box to check
the visibility of the bounding box
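A minimal sketch of such a frustum test in TypeScript, assuming a screen rectangle of [0, width] by [0, height] and a camera looking down the negative z-axis (so positive z-depth means behind the camera, as stated above). The `BoundingBox` shape and function name are illustrative, not the renderer's actual API:

```typescript
// Hypothetical screen-space frustum test. Assumes points in front of the
// camera have negative z-depth; positive z-depth means behind the camera.
interface BoundingBox {
  minX: number; maxX: number;
  minY: number; maxY: number;
  minZ: number; maxZ: number;
}

function containsBoundingBox(
  box: BoundingBox, width: number, height: number
): boolean {
  // Cull boxes entirely behind the camera (positive z-depth).
  if (box.minZ > 0) return false;
  // Partial overlap with the screen rectangle is enough.
  return box.maxX >= 0 && box.minX <= width &&
         box.maxY >= 0 && box.minY <= height;
}

// A box straddling the left screen edge, in front of the camera:
const visible = containsBoundingBox(
  { minX: -10, maxX: 5, minY: 0, maxY: 5, minZ: -3, maxZ: -1 }, 640, 480);
// A box entirely behind the camera:
const culled = containsBoundingBox(
  { minX: 10, maxX: 20, minY: 10, maxY: 20, minZ: 1, maxZ: 2 }, 640, 480);
```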
Estimates whether the z-buffer occludes a projected bounding box.
Estimates whether the z-buffer occludes a projected bounding box. The projected bounding box's x- and y-components should be in screen space, whereas the z-component should be in scene world space.
Note: the supersampling camera will multiply the bounding box x- and y-coordinates (but not the z-depth) by the supersampling factor.
the bounding box to check
the z-buffer to check
whether the bounding box is occluded
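One way such an estimate could work, sketched in TypeScript under the assumption that the z-buffer stores one depth per pixel with larger values meaning farther from the camera; the layout and names are illustrative:

```typescript
// Hedged sketch: a box is occluded only if every pixel it covers already
// holds a depth nearer than the box's nearest depth.
function isOccluded(
  box: { minX: number; maxX: number; minY: number; maxY: number; nearZ: number },
  zBuffer: number[][]  // zBuffer[y][x], hypothetical row-major layout
): boolean {
  const x0 = Math.max(0, Math.floor(box.minX));
  const x1 = Math.min(zBuffer[0].length - 1, Math.ceil(box.maxX));
  const y0 = Math.max(0, Math.floor(box.minY));
  const y1 = Math.min(zBuffer.length - 1, Math.ceil(box.maxY));
  for (let y = y0; y <= y1; y++) {
    for (let x = x0; x <= x1; x++) {
      // Any covered pixel whose stored depth is behind the box can still
      // end up showing it, so the box is not occluded.
      if (zBuffer[y][x] > box.nearZ) return false;
    }
  }
  return true;
}

const depths = [
  [1, 1],
  [1, 9],
];
// Box at depth 5: pixel (1,1) has depth 9 > 5, so it may still be visible.
const occluded1 = isOccluded({ minX: 0, maxX: 1, minY: 0, maxY: 1, nearZ: 5 }, depths);
// Box at depth 10: every stored depth is nearer, so it is occluded.
const occluded2 = isOccluded({ minX: 0, maxX: 1, minY: 0, maxY: 1, nearZ: 10 }, depths);
```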
The length from the camera pinhole to the image plane.
The length from the camera pinhole to the image plane.
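Given the horizontal field of view defined above (the angle spanning the full image width), this length follows from pinhole geometry: tan(fieldOfView / 2) = (width / 2) / focalLength. A small TypeScript sketch, with illustrative names:

```typescript
// Pinhole geometry: the image plane sits where half the image width
// subtends half the horizontal field of view.
function focalLength(fieldOfView: number, width: number): number {
  return (width / 2) / Math.tan(fieldOfView / 2);
}

// A 90-degree field of view puts the image plane half the image width
// away from the pinhole: tan(45°) = 1, so f = width / 2.
const f = focalLength(Math.PI / 2, 640);
```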
The dimensions, in pixel width and height, of the image returned by the rendering functions.
The dimensions, in pixel width and height, of the image returned by the rendering functions. If the images returned by the rendering functions are larger, they will be downscaled. By default, this is simply the camera's width and height. Subclasses can override this function to downsample rendered images.
the preferred image dimensions
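A sketch of the kind of downscale a subclass might perform, assuming a simple box filter over a single-channel image and an integer scale factor; none of these names come from the renderer itself:

```typescript
// Hypothetical box-filter downscale: each output pixel averages an
// s-by-s block of input samples.
function downscale(img: number[][], s: number): number[][] {
  const h = img.length / s;
  const w = img[0].length / s;
  const out: number[][] = [];
  for (let y = 0; y < h; y++) {
    const row: number[] = [];
    for (let x = 0; x < w; x++) {
      let sum = 0;
      for (let dy = 0; dy < s; dy++)
        for (let dx = 0; dx < s; dx++)
          sum += img[y * s + dy][x * s + dx];
      row.push(sum / (s * s));
    }
    out.push(row);
  }
  return out;
}

// A 2x2 image collapses to one pixel holding the average of all samples.
const small = downscale([[0, 2], [4, 6]], 2);
```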
Transforms a point from world space into camera space, and then projects it into the screen space defined by u=0..width and v=0..height, with the origin at the lower-left corner of the screen.
Transforms a point from world space into camera space, and then projects it into the screen space defined by u=0..width and v=0..height, with the origin at the lower-left corner of the screen. The z-component is the z-depth of the point.
To transform without projecting, use transformToCamera instead.
the vector to project
the projected vector
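The projection step can be sketched as a standard perspective divide, assuming a camera-space point, the camera at the origin looking down the negative z-axis, and an image plane at the focal length; `focalLength` and the centring convention here are assumptions for illustration:

```typescript
// Minimal perspective projection sketch. Keeps the z-depth alongside the
// screen-space u/v coordinates, as the documentation above describes.
function projectToScreen(
  p: { x: number; y: number; z: number },
  focalLength: number, width: number, height: number
): { u: number; v: number; z: number } {
  const scale = focalLength / -p.z;   // perspective divide
  return {
    u: p.x * scale + width / 2,       // origin at the lower-left corner
    v: p.y * scale + height / 2,
    z: p.z,                           // z-depth, e.g. for the z-buffer
  };
}

// A point straight ahead of the camera projects to the screen centre.
const centre = projectToScreen({ x: 0, y: 0, z: -5 }, 320, 640, 480);
```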
Renders all of the surfaces in a list.
Renders all of the surfaces in a list. This function will split the surfaces, then dice, shade, and rasterize them using the given camera.
the root node of the scene graph
whether to only run displacement shaders in the shading step without running any color shaders
a function to run each time a portion of the image is done rendering
the resultant rendered texture and z-buffer
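The split/dice/shade/rasterize pass described above can be sketched as a simple pipeline. Every name and stage signature below is an illustrative stand-in, not the renderer's actual API:

```typescript
// Hedged pipeline sketch: split the scene, then dice, shade, and
// rasterize each split surface into the output buffers, reporting
// progress after each portion.
interface Buffers { texture: number[]; zBuffer: number[] }

function render(
  splitScene: () => string[],                            // split stage
  dice: (surface: string) => string,                     // dice stage
  shade: (grid: string, displaceOnly: boolean) => string, // shade stage
  rasterize: (grid: string, out: Buffers) => void,       // rasterize stage
  displaceOnly: boolean,
  reportProgress: (done: number, total: number) => void
): Buffers {
  const out: Buffers = { texture: [], zBuffer: [] };
  const surfaces = splitScene();
  surfaces.forEach((surface, i) => {
    rasterize(shade(dice(surface), displaceOnly), out);
    reportProgress(i + 1, surfaces.length);  // per-portion callback
  });
  return out;
}

// Minimal usage with stub stages:
const stageLog: string[] = [];
const buffers = render(
  () => ["a", "b"],
  s => s + ":diced",
  (g, displaceOnly) => g + (displaceOnly ? ":displaced" : ":shaded"),
  (g, out) => { out.texture.push(1); stageLog.push(g); },
  false,
  () => {}
);
```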
Splits primitive surfaces into smaller subsurfaces until their projected size falls below the size threshold.
Splits primitive surfaces into smaller subsurfaces until their projected size falls below the size threshold. As a result, objects closer to the camera are split into more partitions.
Note: the splitting stage is performed on the normal-scale proxy camera.
the surfaces to split
a list containing the split surfaces, prepared for the dicing step and sorted by increasing z-depth
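The splitting loop can be sketched as follows, with halving standing in for the renderer's actual subdivision rule and `projectedSize` standing in for a real projected-bound measurement; both are assumptions for illustration:

```typescript
// Recursive splitting sketch: any piece whose projected size exceeds the
// threshold is halved and reconsidered, so surfaces that appear larger on
// screen (i.e. closer to the camera) end up split into more pieces.
interface Piece { projectedSize: number; zDepth: number }

function split(surfaces: Piece[], threshold: number): Piece[] {
  const done: Piece[] = [];
  const queue = [...surfaces];
  while (queue.length > 0) {
    const s = queue.pop()!;
    if (s.projectedSize > threshold) {
      // Halve the surface and reconsider both halves.
      queue.push({ ...s, projectedSize: s.projectedSize / 2 });
      queue.push({ ...s, projectedSize: s.projectedSize / 2 });
    } else {
      done.push(s);
    }
  }
  // Sort by increasing z-depth, ready for the dicing step.
  return done.sort((a, b) => a.zDepth - b.zDepth);
}

// The size-4 surface splits into four pieces; the size-1 surface is kept
// whole, and the nearest piece (zDepth 1) sorts first.
const pieces = split(
  [{ projectedSize: 4, zDepth: 2 }, { projectedSize: 1, zDepth: 1 }], 1);
```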
Splits primitive surfaces into smaller subsurfaces until their projected size falls below the size threshold.
Splits primitive surfaces into smaller subsurfaces until their projected size falls below the size threshold. As a result, objects closer to the camera are split into more partitions.
the root node of the scene graph
a list containing the split surfaces, prepared for the dicing step and sorted by increasing z-depth
Creates a camera from this projection.
Creates a camera from this projection.
the new camera
Transforms a point from world space into camera space.
Transforms a point from world space into camera space. The point is not, however, projected into 2D screen space; the resulting coordinate remains in 3D camera space.
To also project the point into screen space, use projectToScreen instead.
the world space point to transform
the transformed vector in camera space
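A sketch of this transform, assuming a 4x4 row-major matrix holding the inverse of the camera's world transform (the worldToCamera matrix described below); the matrix layout is an assumption for illustration:

```typescript
// Apply the world-to-camera matrix to a world-space point. Only the top
// three rows matter for a point transform with an affine matrix.
type Mat4 = number[];  // 16 entries, row-major

function transformToCamera(
  worldToCamera: Mat4, p: [number, number, number]
): [number, number, number] {
  const [x, y, z] = p;
  const m = worldToCamera;
  return [
    m[0] * x + m[1] * y + m[2] * z + m[3],
    m[4] * x + m[5] * y + m[6] * z + m[7],
    m[8] * x + m[9] * y + m[10] * z + m[11],
  ];
}

// A camera translated to (0, 0, 5) in world space has a world-to-camera
// matrix that translates by (0, 0, -5):
const worldToCamera: Mat4 = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, -5,
  0, 0, 0, 1,
];
const camSpace = transformToCamera(worldToCamera, [1, 2, 0]);  // → [1, 2, -5]
```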
Like transformToCamera, but normalizes the resulting vector.
Like transformToCamera, but normalizes the resulting vector. This is useful for obtaining the vector from the focal point to the given point.
the world space point to transform
the transformed and normalized vector in camera space
Transforms a point from camera space back into world space.
Transforms a point from camera space back into world space.
the camera space point to transform
the transformed vector in world space
The transformation matrix that converts world to camera coordinates.
The transformation matrix that converts world to camera coordinates. This is the same as the inverse of the cameraTransform parameter.
A camera that supersamples when rendering the final image. Up to the split stage, the camera acts like a normal camera. After the split stage, the number of pixels sampled in each direction is multiplied by the supersampling factor.
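The post-split behaviour can be sketched in two parts, matching the notes above: the sampled resolution grows by the factor in each direction, and screen-space bounding-box x/y coordinates (but not the z-depth) are scaled by the same factor. All names here are illustrative:

```typescript
// Sampled resolution after the split stage: factor-times larger in each
// direction than the reported output dimensions.
function supersample(width: number, height: number, factor: number) {
  return { width: width * factor, height: height * factor };
}

// Bounding-box coordinates are scaled to match the sampled resolution;
// the z-depth is deliberately left unscaled.
function scaleBox(
  box: { minX: number; maxX: number; minY: number; maxY: number; z: number },
  factor: number
) {
  return {
    minX: box.minX * factor, maxX: box.maxX * factor,
    minY: box.minY * factor, maxY: box.maxY * factor,
    z: box.z,  // z-depth unscaled
  };
}

const sampled = supersample(640, 480, 2);  // 2x factor: 1280 x 960 samples
const scaled = scaleBox({ minX: 1, maxX: 2, minY: 3, maxY: 4, z: -7 }, 2);
```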