This documentation is intended to instruct developers in the authoring of custom integrators. Developers should also consult the
RixIntegrator.h header file for complete details.
An integrator plugin is used to model the integration of camera rays. These plugins are responsible for taking primary camera rays as input from the renderer and performing some work with these rays. Usually this work involves tracing the rays through the scene, computing the lighting on the hit points, and sending integrated results to the display services.
Implementing the RixIntegrator Interface
RixIntegrator.h describes the interface that integrators must implement.
RixIntegrator is a subclass of
RixShadingPlugin, and therefore shares the same initialization, synchronization, and parameter table logic as other shading plugins. Integrators do not support lightweight instances, and therefore
CreateInstanceData() should not be overridden, as any created instance data will never be returned to the
RixIntegrator. To start developing your own integrator, you can
#include "RixIntegrator.h" and make sure your integrator class implements the required methods inherited from the RixIntegrator interface.
The RIX_INTEGRATORCREATE() macro defines the
CreateRixIntegrator() method, which is called by the renderer to create an instance of the integrator plugin. Generally, the implementation of this method should simply return a
newly allocated copy of your integrator class. Similarly, the
RIX_INTEGRATORDESTROY() macro defines the
DestroyRixIntegrator() method, called by the renderer to delete an instance of the integrator plugin; a typical implementation of this method simply
deletes the passed-in integrator pointer:
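The original listing is not reproduced here; the following self-contained sketch illustrates the pattern, with MockRixIntegrator standing in for the real RixIntegrator base class (in an actual plugin, these entry points are defined via the RIX_INTEGRATORCREATE() and RIX_INTEGRATORDESTROY() macros from RixIntegrator.h):

```cpp
#include <cassert>

// MockRixIntegrator stands in for the real RixIntegrator base class.
struct MockRixIntegrator {
    virtual ~MockRixIntegrator() {}
};

// MyIntegrator is a hypothetical integrator; IntegrateRays(), Integrate(),
// etc. would be overridden here in a real plugin.
struct MyIntegrator : public MockRixIntegrator {
};

// Body of the function declared by RIX_INTEGRATORCREATE():
// simply return a newly allocated instance of your integrator class.
MockRixIntegrator* CreateRixIntegrator() {
    return new MyIntegrator();
}

// Body of the function declared by RIX_INTEGRATORDESTROY():
// delete the passed-in integrator pointer.
void DestroyRixIntegrator(MockRixIntegrator* integrator) {
    delete integrator;
}
```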
To facilitate the job of an integrator plugin, the renderer provides an integration context of type
RixIntegratorContext which contains information about the primary rays, pointers to implementations of display services and lighting services, and routines to trace rays against the renderer's geometric database.
Information about the primary camera rays is supplied via the
primaryRays field of the RixIntegratorContext. The
integratorCtxIndex field links a given shading point with its associated primary ray: the shading point with index
i is associated with the ray primaryRays[integratorCtxIndex[i]].
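This indexing can be illustrated with plain arrays; the types below are simplified stand-ins for the real RixShadingContext and ray structures, not the actual API:

```cpp
#include <cassert>
#include <vector>

// Simplified stand-ins: MockRay for a primary ray, MockShadingContext for
// the parts of RixShadingContext relevant to this example.
struct MockRay { float originX; };

struct MockShadingContext {
    // One entry per shading point, indexing into the primary ray array.
    std::vector<int> integratorCtxIndex;
};

// Returns the primary ray associated with shading point i.
const MockRay& primaryRayFor(const std::vector<MockRay>& primaryRays,
                             const MockShadingContext& sctx, int i) {
    return primaryRays[sctx.integratorCtxIndex[i]];
}
```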
Ray tracing services are provided by the renderer via the GetNearestHits() and
GetTransmission() methods on the RixIntegratorContext.
GetNearestHits() is invoked by an integrator to trace rays against the geometric database and return the list of nearest ray hits. Two versions of this routine are provided.
- The first version returns its ray hits in the form of a list of
RixShadingContext. These shading contexts represent a collection of points which have had their associated Bxdfs fully executed and set up for sample evaluation and generation. Since a Bxdf evaluation may trigger an upstream evaluation of all input patterns, this call is considered to be very expensive as it invokes full shading.
- The second version of this call returns its ray hits in the form of a list of
RtHitGeometry. No shading contexts are set up in this routine, and only information about the geometric locale is returned. This version of the call is preferred if no shading needs to be performed, such as in the case of an occlusion-only integrator.
All the shading contexts that are returned by
GetNearestHits() must be explicitly released back to the renderer with the
ReleaseShadingContexts() method. However:
- shading contexts that are provided by the renderer to the integrator (through Integrate()), and
- shading contexts returned to the renderer (through the output parameters of IntegrateRays())
do not need to be released. The renderer will take care of this when appropriate.
Shading contexts associated with a bxdf closure can use a significant amount of memory, so it is recommended to release them as soon as they are no longer needed. This is usually possible after the integrator is done evaluating or generating samples for these bxdfs.
GetTransmission() can be invoked by an integrator to compute the transmittance between two points in space. This is of use for bidirectional path tracing applications where the transmittance between vertex connections needs to be computed. It may also be used for computing shadow rays if the lighting services cannot be used for this purpose for some reason.
Both methods above need to be provided with an array of
RtRayGeometry structures that have been properly initialized.
The IntegrateRays() method is the primary entry point for this class, invoked by the renderer. The implementation of this routine is expected to fire the list of active primary rays delivered via the
RixIntegratorContext& ictx parameter. This method has output parameters: the
shadingCtxs parameter is expected to be filled in with a list of primary shading contexts associated with the firing of the active primary rays. These primary shading contexts should not be released by the integrator.
An implementation of
IntegrateRays() may choose to ignore the camera rays supplied in the
RixIntegratorContext entirely, and shoot an entirely different set of rays. If it chooses to do so, it should be extremely careful about splatting with the display services, as those routines are set up to be indexed by the
integratorCtxIndex field of the original primary rays.
The default implementation supplied for
IntegrateRays() simply calls
RixIntegratorContext::GetNearestHits to trace the primary rays, and passes the associated shading context results to
Integrate(), which is the secondary entry point for this class.
Integrate() is never directly invoked by the renderer; it is provided as an override convenience for implementors that are content with the default behavior of
IntegrateRays(). Following a call to
Integrate(), integrators are expected to provide results to the renderer via the display services.
The type of results depends on the integrator. Most integrators will at least trace camera rays and generate results that depend on the scene geometry, but this is not mandatory. Non-physically-based integrators may generate results that do not depend on object materials or lights (e.g.
PxrVisualizer), while physically-based integrators will usually simulate light transport, taking into account materials and light properties.
A physically-based integrator is expected to compute the amount of light coming from the camera ray hit toward the camera ray origin. This usually involves:
- Tracing camera rays provided to IntegrateRays() (hits are returned as RixShadingContext objects)
- Computing direct lighting (i.e. light coming directly from light sources), which involves:
  - Initializing lighting services for a given RixShadingContext
  - Generating light samples and evaluating the bxdf contribution for these
  - Generating bxdf samples and evaluating the light contribution for these
- Computing indirect lighting (i.e. light coming from parts of the scene that are not light sources), which involves:
  - Generating bxdf samples and tracing indirect rays for each of these
Computing direct lighting for a given batch of points (encapsulated by a
RixShadingContext object) first requires initializing the lighting services.
Once lighting services have been initialized, it is possible to ask for light sample generation and evaluation. Note that bxdf sample generation and evaluation is available as soon as the
RixShadingContext objects have been returned by GetNearestHits().
In a standard Multiple Importance Sampling computation, we need to:
- generate bxdf samples and evaluate light contribution for each of them
- generate light samples and evaluate bxdf response for each of them
Note that because the bxdf API returns multiple-lobe results, we need to set up
RixBXLobeWeights objects beforehand (instead of dealing with a simple
RtColorRGB per sample). This requires setting up
RtColorRGB buffers of appropriate size.
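The MIS weighting itself is independent of the bxdf API. A minimal balance-heuristic helper (a common choice for MIS weights, shown here as an illustrative sketch rather than PRMan's exact weighting) looks like:

```cpp
#include <cassert>

// Balance heuristic for combining two sampling strategies.
// nf/fPdf: sample count and pdf of the strategy that generated the sample;
// ng/gPdf: sample count and pdf of the competing strategy.
// The returned weight multiplies the sample's unweighted contribution.
float balanceHeuristic(int nf, float fPdf, int ng, float gPdf) {
    float f = nf * fPdf;
    float g = ng * gPdf;
    return f / (f + g);
}
```

In an integrator, this weight would be applied once to each light sample (with the bxdf pdf as the competing strategy) and once to each bxdf sample (with the light pdf as the competing strategy).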
Generating light samples, evaluating bxdf response
Generating bxdf samples, evaluating light contribution
Writing To The Display
Once the final contribution for a given shading point has been computed, the
RixDisplayServices API can be used to splat this contribution to the appropriate pixel. Integrators do not have direct access to the pixels; instead, they have to provide the display services with the appropriate integrator context index (which can be found in the shading context's integratorCtxIndex field).
Random Number Generation
More information about using RixRNG, including how to improve sampling strategies, can be found in the RixRNG documentation.
In addition to computing direct lighting (as described above), physically-based integrators also need to deal with indirect lighting. This is done by casting secondary rays from the camera hits and performing a full lighting computation on the secondary hit points. Since this involves computing both direct and indirect lighting, this is a recursive process.
The integrator is responsible for creating secondary rays (usually using the bxdf to do so) and tracing them by calling
RixIntegratorContext::GetNearestHits(). The integrator will then use the returned
RixShadingContext objects to compute direct and indirect lighting, similarly to what was done for the camera hits.
In order to get the directions and weights of the indirect rays, integrators should use the bxdf
GenerateSamples() method. Tracing indirect rays can be split into 3 steps.
Generating Bxdf Samples
Creating Indirect Rays
Trace Indirect Rays
The final pseudo-code for computing light transport is the following:
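The original listing is not reproduced here; in outline (a sketch of the recursion described above, not PRMan's exact listing), the light transport loop looks like:

```
IntegrateRays(ictx, shadingCtxs):
    // trace the active primary rays; full shading is set up on the hits
    shadingCtxs = ictx.GetNearestHits(ictx.primaryRays)
    for each shading context sctx in shadingCtxs:
        Integrate(sctx, depth = 0)
    // primary shading contexts are returned to the renderer, not released

Integrate(sctx, depth):
    initialize lighting services for sctx
    // direct lighting, with MIS between the two strategies:
    generate light samples; evaluate bxdf response; splat weighted contributions
    generate bxdf samples; evaluate light contributions; splat weighted contributions
    // indirect lighting:
    if depth < maxDepth:
        generate bxdf samples; build indirect rays from their directions and weights
        indirectCtxs = ictx.GetNearestHits(indirectRays)
        for each shading context isctx in indirectCtxs:
            Integrate(isctx, depth + 1)
        release indirectCtxs via ReleaseShadingContexts()
```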
Ray Differentials & Ray Spreads
Ray differentials determine texture filter sizes and hence texture mipmap levels (and texture cache pressure in scenes with many textures).
In RIS the goal for ray differential computation was improved efficiency (over REYES), even if it's not going to give quite as accurate ray differentials in all cases. Auxiliary ray-hit shading points are no longer created, and we only compute an isotropic ray "spread" - not a full anisotropic set of ray differentials. The ray spread expresses how much the ray gets wider for every unit of distance it travels.
Camera Ray Spread
By default, the spread of camera rays is set up such that the radius of a camera ray is 1/4 pixel. The width of the camera ray is two times its radius, i.e. 1/2 pixel. Footprints are constructed at ray hit points such that a camera ray hit footprint is 1/2 pixel wide. Equivalently, the area of a camera ray footprint is 1/4 square pixel. (This is true independent of image resolution and perspective/orthographic projection.)
This choice of default camera ray spread has both a theoretical and a practical foundation. Theory: footprints that are 1/2 pixel wide match the Nyquist sampling limit. Practice: our experiments indicate that footprints smaller than 1/2 pixel wide do not sharpen the final image, but footprints wider than that do soften the final image. Moving to smaller than 1/2 pixel width is all pain (finer mipmap levels, more texture cache pressure), no gain (no image quality improvement). Moving to wider than 1/2 pixel is more subjective: some people prefer the sharp look, some prefer the softer look.
Reflected Ray Spread
For reflection we compute the reflected ray spread using two approaches:
- Ray spread based on surface curvature. The ray spread for reflection from a curved smooth surface is simple to compute accurately using Igehy's differentiation formula.
- Ray spread based on roughness (pdf). The ray spread from a flat rough surface depends on roughness: the higher the roughness, the lower the pdf in a given direction; here we map the pdf to a ray spread using a heuristic mapping.

We set the overall ray spread to the max of these two.
This ray spread computation is done in the SetRaySpread() function (see RixIntegrator.h), which is called from the various RIS integrators. Integrator writers can easily make their own version of SetRaySpread() using other techniques and heuristics and call that from their integrators.
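A custom SetRaySpread() replacement could combine the two terms along the lines below. All function names and both mappings are illustrative placeholders (the real formulas are in RixIntegrator.h), shown only to make the max-of-two structure concrete:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Placeholder curvature term: a curved reflector changes beam width in
// proportion to curvature and to the footprint the beam has grown to at the
// hit point. This is NOT Igehy's exact formula, just an illustrative stand-in.
float spreadFromCurvature(float incidentSpread, float curvature, float hitDist) {
    return std::abs(incidentSpread + 2.0f * curvature * incidentSpread * hitDist);
}

// Placeholder pdf-to-spread heuristic: lower pdf means a wider lobe, hence
// a larger spread. The 1/sqrt(pdf) mapping is an assumption, not PRMan's.
float spreadFromPdf(float pdf) {
    return 1.0f / std::sqrt(std::max(pdf, 1e-6f));
}

// Overall reflected ray spread: the max of the two terms, as described above.
float reflectedRaySpread(float incidentSpread, float curvature,
                         float hitDist, float pdf) {
    return std::max(spreadFromCurvature(incidentSpread, curvature, hitDist),
                    spreadFromPdf(pdf));
}
```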
Ray Property Queries
RixBxdf or other shading plugins may want to query ray properties such as the ray depth or eye throughput, in order to allow for artistic control or optimization. For instance, as an optimization a
RixBxdf may want to skip the evaluation of a particularly expensive lobe, if the current ray depth of the hit point is beyond some arbitrary threshold.
Since it is the integrator that is best suited for tracking such ray properties, user-authored integrators that wish to participate in such ray property queries must override the
GetProperty() routine and provide the necessary information as requested by a
RixBxdf or other shading plugin. Integrators that do not implement ray property queries should return a failure from
GetProperty(), and a caller that is attempting to ask the integrator for the property must recover gracefully, e.g. by not applying the optimization.
The definition of
enum RayProperty is in
RixShading.h, and matches the
GetProperty() call from
RixShadingContext; in fact, the implementation of
RixShadingContext::GetProperty() simply turns around and calls
RixIntegrator::GetProperty(). Implementors should expect that some rays may be invalid, as signalled by a
rayId value less than zero. It is the caller's responsibility to allocate the correct amount of storage (i.e. the implementor of the callback in
RixIntegrator does not need to allocate the memory). The expected output return values in
result for each value of
RayProperty are as follows:
- result is expected to be an int *, and should be filled in with the current depth of the ray with matching rayId associated with the current IntegrateRays invocation. The implementor must check for rayId < 0 and return a depth of -1 if such a ray ID is encountered.
- result is expected to be a RixRNG::SampleCtx *, and should be filled in with a copy of the appropriate RixRNG::SampleCtx that ensures decent stratification results for the ray with matching rayId.
- result is expected to be an RtColorRGB *, and should be filled in with the current throughput to the eye of the ray with matching rayId associated with the current IntegrateRays invocation.
- k_RayVolumeScatterCount: result is expected to be an int *, and should be filled in with the number of times a volume direct light scattering event has occurred during the current IntegrateRays invocation for the given ray with matching rayId.
- k_RayVolumeSampleCount: result is expected to be an int *, and should be filled in with the number of times a volume sample was taken during the current IntegrateRays invocation for the given ray with matching rayId.
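As an illustration, a ray-depth query might be implemented along the lines below. The enum, signature, and return convention are simplified stand-ins for the real GetProperty() API (consult RixShading.h for the actual RayProperty definition):

```cpp
#include <cassert>

// Stand-in for the RayProperty enum from RixShading.h.
enum MockRayProperty { k_MockRayDepth };

// Stand-in for the per-invocation state an integrator tracks during
// IntegrateRays(); here, just a per-ray depth table.
struct MockIntegratorState {
    int rayDepths[4] = {0, 1, 2, 3};
};

// Returns true if the property was filled in; false if unsupported.
// The caller allocates the storage pointed to by result.
bool GetRayProperty(const MockIntegratorState& state, MockRayProperty prop,
                    int rayId, void* result) {
    if (prop != k_MockRayDepth)
        return false;                              // unsupported property
    int* depth = static_cast<int*>(result);
    *depth = (rayId < 0) ? -1 : state.rayDepths[rayId];  // invalid ray -> -1
    return true;
}
```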