
Introduction

This documentation is intended to instruct developers in the authoring of custom integrators. Developers should also consult the RixIntegrator.h header file for complete details.

An integrator plugin models the integration of camera rays. These plugins are responsible for taking primary camera rays as input from the renderer and performing some work with those rays. Usually this work involves tracing the rays through the scene, computing the lighting at the hit points, and sending integrated results to the display services.

Implementing the RixIntegrator Interface

RixIntegrator.h describes the interface that integrators must implement. RixIntegrator is a subclass of RixShadingPlugin, and therefore shares the same initialization, synchronization, and parameter table logic as other shading plugins. Integrators do not support lightweight instances, so CreateInstanceData() should not be overridden: any instance data created there will never be returned to the RixIntegrator. To start developing your own integrator, you can #include "RixIntegrator.h" and make sure your integrator class implements the required methods inherited from the RixShadingPlugin interface: Init(), Finalize(), Synchronize(), and GetParamTable().
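A minimal class skeleton might look like the following. The exact method signatures here are assumptions based on typical RixShadingPlugin declarations and should be verified against the declarations in RixIntegrator.h before building.

#include "RixIntegrator.h"

// Minimal integrator skeleton (illustrative only; signatures assumed).
class MyIntegrator : public RixIntegrator
{
public:
    virtual int Init(RixContext &ctx, char const *pluginPath)
    {
        return 0; // non-zero indicates failure
    }

    virtual void Finalize(RixContext &ctx)
    {
    }

    virtual void Synchronize(RixContext &ctx, RixSCSyncMsg syncMsg,
                             RixParameterList const *parameterList)
    {
    }

    virtual RixSCParamInfo const *GetParamTable()
    {
        static RixSCParamInfo table[] =
        {
            RixSCParamInfo() // an empty entry terminates the table
        };
        return &table[0];
    }

    // Primary entry point (signature assumed; see the Integration section below).
    virtual void IntegrateRays(RixIntegratorContext &ictx,
                               int &numShadingCtxs,
                               RixShadingContext const ** &shadingCtxs);
};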

The RIX_INTEGRATORCREATE() macro defines the CreateRixIntegrator() method, which is called by the renderer to create an instance of the integrator plugin. Generally, the implementation of this method should simply return a newly allocated instance of your integrator class. Similarly, the RIX_INTEGRATORDESTROY() macro defines the DestroyRixIntegrator() method called by the renderer to delete an instance of the integrator plugin; a typical implementation of this method is to delete the passed-in integrator pointer:

  
RIX_INTEGRATORCREATE
{
    return new MyIntegrator();
}

RIX_INTEGRATORDESTROY
{
    delete ((MyIntegrator*)integrator);
}

Integration Context

To facilitate the job of an integrator plugin, the renderer provides an integration context of type RixIntegratorContext which contains information about the primary rays, pointers to implementations of display services and lighting services, and routines to trace rays against the renderer's geometric database.

Primary Rays

Information about the primary camera rays is supplied via the numRays, numActiveRays, and primaryRays fields of the RixIntegratorContext.

TODO: Flesh this out if warranted.
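As a minimal sketch of how these fields are typically used (assuming the primary rays are delivered as an array of RtRayGeometry, described under Ray Tracing below), an integrator can walk the active rays like this:

for (int r = 0; r < ictx.numActiveRays; ++r)
{
    RtRayGeometry const &ray = ictx.primaryRays[r];
    // Inspect or retrace the ray here. Its integratorCtxIndex field (see the
    // Integration section) is what ties results splatted to the display
    // services back to this ray.
}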

Ray Tracing

Ray tracing services are provided by the renderer via the GetNearestHits and GetTransmission methods on the RixIntegratorContext.

GetNearestHits() is invoked by an integrator to trace rays against the geometric database and return the list of nearest ray hits. Two versions of this routine are provided (a brief usage sketch follows the list below).

  • The first version returns its ray hits in the form of a list of RixShadingContext. These shading contexts represent a collection of points which have had their associated Bxdfs fully executed and set up for sample evaluation and generation. Since a Bxdf evaluation may trigger an upstream evaluation of all input patterns, this call is considered to be very expensive as it invokes full shading. Any shading contexts that are returned by this routine must be released back to the renderer with the ReleaseShadingContexts() method after the integrator is done evaluating or generating samples on the associated Bxdfs of the shading contexts.
  • The second version of this call returns its ray hits in the form of a list of RtHitGeometry. No shading contexts are set up in this routine, and only information about the geometric locale is returned. This version of the call is preferred if no shading needs to be performed, such as in the case of an occlusion-only integrator.
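The following hedged sketch shows the first (shading) variant, where numRays and rays stand for the ray count and RtRayGeometry array being traced; the simplified parameter ordering is an assumption, so consult RixIntegrator.h for the exact signature:

int numCtxs = 0;
RixShadingContext const **ctxs = NULL;

// Trace the rays and get back fully shaded hit contexts (expensive: this
// runs the Bxdfs and their upstream pattern networks).
ictx.GetNearestHits(numRays, rays, numCtxs, ctxs);

for (int c = 0; c < numCtxs; ++c)
{
    RixShadingContext const *sctx = ctxs[c];
    // Evaluate or generate samples on the Bxdfs bound to sctx here.
}

// Shading contexts returned by GetNearestHits must be handed back.
ictx.ReleaseShadingContexts(numCtxs, ctxs);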

GetTransmission() can be invoked by an integrator to compute the transmittance between two points in space. This is useful for bidirectional path tracing applications, where the transmittance between vertex connections needs to be computed. It may also be used to compute shadow rays if the lighting services cannot be used for that purpose.

TODO: Document the RtRayGeometry class.

Integration

The IntegrateRays() method is the primary entry point for this class and is invoked by the renderer. The implementation of this routine is expected to fire the list of active primary rays delivered via the RixIntegratorContext ictx parameter. This method has two output parameters: numShadingCtxs and shadingCtxs are expected to be filled in with the list of primary shading contexts associated with firing the active primary rays. These primary shading contexts should not be released by the integrator.

An implementation of IntegrateRays() may choose to ignore the camera rays supplied in the RixIntegratorContext entirely and shoot a different set of rays. If it chooses to do so, it should be extremely careful about splatting with the display services, as those routines are set up to be indexed by the integratorCtxIndex field of the original primary rays.

The default implementation supplied for IntegrateRays() simply calls RixIntegratorContext::GetNearestHits to trace the primary rays, and passes the associated shading context results to Integrate(), which is the secondary entry point for this class. Integrate() is never directly invoked by the renderer; it is provided as an override convenience for implementors that are content with the default behavior of IntegrateRays.
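Continuing the hypothetical MyIntegrator class from above, an override that mirrors this default behavior might look roughly like this (the signatures of IntegrateRays, GetNearestHits, and Integrate are assumptions to verify against RixIntegrator.h):

void MyIntegrator::IntegrateRays(RixIntegratorContext &ictx,
                                 int &numShadingCtxs,
                                 RixShadingContext const ** &shadingCtxs)
{
    // Trace the active primary rays. The resulting shading contexts are the
    // primary contexts handed back to the renderer, so they are not released
    // here.
    ictx.GetNearestHits(ictx.numActiveRays, ictx.primaryRays,
                        numShadingCtxs, shadingCtxs);

    // Hand the primary hits to the secondary entry point for lighting and
    // indirect ray work.
    Integrate(ictx, numShadingCtxs, shadingCtxs);
}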

TODO: Describe a "typical" implementation of Integrate(), or see the breakout by sections below.

Direct Lighting

TODO: Give example of how to run Bxdf Evaluate and Generate in a simple path tracer, and how to use the lighting services in an integrator to combine samples to perform direct lighting.

Writing To The Display

TODO: Show example of how to use the display services in an integrator to do splatting.

Random Number Generation

TODO: Discuss RixIntegratorContext::rngCtx and how to correctly use to make sure stratification happens correctly. Discuss trajectory splitting.

Indirect Rays

TODO: Describe how one would typically use the GetNearestHits methods to advance to the next wavefront. Describe how this ties in to Bxdfs GenerateSamples.

TODO: Describe how to integrate volumes into this framework (probably leave this for jfong).

Ray Differentials & Ray Spreads

Ray differentials determine texture filter sizes and hence texture mipmap levels (and texture cache pressure in scenes with many textures).

In RIS, the goal for ray differential computation is improved efficiency (compared to REYES), even if the resulting ray differentials are not quite as accurate in all cases. Auxiliary ray-hit shading points are no longer created, and we only compute an isotropic ray "spread" - not a full anisotropic set of ray differentials. The ray spread expresses how much the ray gets wider for every unit of distance it travels.

Camera Ray Spread

By default, the spread of camera rays is set up such that the radius of a camera ray is 1/4 pixel. The width of the camera ray is two times its radius, i.e. 1/2 pixel. Footprints are constructed at ray hit points such that a camera ray hit footprint is 1/2 pixel wide. Equivalently, the area of a camera ray footprint is 1/4 pixel. (This is true independent of image resolution and perspective/orthographic projection.)

This choice of default camera ray spread has both a theoretical and a practical foundation. Theory: footprints that are 1/2 pixel wide match the Nyquist sampling limit. Practice: our experiments indicate that footprints smaller than 1/2 pixel wide do not sharpen the final image, but footprints wider than that do soften the final image. Moving to smaller than 1/2 pixel width is all pain (finer mipmap levels, more texture cache pressure), no gain (no image quality improvement). Moving to wider than 1/2 pixel is more subjective: some people prefer the sharp look, some prefer the softer look.

Reflected Ray Spread

For reflection we compute the reflected ray spread using two approaches:

  1. Ray spread based on surface curvature.

    The ray spread for reflection from a curved smooth surface is simple to compute accurately using Igehy's differentiation formula:

    spread' = spread + 2*curvature*PRadius
    
  2. Ray spread based on roughness (pdf).

    The ray spread from a flat rough surface depends on roughness: the higher the roughness the lower the pdf in a given direction; here we map the pdf to a ray spread using a heuristic mapping:

    spread' = c * 1/sqrt(pdf) -- with c = 1/8
    

We set the overall ray spread to the max of these two.

This ray spread computation is done in the SetRaySpread() function (see RixIntegrator.h), which is called from the various RIS integrators. Integrator writers can easily make their own version of SetRaySpread() using other techniques and heuristics and call that from their integrators.
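For illustration, the two heuristics above combine as follows; ComputeReflectedRaySpread is a hypothetical stand-in for a custom SetRaySpread() replacement, not the function shipped in RixIntegrator.h:

#include <algorithm>
#include <cmath>

// Combine the curvature-based and roughness-based spread heuristics
// described above and return the larger of the two.
float ComputeReflectedRaySpread(float spread,    // incoming ray spread
                                float curvature, // surface curvature at the hit
                                float PRadius,   // footprint radius at the hit
                                float pdf)       // pdf of the sampled direction
{
    // 1. Igehy-style spread growth from surface curvature.
    float curvatureSpread = spread + 2.0f * curvature * PRadius;

    // 2. Heuristic spread from roughness: the lower the pdf, the wider the spread.
    const float c = 1.0f / 8.0f;
    float roughnessSpread = c / std::sqrt(pdf);

    // The overall reflected ray spread is the max of the two estimates.
    return std::max(curvatureSpread, roughnessSpread);
}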