This documentation is intended to instruct developers in the authoring of custom integrators. Developers should also consult the RixIntegrator.h header file for complete details.
An integrator plugin is used to model the integration of camera rays. These plugins are responsible for taking primary camera rays as input from the renderer and performing some work with these rays. Usually this work involves tracing the rays through the scene, computing the lighting on the hit points, and sending integrated results to the display services.
RixIntegrator.h describes the interface that integrators must implement. RixIntegrator is a subclass of RixShadingPlugin, and therefore shares the same initialization, synchronization, and parameter table logic as other shading plugins. Integrators do not support lightweight instances, and therefore CreateInstanceData() should not be overridden, as any created instance data will never be returned to the RixIntegrator. To start developing your own integrator, you can #include "RixIntegrator.h" and make sure your integrator class implements the required methods inherited from the RixShadingPlugin interface: Init(), Finalize(), Synchronize(), and GetParamTable().
The RIX_INTEGRATORCREATE() macro defines the CreateRixIntegrator() method, which is called by the renderer to create an instance of the integrator plugin. Generally, the implementation of this method should simply return a newly allocated copy of your integrator class. Similarly, the RIX_INTEGRATORDESTROY() macro defines the DestroyRixIntegrator() method called by the renderer to delete an instance of the integrator plugin; a typical implementation of this method is to delete the passed-in integrator pointer:
RIX_INTEGRATORCREATE
{
    return new MyIntegrator();
}

RIX_INTEGRATORDESTROY
{
    delete ((MyIntegrator*)integrator);
}
To facilitate the job of an integrator plugin, the renderer provides an integration context of type RixIntegratorContext, which contains information about the primary rays, pointers to implementations of display services and lighting services, and routines to trace rays against the renderer's geometric database.
Information about the primary camera rays is supplied via the numActiveRays and primaryRays fields of the RixIntegratorContext. The RixShadingContext member int* integratorCtxIndex links a given shading point with its associated primary ray: the shading point with index i is associated with the ray primaryRays[integratorCtxIndex[i]].
Ray tracing services are provided by the renderer via the GetNearestHits() and GetTransmission() methods provided on the RixIntegratorContext.
GetNearestHits() is invoked by an integrator to send rays against the geometric database and return the list of nearest ray hits. Two versions of this routine are provided: one returns fully set up RixShadingContext objects for the hit points, while the other returns only RtHitGeometry results. No shading contexts are set up in the latter routine, and only information about the geometric locale is returned. This version of the call is preferred if no shading needs to be performed, such as in the case of an occlusion-only integrator.

All the shading contexts returned by GetNearestHits() must be explicitly released back to the renderer. However, the shading contexts passed by the renderer to RixIntegrator::Integrate() and RixIntegrator::IntegrateRays() do not need to be released; the renderer will take care of this when appropriate.

Shading contexts associated with a bxdf closure can use a considerable amount of memory, so it is recommended to release them as soon as they are no longer needed. This is usually possible after the integrator is done evaluating or generating samples for these bxdfs.
GetTransmission() can be invoked by an integrator to compute the transmittance between two points in space. This is of use for bidirectional path tracing applications where the transmittance between vertex connections needs to be computed. It may also be used for computing shadow rays if the lighting services cannot be used for this purpose for some reason.
Both methods above need to be provided with an array of RtRayGeometry structures that have been properly initialized.
The IntegrateRays() method is the primary entry point for this class invoked by the renderer. The implementation of this routine is expected to fire the list of active primary rays delivered via the RixIntegratorContext& ictx parameter. This method has two output parameters: the numShadingCtxs and shadingCtxs parameters are expected to be filled in with a list of primary shading contexts that are associated with the firing of the active primary rays. These primary shading contexts should not be released by the integrator.
An implementation of IntegrateRays() may choose to ignore the camera rays supplied in the RixIntegratorContext entirely, and shoot an entirely different set of rays. If it chooses to do so, it should be extremely careful about splatting with the display services, as those routines are set up to be indexed by the integratorCtxIndex field of the original primary rays.
The default implementation supplied for IntegrateRays() simply calls RixIntegratorContext::GetNearestHits to trace the primary rays, and passes the associated shading context results to Integrate(), which is the secondary entry point for this class. Integrate() is never directly invoked by the renderer; it is provided as an override convenience for implementors that are content with the default behavior of IntegrateRays. Following a call to IntegrateRays() (or Integrate()), integrators are expected to provide results to the renderer via the display services.
The type of results depends on the integrator. Most integrators will at least trace camera rays and generate results that depend on the scene geometry, but this is not mandatory. Non-physically-based integrators may generate results that do not depend on object materials or lights (e.g. PxrVisualizer), while physically-based integrators will usually simulate light transport, taking into account materials and light properties.
A physically-based integrator is expected to compute the amount of light coming from the camera ray hit toward the camera ray origin. This usually involves:

- Tracing the camera rays with IntegrateRays() (hits are returned as RixShadingContext objects).
- Computing direct and indirect lighting for each returned RixShadingContext.
- Splatting the results to the display services.
Computing direct lighting for a given batch of points (encapsulated by a RixShadingContext object) first requires initializing the lighting services.
RixLightingServices* lightingServices = integratorContext.GetLightingServices();
RixBXEvaluateDomain evalDomain = k_RixBXBoth;
RixLightingServices::Mode lsvcMode = RixLightingServices::k_IgnoreFixedSampleCount;
int fixedSampleCount = 0;
int indirectSamples = 1;

lightingServices->Begin(&shadingContext, &rixRNG, evalDomain,
                        RixLightingServices::k_MaterialAndLightSamples, lsvcMode,
                        RixLightingServices::SampleMode(), // defaults
                        &fixedSampleCount, totalDepth, indirectSamples);
//
// computeDirectLighting(...)
//
lightingServices->End();
Once lighting services have been initialized, it is possible to ask for light sample generation and evaluation. Note that bxdf sample generation and evaluation is available as soon as RixShadingContext objects have been returned by GetNearestHits().
In a standard Multiple Importance Sampling (MIS) computation, we need to generate and evaluate light samples, generate and evaluate bxdf samples, and combine both sets of contributions using MIS weights.
Note that because the bxdf API returns multiple-lobe results, we need to set up RixBXLobeWeights objects beforehand (instead of dealing with a simple RtColorRGB per sample). This requires setting up RtColorRGB buffers of appropriate size.
// numLightSamples is the number of light samples generated for *each* shading point.
// Currently, this value is used for all points in the current shading context.
// RixLightingServices::GenerateSamples() fills array parameters with the first
// sample for all shading points first, then the second sample, and so on...

// m_ClDiffuse, m_ClSpecular, m_ClUser are arrays of RtColorRGB buffers. Each buffer
// is of size numLightSamples * numPoints. They will store the generated light samples.
RixBXLobeWeights lightContributions(
    numLightSamples * numPoints,
    m_numPotentialDiffuseLobes, m_numPotentialSpecularLobes, m_numPotentialUserLobes,
    m_ClDiffuse, m_ClSpecular, m_ClUser);

// m_diffuse, m_specular, m_user are arrays of RtColorRGB buffers. Each buffer is of
// size numLightSamples * numPoints. They will store the bxdf contribution for each
// light sample.
RixBXLobeWeights evaluatedMaterialContributions(
    numLightSamples * numPoints,
    m_numPotentialDiffuseLobes, m_numPotentialSpecularLobes, m_numPotentialUserLobes,
    m_diffuse, m_specular, m_user);

// For additional description of the call parameters, see the RixLightingServices API.
lightingSvc->GenerateSamples(
    numLightSamples, &rixRNG,
    lightGroupIds, lightLpeTokens, directions, lightNormals, distance,
    &lightContributions, transmission, nullptr, lightPdf,
    lobesWanted, &evaluatedMaterialContributions, evaluatedMaterialFPdf,
    evaluatedMaterialRPdf, lobesEvaluated, nullptr, throughput);

// We don't need to make an explicit call to the bxdf's EvaluateSamples(), because
// the lighting services have done it for us, since we provided them with
// 'evaluatedMaterialContributions'.
// numBxdfSamples is the number of bxdf samples generated for *each* shading point.
// Currently, this value is used for all points in the current shading context.
RixBXLobeWeights bxdfContributions(
    numBxdfSamples * numPoints,
    m_numPotentialDiffuseLobes, m_numPotentialSpecularLobes, m_numPotentialUserLobes,
    m_diffuse, m_specular, m_user);

// The RixBxdf GenerateSample API is single-sample (per shading point), so when
// dealing with multiple bxdf samples, we need to wrap it inside a loop.
for (int bs = 0; bs < numBxdfSamples; bs++)
{
    int offset = bs * numPoints;

    // Changing the offset of the lobe weights will write into the lobe weights
    // at the appropriate offset for this set of bxdf samples.
    bxdfContributions.SetOffset(offset);

    bxdf.GenerateSample(k_RixBXDirectLighting, lobesWanted, &rixRNG,
                        lobeSampled + offset, directions + offset, bxdfContributions,
                        materialFPdf + offset, materialRPdf + offset, nullptr);

    for (int i = 0; i < numPoints; i++)
        distances[offset + i] = 1e20f;

    incRNG(shadingContext);
}

// Reset the offset of the lobe weights back to zero for the code below.
bxdfContributions.SetOffset(0);

RixBXLobeWeights lightContributions(
    numBxdfSamples * numPoints,
    m_numPotentialDiffuseLobes, m_numPotentialSpecularLobes, m_numPotentialUserLobes,
    m_ClDiffuse, m_ClSpecular, m_ClUser);

lightingSvc->EvaluateSamples(
    // inputs
    numBxdfSamples, &rixRNG, directions, distances, materialFPdf, &bxdfContributions,
    lobeSampled,
    // outputs
    lightGroupIds, lightLpeTokens, &lightContributions, transmission, nullptr,
    lightPdf, nullptr, nullptr, throughput);
Once the final contribution for a given shading point has been computed, the RixDisplayServices API can be used to splat this contribution to the appropriate pixel. Integrators do not have direct access to the pixels; instead, they have to provide the display services with the appropriate integrator context index (which can be found in RixShadingContext::integratorCtxIndex).
// Writing to display services. 'ciChannelId' is the id associated with the 'Ci' channel.
RixDisplayServices* displaySvc = integratorContext.GetDisplayServices();

// These point to the final contribution and alpha values we want to splat to the pixels.
RtColorRGB* finalContributions = ...; // of size shadingContext->numPts
RtColorRGB* finalAlpha = ...;         // of size shadingContext->numPts

for (int i = 0; i < shadingContext.numPts; i++)
{
    displaySvc->Splat(ciChannelId, shadingContext.integratorCtxIndex[i], finalContributions[i]);
    displaySvc->WriteOpacity(ciChannelId, shadingContext.integratorCtxIndex[i], finalAlpha[i]);
}
You can find more about using RixRNG in the RixRNG documentation; it will help you understand how to improve sampling strategies.
In addition to computing direct lighting (as described above), physically-based integrators also need to deal with indirect lighting. This is done by casting secondary rays from the camera hits, and performing a full lighting computation on the secondary hit points. Since this involves computing both direct and indirect lighting, this is a recursive process.
The integrator is responsible for creating secondary rays (usually using the bxdf to do so), and tracing them by calling RixIntegratorContext::GetNearestHits(). The integrator will then use the returned RixShadingContext objects to compute direct and indirect lighting, similarly to what was done in RixIntegrator::Integrate().
In order to get the directions and weights of the indirect rays, integrators should use the bxdf's GenerateSample() method. Tracing indirect rays can be split into three steps.
RixBXLobeWeights lw(
    numIndirectSamples * numPoints,
    m_numPotentialDiffuseLobes, m_numPotentialSpecularLobes, m_numPotentialUserLobes,
    m_diffuse, m_specular, m_user);

// Generate the indirect ray directions based on the bxdf.
for (int bs = 0; bs < numIndirectSamples; bs++)
{
    int offset = bs * numPoints;

    // Changing the offset of the lobe weights will write into the lobe weights
    // at the appropriate offset for this set of bxdf samples.
    lw.SetOffset(offset);

    bxdf.GenerateSample(k_RixBXIndirectLighting, m_lobesWanted, &rng,
                        m_lobeSampled + offset, m_directions + offset, lw,
                        m_FPdf + offset, m_RPdf + offset, nullptr);

    for (int i = 0; i < numPoints; i++)
        m_distances[offset + i] = 1e20f;
}

// Reset the offset of the lobe weights back to zero for the code below.
lw.SetOffset(0);
// Initialize the rays to be traced. We may not have to trace as many rays as
// bxdf samples were generated, since some of the bxdf weights may be zero, or
// we may use russian roulette, so we keep a count of the rays to process.
int currentRay = 0;
for (int bs = 0; bs < numIndirectSamples; bs++)
{
    for (int i = 0; i < numPoints; i++)
    {
        int sampleIndex = bs * numPoints + i;
        RtRayGeometry& ray = m_rays[currentRay];

        ray.origin = bias(P[i], Ngn[i], m_directions[sampleIndex], biasValue);
        ray.maxDist = m_distances[sampleIndex];
        ray.rayId = currentRay;
        ray.originRadius = iradius[i];
        ray.lobeSampled = m_lobeSampled[sampleIndex];
        ray.wavelength = wavelength ? RtRayGeometry::EncodeWavelength(wavelength[i]) : 0;

        // Compute the ray spread for the lobe.
        ray.SetRaySpread(m_lobeSampled[sampleIndex], iradius[i], ispread[i],
                         curvature[i], m_FPdf[sampleIndex]);
        ray.InitOrigination(&sCtx, Ngn, i);

        currentRay++;
    }
}
int numRays = currentRay;
// Let's trace the rays.
int* numShadingCtxs;
RixShadingContext const** shadingCtxs;
iCtx.GetNearestHits(numRays, m_rays, lobesWanted, false, numShadingCtxs, shadingCtxs);
The final pseudo-code for computing light transport is the following:
RixIntegrator::Integrate(numSCtxs, sCtxs)
    ComputeLightTransport(numSCtxs, sCtxs)
    Splat results to display services

ComputeLightTransport(numSCtxs, sCtxs)
    For each shading context sCtx:
        ComputeDirectLighting(sCtx)
        ComputeIndirectLighting(sCtx)

ComputeDirectLighting(sCtx)
    InitializeLightingServices()
    GenerateLightSamples()
    EvaluateBxdfSamples()
    Compute MIS weights
    GenerateBxdfSamples()
    EvaluateLightSamples()
    Compute MIS weights

ComputeIndirectLighting(sCtx)
    iRays = CreateIndirectRays(sCtx)
    (numSCtxs, sCtxs) = TraceIndirectRays(iRays)
    ComputeLightTransport(numSCtxs, sCtxs)
Ray differentials determine texture filter sizes and hence texture mipmap levels (and texture cache pressure in scenes with many textures).
In RIS, the goal for ray differential computation was improved efficiency (over REYES), even if it does not give quite as accurate ray differentials in all cases. Auxiliary ray-hit shading points are no longer created, and we only compute an isotropic ray "spread" - not a full anisotropic set of ray differentials. The ray spread expresses how much the ray gets wider for every unit of distance it travels.
By default, the spread of camera rays is set up such that the radius of a camera ray is 1/4 pixel. The width of the camera ray is two times its radius, i.e. 1/2 pixel. Footprints are constructed at ray hit points such that a camera ray hit footprint is 1/2 pixel wide. Equivalently, the area of a camera ray footprint is 1/4 pixel. (This is true independent of image resolution and perspective/orthographic projection.)
This choice of default camera ray spread has both a theoretical and a practical foundation. Theory: footprints that are 1/2 pixel wide match the Nyquist sampling limit. Practice: our experiments indicate that footprints smaller than 1/2 pixel wide do not sharpen the final image, but footprints wider than that do soften the final image. Moving to smaller than 1/2 pixel width is all pain (finer mipmap levels, more texture cache pressure), no gain (no image quality improvement). Moving to wider than 1/2 pixel is more subjective: some people prefer the sharp look, some prefer the softer look.
For reflection we compute the reflected ray spread using two approaches:

Ray spread based on surface curvature. The ray spread for reflection from a curved smooth surface is simple to compute accurately using Igehy's differentiation formula:

spread' = spread + 2*curvature*PRadius

Ray spread based on roughness (pdf). The ray spread from a flat rough surface depends on roughness: the higher the roughness, the lower the pdf in a given direction; here we map the pdf to a ray spread using a heuristic mapping:

spread' = c * 1/sqrt(pdf)   -- with c = 1/8

We set the overall ray spread to the max of these two.
This ray spread computation is done in the SetRaySpread() function (see RixIntegrator.h), which is called from the various RIS integrators. Integrator writers can easily make their own version of SetRaySpread() using other techniques and heuristics and call that from their integrators.
Implementors of RixBxdf or other shading plugins may want to query ray properties such as the ray depth or eye throughput, in order to allow for artistic control or optimization. For instance, as an optimization a RixBxdf may want to skip the evaluation of a particularly expensive lobe if the current ray depth of the hit point is beyond some arbitrary threshold.

Since it is the integrator that is best suited for tracking such ray properties, we require that user-authored integrators that would like to participate in such ray property queries override the GetProperty() routine and provide the necessary information as requested by the caller. Unsupported properties should be signalled by returning false from GetProperty(), and the caller that is attempting to ask the integrator for the property must recover gracefully by not implementing the optimization.
The definition of enum RayProperty is in RixShading.h, and matches the GetProperty() call from RixShadingContext; in fact, the implementation of RixShadingContext::GetProperty() simply turns around and calls RixIntegrator::GetProperty(). Implementors should expect that some rays may be invalid, as signalled by a rayId value less than zero. It is the caller's responsibility to allocate the correct amount of storage (i.e. the implementor of the callback in RixIntegrator does not need to allocate the memory). The expected output return values in result for each value of RayProperty are as follows:
- k_RayDepth: result is expected to be an int*, and should be filled in with the current depth of the ray with matching rayId associated with the current IntegrateRays invocation. The implementor must check for rayId < 0 and return a -1 depth if such a ray ID is encountered.
- k_RayRngSampleCtx: result is expected to be a RixRNG::SampleCtx*, and should be filled in with a copy of the appropriate RixRNG::SampleCtx that ensures decent stratification results for the ray with matching rayId.
- k_RayThruput: result is expected to be an RtColorRGB*, and should be filled in with the current throughput to the eye of the ray with matching rayId associated with the current IntegrateRays invocation.
- k_RayVolumeScatterCount: result is expected to be an int*, and should be filled in with the number of times a volume direct light scattering event has occurred during the current IntegrateRays invocation for the given ray with matching rayId.
- k_RayVolumeSampleCount: result is expected to be an int*, and should be filled in with the number of times a volume sample was taken during the current IntegrateRays invocation for the given ray with matching rayId.