
...

  for (int i = 0; i < numPts; i++)
  {
    P[i] += <offset vector>;
  }

Typically the offset vector is along the surface normal as in this example:

...
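In C-like pseudocode, displacing along the surface normal amounts to something like this sketch, where dispAmount stands for a per-point displacement amount and N for the per-point surface normal (the names are illustrative):

  for (int i = 0; i < numPts; i++)
  {
    // move each point along its (normalized) surface normal
    P[i] += dispAmount[i] * N[i];
  }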

After the displacement has been done, any smooth analytical normals that the original surface may have had (for example a smooth subdivision surface or NURBS patch) are no longer valid – they do not correspond to the displaced surface.  However, each micropolygon has a geometric normal Ngn.  In addition, for some surface types, a smooth shading normal will be automatically computed for each ray hit – this is done by considering the orientation of not only the micropolygon that the ray hit, but also adjacent micropolygons.
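One quick way to visualize where the smooth shading normal differs from the geometric normal is a small OSL pattern along these lines (a sketch using the standard OSL globals N and Ng; the shader and output names are illustrative):

  shader shownormaldiff(output color resultRGB = 0)
  {
    // N is the (smooth) shading normal, Ng the geometric normal at the hit point
    float d = 1 - dot(normalize(N), normalize(Ng));
    resultRGB = color(d, d, d);  // brighter where the two normals diverge
  }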

...

In order to ray trace a scene efficiently, RenderMan needs to know where the objects are.  Objects are organized into a ray acceleration data structure: a bounding volume hierarchy (BVH) where each node in the hierarchy is a bounding box for the objects below it in the hierarchy.  For displaced surfaces, computing these bounding boxes is a bit tricky because we don't know where the surface points actually are until the displacement shader has run.  But we don't want to run the displacement shader on all displaced surfaces before tracing the first rays – if we did, the time-to-first-pixel would suffer.  What we need is a rough indication of how large the displacement might be, without the expense of running the displacement shader to determine the exact displacement.  Such an indication is provided with a displacement bound for each displaced object; the displacement bound is an upper limit on the displacement of that object.  For example, if we know that the maximum magnitude of displacement on a given object is 0.5 units, then we can specify the displacement bound like this:

...

This bound means that any ray that is farther than 0.5 units away from the undisplaced surface can ignore that surface.  Only when a ray hits this "padded" bounding box (the bounding box of the undisplaced surface points, padded by 0.5 in x, y, and z) do we need to run the displacement shader.  Once the displacement shader has run and all the positions are known, their bounding box is computed and the BVH node bounding box is updated (tightened).  Selecting an appropriate displacement bound is important: if it is too large, the time to first pixel will be slow; if it is too small, the image will have holes (see the figure below).  In most cases, the displacement bound is just the same as the magnitude parameter of the displacement.  To help select an appropriate displacement bound in harder cases, RenderMan will give a warning (after rendering is completed) if the specified displacement bound was too small or more than ten times too large.
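In RIB, such a bound is declared with the displacementbound attribute; a minimal sketch for the 0.5-unit case above (the choice of coordinate system depends on the scene, "object" is just an example):

  Attribute "displacementbound" "float sphere" [0.5] "string coordinatesystem" ["object"]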

...

The dispstar shader is a very simple shader written in Open Shading Language (OSL).  It computes a displacement amount depending on radial angle in the x-y plane.  When applied to a spherical shape, it produces a round star with five soft spikes.  When applied to a teapot (with higher frequency), it produces a pumpkin-like shape.
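A rough OSL sketch of that idea (the parameter and output names here are illustrative and need not match the actual dispstar source):

  shader dispstar_sketch(
    float amplitude = 0.5,
    float frequency = 5,
    output float disp = 0)
  {
    // radial angle around the z axis, measured in the x-y plane of object space
    point Pobj = transform("object", P);
    float angle = atan2(Pobj[1], Pobj[0]);
    // a raised cosine of the angle gives soft spikes; frequency = 5 gives five of them
    disp = amplitude * pow(0.5 + 0.5 * cos(frequency * angle), 2);
  }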

...

In this example, the displacement amount computed by dispstar gets passed on to the PxrDisplace displacement shader, which does the actual displacement.  Here is an image of a sphere and the same sphere displaced with this combination of shaders (along with an image with holes due to a too small displacement bound):


Undisplaced sphere, displaced sphere in the shape of a fat five-pointed star, and the displaced sphere with holes due to a too small displacement bound.
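Expressed in RIB, the setup described above (the dispstar pattern feeding PxrDisplace, plus a displacement bound) might look roughly like this sketch; the pattern handle, the output name disp, and direct instantiation of the OSL pattern by name are assumptions:

  Attribute "displacementbound" "float sphere" [0.5]
  Pattern "dispstar" "dispstarPattern"
  Displace "PxrDisplace" "pxrDisplace"
    "reference float dispScalar" ["dispstarPattern:disp"]
  Sphere 1 -1 1 360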

...