
Displacement shaders: moving surface points (and normals)

After a surface has been tessellated, its surface points can be moved by a displacement shader.  The displacement shader is executed on each vertex of the tessellated surface.  In general, each point can be moved in an arbitrary direction and by an arbitrary amount:

  for (int i = 0; i < numPts; i++)
  {
    P[i] += <some offset vector>;
  }

Typically the offset vector is along the surface normal as in this example:

  for (int i = 0; i < numPts; i++)
  {
    P[i] += amplitude[i] * Nn[i];
  }

Figure ... shows a small grid of surface points before and after displacement.

TO DO: figure of 5x4 pt grid similar to figure 8.3 of ARM book.  But with points P and normals Nn shown(?).

After the displacement has been applied, any smooth analytical normals that the original surface may have had (for example, on a smooth subdivision surface or a NURBS patch) are no longer valid – they do not correspond to the displaced surface.  Each micropolygon of the displaced surface has a geometric normal Ngn.  In addition, for some surface types a smooth shading normal is automatically computed at each ray hit – this is done by considering the orientation not only of the micropolygon that the ray hit, but also of the adjacent micropolygons.
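
To make this concrete, here is a minimal C++ sketch of the idea.  The struct, the function names, and the simple neighbor-averaging scheme are illustrative assumptions, not the renderer's actual internals:

  #include <cmath>

  struct V3 { float x, y, z; };

  static V3 operator+(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
  static V3 operator-(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

  static V3 cross(V3 a, V3 b)
  {
      return {a.y * b.z - a.z * b.y,
              a.z * b.x - a.x * b.z,
              a.x * b.y - a.y * b.x};
  }

  static V3 normalize(V3 v)
  {
      float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
      return {v.x / len, v.y / len, v.z / len};
  }

  // Geometric normal Ngn of one (planar) micropolygon, computed from
  // three of its displaced corner points; counter-clockwise winding
  // is assumed.
  V3 MicropolygonNgn(V3 p0, V3 p1, V3 p2)
  {
      return normalize(cross(p1 - p0, p2 - p0));
  }

  // Smooth shading normal at a ray hit: sum the geometric normals of
  // the hit micropolygon and its neighbors, then renormalize.
  V3 SmoothShadingNormal(const V3 adjacentNgn[], int count)
  {
      V3 sum = {0.0f, 0.0f, 0.0f};
      for (int i = 0; i < count; i++)
          sum = sum + adjacentNgn[i];
      return normalize(sum);
  }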

Displaced polygon meshes deserve a special mention here.  Polygon meshes do not by themselves have smooth normals, but are often defined with a normal at each polygon mesh vertex point.  When a ray hits a face of an undisplaced polygon mesh, the shading normal Nn can be computed by interpolating the normals of the face vertices, leading to a smooth appearance.  But for a ray hit on a displaced polygon mesh, those vertex normals are no longer valid after the displacement.  A classic and very useful trick – originally developed for bump mapping – is to compute the difference between the interpolated smooth normal and the geometric normal of the undisplaced mesh, and add that difference back onto the geometric normal at ray hits on the displaced polygon mesh.

The PxrBump shading node and RixBump() function both employ this trick.
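
Here is a hedged C++ sketch of the trick; the names and signatures are ours for illustration, and the actual PxrBump and RixBump() implementations may differ in detail:

  #include <cmath>

  struct V3 { float x, y, z; };

  static V3 operator+(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
  static V3 operator-(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
  static V3 operator*(float s, V3 v) { return {s * v.x, s * v.y, s * v.z}; }

  static V3 normalize(V3 v)
  {
      float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
      return {v.x / len, v.y / len, v.z / len};
  }

  // Smooth shading normal of the undisplaced face: barycentric
  // interpolation of the three vertex normals at the hit's (u, v).
  V3 InterpolatedNn(V3 n0, V3 n1, V3 n2, float u, float v)
  {
      return normalize((1.0f - u - v) * n0 + u * n1 + v * n2);
  }

  // The trick: carry the difference between the smooth and geometric
  // normals of the undisplaced mesh over to the displaced mesh.
  V3 DisplacedShadingNormal(V3 NnUndisp,   // interpolated, undisplaced
                            V3 NgnUndisp,  // geometric, undisplaced
                            V3 NgnDisp)    // geometric, displaced
  {
      return normalize(NgnDisp + (NnUndisp - NgnUndisp));
  }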

Displacement bounds

The displacement bound tells the renderer how far the displacement shader can possibly move any surface point, so that the bounding boxes in the BVH can be padded (and tightened) accordingly.  The bound should not be too large: an overly conservative bound hurts efficiency and time to first pixel (TTFP).  It should not be too small either: if the actual displacement exceeds the bound, parts of the displaced surface get clipped away, leaving holes.  Choosing a good displacement bound is thus a balance.  After rendering has completed, the renderer will issue a warning if the bound was too small, or if it was more than 10 times larger than the actual maximum displacement.
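
As a rule of thumb, the bound can be set to the shader's maximum possible displacement plus a small safety margin.  Here is a hypothetical helper (not a RenderMan API; the 5% safety factor is an arbitrary choice) for a shader that displaces at most |scale| along the normal:

  #include <cmath>

  // Hypothetical helper, not a RenderMan API: a shader whose amplitude
  // is scale * sin(...) moves points by at most |scale| along the
  // normal; a small safety factor guards against precision issues
  // without inflating the bounding boxes too much.
  float ConservativeDisplacementBound(float scale, float safety = 1.05f)
  {
      return std::fabs(scale) * safety;
  }

For the dispstar example below, scale = 0.2 gives 0.2 * 1.05 = 0.21, which matches the displacement bound in the RIB fragment.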

TO DO: Figure with holes due to too small displacement bound.

Example

Here is a fragment of a RIB file in which a sphere is turned into a five-pointed star:

  Attribute "displacementbound" "float sphere" [0.21]
  Pattern "dispstar" "sin5theta" "float scale" 0.2
  Displace "PxrDisplace" "pxrdisp" "reference float dispScalar" "sin5theta:resultF"

  Sphere 1 -1 1 360

The dispstar shader is a very simple shader written in Open Shading Language (OSL).  It computes a displacement amount that depends on the radial angle in the x-y plane.  Applied to a spherical shape, it produces a round star with five spikes.  Applied to a teapot (with a higher frequency), it produces a pumpkin-like shape.

  shader
  dispstar(float scale = 1.0, float freq = 5.0,
           output float resultF = 0.0)
  {
    // Original, undisplaced normal (a RenderMan built-in)
    normal Non = normal(0.0);
    getattribute("builtin", "Non", Non);

    // Convert Non from current space to object space; using the
    // normal type ensures the proper (inverse transpose) transform
    normal Nobj = transform("object", Non);

    // Displacement amount: a sine wave in the radial angle
    // around the z axis
    float angle = atan2(Nobj[1], Nobj[0]);
    float disp = scale * sin(freq * angle);

    resultF = disp;
  }

In this example, the displacement amount computed by dispstar is passed on to the PxrDisplace displacement shader, which performs the actual displacement.  Here is an image of a sphere and of the same sphere displaced with this combination of shaders:

TO DO: image of displaced sphere = star.

The seasoned RenderMan user will notice that this is all very similar to how displacement was done in the classic RenderMan Shading Language (RSL) – see for example Apodaca and Gritz (2000), section 8.2.