You are viewing the RenderMan 21 documentation. The RManWiki home page will redirect you to the current documentation.
When rendering very complicated scenes, much of the scene often consists of objects that are identical or nearly identical. For example, the external facade of a building may be made up of many identical windows or bricks. RenderMan provides a powerful object instancing facility that takes advantage of this repetition by sharing and reusing the geometric representation among near-identical copies. The following image contains over 75,000 subdivision mesh teapots: rendering without instances requires a memory footprint of over 12.5 GB, even with the coarsest possible tessellation settings, while rendering with object instances requires less than 400 MB. This allows you to render far more objects before running into the memory limits of your machine.
In the renderer's implementation, a single representation of the geometry is designated as the master. This master geometry representation includes all the relevant geometric details (vertices, faces, primitive variables, etc.) as well as all attributes, including material (Bxdf) and light bindings. Multiple instances of this master can then be created. Each instance is allowed to have its own transformation, but is otherwise an identical clone of the master, except for a few attributes that can be overridden to allow a limited amount of shading variation. The renderer reuses most of the memory associated with the master for each instance, although each instance still carries a small, unavoidable overhead to store its transformation and the limited set of overridable attributes.
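The master/instance relationship can be expressed directly in RIB. The sketch below is illustrative rather than taken from the text above: the handle name, material parameters, and stand-in geometry are placeholders. A master is declared once with its geometry and Bxdf binding, then instanced multiple times, each instance carrying only its own transform.

```
# Declare the master once: geometry plus its material (Bxdf) binding.
ObjectBegin "teapotMaster"
  Bxdf "PxrSurface" "redPlastic" "color diffuseColor" [0.8 0.1 0.1]
  Sphere 1 -1 1 360    # stand-in geometry for the shared master
ObjectEnd

# Each instance clones the master but has its own transformation.
AttributeBegin
  Translate -2 0 0
  ObjectInstance "teapotMaster"
AttributeEnd

AttributeBegin
  Translate 2 0 0
  Rotate 45 0 1 0
  ObjectInstance "teapotMaster"
AttributeEnd
```

The geometry and shading data live only in the master; each `ObjectInstance` adds little more than a transform, which is what produces the memory savings described above.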
In practice, you should set a dicing strategy and length on instances so that they are tessellated at a useful level of detail; the default behavior is discussed below.
The default tessellation of instanced objects bears special consideration in RenderMan. By default, RenderMan tessellates geometry adaptively so that the resulting micropolygons have a length of a small number of pixels when projected to the screen. For instances, this screen-projection technique is not useful: no single tessellation can be reused for a master instanced at different distances from the camera. Instead, the default tessellation for instances is based on a world-space distance: the micropolygon length is measured directly in scene units, without projecting to the camera. This allows the renderer to reuse tessellations, but requires the user to specify the length directly. The default length is essentially unbounded, meaning the geometry will be rendered at the coarsest possible representation; for subdivision surfaces, this is typically the control hull, which may not be very detailed. In the pictures below, the teapot on the left is not instanced, while the teapot on the right is instanced with various settings of the world distance tessellation length.
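As a sketch, the world distance length is typically set via dicing attributes on the instanced geometry. The attribute names below (`instancestrategy`, `worlddistancelength`) reflect RenderMan's dicing controls, but should be checked against the attribute reference for your release; the handle name is a placeholder.

```
AttributeBegin
  # Tessellate instances by world-space micropolygon length
  # rather than by screen projection (attribute names assumed).
  Attribute "dice" "string instancestrategy" ["worlddistance"]
  Attribute "dice" "float worlddistancelength" [0.05]   # length in scene units
  ObjectInstance "teapotMaster"                         # hypothetical master handle
AttributeEnd
```

Smaller length values produce finer tessellation at the cost of memory; because the length is in scene units rather than pixels, one tessellation can be shared by instances at any distance from the camera.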
Even though the driving factor behind instancing is to minimize the variability between copies in order to maximize reuse and save memory, it is often still very desirable to be able to shade each variant differently. RenderMan supports shading variation primarily by the following methods:
- by binding different materials (Bxdfs and upstream pattern graphs) to each object instance. This approach is the easiest, but may incur setup overhead: for large numbers of instances, it can be tedious to set different shading parameters per instance. Also, a Bxdf bound to an instance overrides all Bxdfs bound to the master, even if the master is composed of multiple pieces of geometry with multiple different Bxdfs.
- by linking different lights and light filters to the object instance. Light linking on the master can be overridden by specifying different light links on the instances.
- by setting different user attributes per object instance, and driving Bxdfs using these attributes. RenderMan supports the use of completely arbitrary user attributes that may be set differently for every object in the scene. The use of these user attributes, combined with a PxrAttribute pattern node, lends itself well to driving shading variation, particularly if the values of those user attributes can be derived automatically.
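For example, a per-instance user attribute can be read back in the master's pattern graph with PxrAttribute. This is an illustrative sketch only: the user attribute name (`variation`), the PxrAttribute parameter and output names, and the master handle are assumptions to be checked against the node reference for your release.

```
# In the master's material graph: read a user attribute into a pattern.
# (PxrAttribute parameter and output names are assumed here.)
Pattern "PxrAttribute" "readVariation" "string varname" ["user:variation"]
Bxdf "PxrSurface" "varied" "reference float diffuseGain" ["readVariation:resultF"]

# Per instance: set a different value for the same user attribute.
AttributeBegin
  Attribute "user" "float variation" [0.25]
  ObjectInstance "teapotMaster"   # hypothetical master handle
AttributeEnd
```

Because the pattern graph lives on the master while the attribute value is set per instance, every instance shares one material network yet shades differently, which keeps the memory benefits of instancing intact.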