First: RenderMan uses an adaptive scheme when sampling a scene.
Areas with high variance, or noise, receive more samples than surrounding areas with less variance.
This means your primary control for rendering a clean image is the Pixel Variance parameter. Lower values improve quality by telling the renderer you want less noise; when RenderMan detects noise above this threshold, it sends more samples. Without lowering the Pixel Variance, RenderMan doesn't know you are unhappy with the noise level: altering max samples and re-rendering may produce the exact same image, and raising min samples defeats the purpose of an adaptive rendering technique.
Pixel Variance values down to 0.001 are very high quality. You can test what value is acceptable by doing a render region on the areas with the most noise, lowering the value until you're happy. Lower values take longer to render.
This is the global control for sampling the image. It is simple and handles all noise, given enough time and a large enough pool of samples to use.
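The adaptive idea can be sketched as a toy loop (purely illustrative; RenderMan's actual convergence test is more sophisticated): keep sampling a pixel until the estimated error of its mean falls below the variance target, or until the sample budget runs out. A tighter target forces more samples.

```python
import random

def adaptive_sample(pixel_variance, max_samples=4096, seed=0):
    """Sample a noisy 'pixel' until the estimated error of the mean
    falls below the pixel_variance target (a toy stand-in for
    RenderMan's convergence test, not its actual algorithm)."""
    rng = random.Random(seed)
    total, total_sq, n = 0.0, 0.0, 0
    while n < max_samples:
        s = 0.5 + rng.uniform(-0.5, 0.5)  # noisy radiance estimate
        n += 1
        total += s
        total_sq += s * s
        if n >= 16:  # need a few samples before variance is meaningful
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            if var / n < pixel_variance ** 2:  # error of mean vs. target
                break
    return n

# A tighter Pixel Variance target spends more samples on the pixel.
loose = adaptive_sample(0.05)   # converges quickly
tight = adaptive_sample(0.005)  # needs far more samples
```

Lowering the target is the honest way to tell the renderer "this is still too noisy"; the sample counts follow from that decision.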
What about min/max samples?
Min and max sample values are essentially clamps, or boundaries, given to RenderMan. Min samples is exactly that: the minimum number of samples to take for every pixel in the image; max samples is the maximum to take per pixel. They do not affect how many samples RenderMan decides to take based on the Pixel Variance parameter; instead they enforce a floor and a ceiling on that decision.
Typically we recommend keeping min samples as low as possible. You may increase them on scenes with a lot of tiny features or geometry like fur and hair. This guarantees a certain level of quality before the adaptive sampling takes over.
You may use high values for max samples, but note that if the Pixel Variance test resolves an area before reaching this limit, RenderMan will not take any more samples even though you specified a high max samples value.
When do you increase max samples?
Scenes with motion blur or depth of field may need many more samples to actually converge. If your max samples is too low, you end the pixel sampling before it has had a chance to converge. For example: if a pixel needs 200 samples to converge but you set max samples to 100, then no matter what you do to Pixel Variance, it won't help, because sampling is forced to stop at 100.
Typically, 512 max samples is sufficient in most cases with depth of field and/or motion blur.
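The interaction between the adaptive estimate and the two clamps boils down to a single expression (a hypothetical model; `samples_taken` and `needed` are illustrative names, not RenderMan API):

```python
def samples_taken(needed, min_samples, max_samples):
    """Samples actually spent on a pixel: the adaptive estimate
    clamped to the [min, max] bounds (illustrative model only)."""
    return max(min_samples, min(needed, max_samples))

# The example above: a pixel needing 200 samples to converge
# stops early when max samples is 100, regardless of Pixel Variance.
low_cap = samples_taken(200, 0, 100)   # -> 100 (clipped; pixel stays noisy)
high_cap = samples_taken(200, 0, 512)  # -> 200 (free to converge)
floor = samples_taken(50, 64, 512)     # -> 64  (min enforces a quality floor)
```

This is why raising max samples alone can't clean an image; it only removes the ceiling so the Pixel Variance target can do its job.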
Note that sampling is also tied to filtering, which is what provides the final anti-aliasing. We recommend the default of Gaussian with a 2x2 filter width and don't typically see a need to change it: it's an inexpensive filter, and with a 2x2 window it retains sharpness. Sinc filters enhance edges and may cause ringing and artifacts; essentially, they create detail where there is none. Avoid these unless you need a large image for print, and be careful of bright highlights when using them.
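For reference, the Gaussian pixel filter can be sketched along the lines of the classic RI spec definition (a sketch under that assumption; PRMan's internal implementation may differ): each sample's contribution to a pixel is weighted by a Gaussian of its distance from the pixel center, falling to zero outside the filter window.

```python
import math

def gaussian_filter(x, y, xwidth=2.0, ywidth=2.0):
    """Pixel-filter weight patterned on the classic RI spec Gaussian:
    exp(-2 r^2), with (x, y) normalized to the filter half-width.
    A sketch for illustration, not PRMan's exact code."""
    if abs(x) > xwidth / 2 or abs(y) > ywidth / 2:
        return 0.0  # sample lies outside the 2x2 filter window
    x *= 2.0 / xwidth
    y *= 2.0 / ywidth
    return math.exp(-2.0 * (x * x + y * y))

# Weight falls off smoothly from the pixel center to the window edge,
# which is what blends samples into an antialiased but sharp result.
center = gaussian_filter(0.0, 0.0)  # -> 1.0
edge = gaussian_filter(1.0, 0.0)    # -> exp(-2), about 0.135
```

A wider window or a negative-lobed filter (like sinc) changes these weights, which is where softness or ringing comes from.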
Experience will help you determine your needs over time, and I recommend experimenting with settings on isolated sections of your images; this can save a lot of time. Also know that noise isn't always noticeable at lower levels when seen in playback. Avoid over-tuning a scene by using a still image as your only guide; no one is going to pause your film in the theater. :-) Note that visual effects shots may also add film-like grain on top, undoing your hard work (and render time) spent on a perfectly clean image. Noise caused by motion blur is typically the most forgiving because it is transient over time and space.
The above recommendations are global controls for your image quality all at once. I don't cover local sampling here (lights, BxDF, volume, and indirect); these settings provide more granular control and will be handled later. Do keep Light and BxDF samples the same, and change them only for debugging purposes when writing and testing BxDFs.