tractor-spool

The spooler delivers job scripts into the Tractor job queue for processing and distribution on the farm. Job scripts are accepted as files in the Alfred job format, and are converted upon receipt by the Tractor engine into an internal job database format (currently JSON files).


tractor-spool [options] [jobfile ...]

tractor-spool [options] -c [/path/]appname appArg1 appArg2 ...

tractor-spool [options] --ribs frm1.rib frm2.rib ...

tractor-spool [options] --rib frm1_prologue.rib frm1.rib...

tractor-spool [options] --jdelete=JOB_ID --user=JOB_OWNER

Where jobfile(s) are job scripts describing the work to be done. Several job file names may be given; they will be submitted to the job queue sequentially.

Note on terminology: Job submission is sometimes referred to as spooling by analogy to print spooling or traditional batch job systems.

--version

show the program's version information, then exit.

-h, --help

print a usage summary, then exit.

-v

print verbose status messages.

-q

quiet mode; print no status messages.

--engine=HOST[:port]

the hostname[:port] of the master Tractor daemon; the default is tractor-engine, which is usually a DNS alias. The default port is 80, and it must match the port number on which tractor-engine is actually listening (chosen when the engine is started). If this option is not given, the spooler will also look for an environment variable named TRACTOR_ENGINE containing a host:port specification before falling back to these defaults.
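The resolution order described above (an explicit --engine value, then the TRACTOR_ENGINE environment variable, then the built-in defaults) can be sketched as follows. This is a hypothetical helper for illustration only; it is not part of the tractor-spool distribution:

```python
import os

# Built-in defaults, per the option description above.
DEFAULT_HOST = "tractor-engine"  # usually a site-wide DNS alias
DEFAULT_PORT = 80                # must match the engine's listening port

def resolve_engine(engine_flag=None):
    """Return (host, port) for the Tractor engine.

    Precedence: explicit --engine value, then the TRACTOR_ENGINE
    environment variable, then the defaults.
    """
    spec = engine_flag or os.environ.get("TRACTOR_ENGINE") or DEFAULT_HOST
    if ":" in spec:
        host, port = spec.rsplit(":", 1)
        return host, int(port)
    return spec, DEFAULT_PORT
```

A bare hostname such as "farm01" resolves to port 80, while "farm01:8000" overrides both parts.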
-c, --command

execute the command with the given arguments, which are specified by collecting all of the remaining command-line parameters; a single-task Tractor job is then created to send the given command to a remote tractor-blade. NOTE: this must be the LAST option given. Each of the "words" (shell tokens) following -c becomes an individual argument to the given application. For example:

tractor-spool -c echo hello world

The search paths and other environment settings used to launch the given command are under the control of the tractor-blade server on each host. There are default paths and settings, as well as configurable site-defined "environment packages" -- such as the locked-down settings for a given show in production -- which can be selected with other options described here, such as --envkey below.

If the command requires a specific type of remote server, add a --service=name specification before -c. Service name abstractions are defined in blade.config, but you can also simply use a hostname if you are targeting a specific host, or a profile name if any host from that class is acceptable.

There are several built-in "portability aliases" for common commands that are often used in test or diagnostic situations. These aliases start with an equals sign to distinguish them from actual executables. Probably the most useful is:

tractor-spool --service=somehost -c =printenv
--hname=HOMEBLADE

the origin hostname for this job, used to find the blade that will run "local" Cmds; the default is the hostname from which the job is spooled, but an alternate can be useful when the locally retrieved hostname is not the one that other hosts on the network can resolve.

--user=LOGIN

the user (login) to be associated with this job; the default is the name of the user executing the spooling script.

--jobcwd=DIRNAME

blades will attempt to chdir to the specified directory when launching commands from this job; the default is simply the current directory at the time tractor-spool is run.

--priority=FLOAT

an arbitrary, positive, floating-point priority for the job(s) being spooled; jobs with higher-valued priorities are processed first.

--paused

spool the job in a paused state, meaning that no tasks will be launched from it until its priority is later changed from a negative to a positive number.

--envkey=ENVKEY

used with -c and -r to change the environment key used to configure the environment variables applied to these auto-generated job scripts; default: None.

--svckey=SVCKEY

specifies an additional job-wide service key restriction for Cmds in the spooled job; the key(s) are ANDed with any keys found on the Cmds themselves. When used with the -c or --rib option, it overrides "PixarRender" as the sole service key used to select matching blades for those Cmds.

--title="words"

used with -c and -r to change the default title of the auto-generated job.

--status-json

causes the diagnostic message regarding the spooling outcome to be formatted as a JSON dictionary rather than the default plain-text message.
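Scripts that wrap tractor-spool and inspect its outcome may want to handle both output forms. The following is a hypothetical sketch; the exact field names of the JSON dictionary are not specified here and are not assumed:

```python
import json

def parse_spool_outcome(output):
    """Interpret tractor-spool's diagnostic message.

    With --status-json the message is a JSON dictionary; otherwise it is
    plain text. The dictionary's field names are not assumed here; a
    plain-text message is wrapped so callers always receive a dict.
    """
    try:
        result = json.loads(output)
        if isinstance(result, dict):
            return result
    except ValueError:
        pass
    # Fallback for the default plain-text diagnostic message.
    return {"message": output.strip()}
```

This keeps calling code uniform whether or not --status-json was passed.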
-r, --ribs

treat the jobfile filename arguments as individual RIB files to be rendered as INDEPENDENT prman processes on different remote tractor-blades; a multi-task Tractor job is automatically created to handle the renderings.

--rib

treat the jobfile filename arguments as RIB files to be rendered by a SINGLE prman process on a remote tractor-blade; a single-task Tractor job is automatically created to handle the rendering.

--nrm

causes auto-generated --rib jobs to use netrender on the local blade rather than rendering directly with prman on a remote blade; used when the named RIB file is not directly accessible from the remote blades.

--jretire=INTEGER

remove the job indicated by the integer job-id from the active job queue, and terminate any running commands.

--range=RANGE

creates auto-generated jobs in which a template containing the pattern "%d" is expanded into a series of parallel tasks, each referring to a sequential integer in the specified range. For example:

tractor-spool --no-spool --range 1-3 -c cp /src/foo.%d.rib /dst/bar.%d.rib

prints the following job script (rather than spooling it, due to --no-spool):

Job -title {cp ...} -subtasks {
  Task -title {cp} -cmds {
     RemoteCmd {{cp} {/src/foo.1.rib} {/dst/bar.1.rib}} -service {pixarRender}
  }
  Task -title {cp} -cmds {
     RemoteCmd {{cp} {/src/foo.2.rib} {/dst/bar.2.rib}} -service {pixarRender}
  }
  Task -title {cp} -cmds {
     RemoteCmd {{cp} {/src/foo.3.rib} {/dst/bar.3.rib}} -service {pixarRender}
  }
}

A range can be: (a) a single integer (e.g. 5) (b) two integers separated by a hyphen (e.g. 1-5) (c) a comma-separated list of (a) or (b) (e.g. 1-5,10,15,20-30)
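The range grammar above, together with the "%d" template expansion, can be modeled with a short sketch. The function names (parse_range, expand_template) are hypothetical, not part of tractor-spool:

```python
def parse_range(spec):
    """Expand a range spec like "1-5,10,15,20-30" into a list of ints."""
    frames = []
    for part in spec.split(","):
        if "-" in part:
            # Two integers separated by a hyphen: an inclusive range.
            lo, hi = part.split("-", 1)
            frames.extend(range(int(lo), int(hi) + 1))
        else:
            # A single integer.
            frames.append(int(part))
    return frames

def expand_template(argv, spec):
    """Yield one argument list per frame, substituting %d in each token."""
    for frame in parse_range(spec):
        yield [tok.replace("%d", str(frame)) for tok in argv]
```

For example, expand_template(["cp", "/src/foo.%d.rib", "/dst/bar.%d.rib"], "1-3") yields the three cp argument lists seen in the generated tasks above.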

--remotecleankey=KEY

converts "local" clean-up Cmds into RemoteCmds. In some contexts the user may not have control over the details of job generation, and so cannot otherwise force clean-ups to be remote. A "local" Cmd is required to run on the same host from which the job was spooled, such as an artist's workstation, meaning that the user would have to keep a tractor-blade process running on their desktop just to handle these clean-up Cmds. In contrast, a RemoteCmd clean-up will run on the first available farm machine.

When this option is given, it will also convert Job -whenerror and -whendone blocks into RemoteCmds rather than local Cmds by default. The option requires a blade service key name, which specifies the appropriate type of blade to run the clean-up.

This conversion option is a workaround for cases where the job script generator itself cannot be updated to use RemoteCmd directly in clean-up blocks, and to convert Job -whendone/-whenerror blocks into the more general -postscript blocks.