The spooler delivers job scripts into the Tractor job queue for processing and distribution on the farm. Job scripts are accepted as files in the Alfred job format, and are converted upon receipt by the Tractor engine to an internal job database format (currently JSON files).


tractor-spool [options] [jobfile ...]

tractor-spool [options] -c [/path/]appname appArg1 appArg2 ...

tractor-spool [options] --ribs frm1.rib frm2.rib ...

tractor-spool [options] --rib frm1_prologue.rib frm1.rib...

tractor-spool [options] --jdelete=JOB_ID --user=JOB_OWNER

Where jobfile(s) are job scripts describing the work to be done. Several job file names may be given; they will be submitted to the job queue sequentially.

Note on terminology: Job submission is sometimes referred to as spooling by analogy to print spooling or traditional batch job systems.

--version  show program's version information, then exit
-h, --help  print usage summary, then exit
-v  add verbose status messages

quiet mode, print no status

--engine=HOST[:port]  hostname[:port] of the central Tractor Engine service, default is tractor-engine:80. The hostname "tractor-engine" is usually a DNS alias added by local network administrators, pointing to whichever real hostname is currently running the service. The alias simplifies all connections to the tractor-engine process because everything defaults to trying that name. The default port is 80, and must match the port number on which tractor-engine is actually listening (chosen when starting tractor-engine). If this option is not given, then the spooler will first look for an environment variable named TRACTOR_ENGINE for a host:port specification before using the defaults.
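The precedence described above (command-line option, then the TRACTOR_ENGINE environment variable, then the tractor-engine:80 default) can be sketched as follows; this is a minimal illustration of the documented lookup order, not the spooler's actual implementation:

```python
import os

DEFAULT_ENGINE = "tractor-engine:80"

def resolve_engine(cli_engine=None, env=None):
    """Pick the engine host:port using the documented precedence:
    the --engine flag, then TRACTOR_ENGINE, then the default."""
    if env is None:
        env = os.environ
    if cli_engine:
        hostport = cli_engine
    else:
        hostport = env.get("TRACTOR_ENGINE", DEFAULT_ENGINE)
    # A bare hostname falls back to the default port 80.
    if ":" not in hostport:
        hostport += ":80"
    return hostport
```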



-c  execute the command with the given arguments, collecting all of the remaining command-line parameters and creating a single-task Tractor job that sends the given command to a remote tractor-blade. NOTE: this must be the LAST option given. Each of the "words" (shell tokens) following the -c becomes an individual argument to the given application. For example:

tractor-spool -c echo hello world

The search paths and other environment settings used to launch the given command are under the control of the tractor-blade server on each host. There are default paths and settings as well as configurable site-defined "environment packages" -- such as all locked down settings for a given show in production -- that can be selected with other options described here, such as --envkey below.

If the command requires a specific type of remote server, add a --service=name specification before -c. Service name abstractions are defined in blade.config, but you can also use a hostname directly if you are targeting a specific host, or a profile name if any host from that class is acceptable.

There are several built-in "portability aliases" for common commands that are often used in test or diagnostic situations. These aliases start with an equal sign to distinguish them from actual executables. The most useful is probably:

tractor-spool --service=somehost -c =printenv

Use -c for the usual RemoteCmd format, and -C to force a local Cmd.


for use with -c, adds minimum and maximum bounds on the elapsed time of the command on a blade. Give a range of seconds, as 'min,max' or 'min-max'. The launched command is marked Error if its elapsed run time is shorter or longer than the given bounds.
Commands that exceed the maximum time are killed. If only one value is given, it specifies the max run time. A max time of 0 (zero) means unbounded.
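The bounds-checking rules above (a single value is a maximum, a max of 0 means unbounded, and runs outside the bounds are marked Error) can be sketched like this; `check_elapsed` is a hypothetical helper for illustration, since the real check happens inside Tractor:

```python
def check_elapsed(elapsed, bounds):
    """Classify a command's elapsed run time (seconds) against
    'min,max' or 'min-max' bounds, per the rules described above.
    A single value is a max; a max of 0 means unbounded."""
    parts = bounds.replace("-", ",").split(",")
    if len(parts) == 1:
        lo, hi = 0.0, float(parts[0])
    else:
        lo, hi = float(parts[0]), float(parts[1])
    if elapsed < lo:
        return "error-too-short"
    if hi != 0 and elapsed > hi:
        return "error-too-long"   # the real command would also be killed
    return "ok"
```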

--haddr=HADDR  address of the remote spooling client
--user=LOGIN  the user (login) to be associated with this job; default is the name of the user executing the spooling script
--jobcwd=DIRNAME  blades will attempt to chdir to the specified directory when launching commands from this job; default is the current directory at the time tractor-spool is run
--priority=FLOAT  an arbitrary, positive, floating-point priority for the job(s) being spooled; jobs with higher-valued priorities are processed first
--projects=PROJECTS  list of project affiliations, like 'TheFilm lighting'
--tier=TIER  dispatching tier assignment, for special-case jobs
--paused  spool the job in a paused state, meaning that no tasks will be launched from it until its priority is later changed from a negative to a positive number
--aftertime='MM DD HH:MM'  delay job start until the given date, as 'MM DD HH:MM'

delay job start until the given job(s) complete, specified as jid(s)

--envkey=ENVKEY  used with -c and -r to change the environment key used to configure the environment variables applied to these auto-generated job scripts; default: None
--svckey=SVCKEY  specifies an additional job-wide service key restriction for Cmds in the spooled job; the key(s) are ANDed with any keys found on the Cmds themselves. When used with the -c or --rib option, it overrides "PixarRender" as the sole service key used to select matching blades for those Cmds.

used with -c or --rib option to specify the sole service key expression for each command; subject to substitution by RANGE and ITER

--remotecleankey=KEY  converts "local" clean-up Cmds into RemoteCmds. The user may not have control over the details of job generation in some contexts, so they can't otherwise force clean-ups to be remote. A "local" Cmd is required to run on the same host from which the job was spooled, such as an artist's workstation, meaning that the user would have to keep a tractor-blade process running on their desktop to handle these Cmd clean-ups. In contrast, a RemoteCmd clean-up will run on the first available farm machine. When this option is given, it will also convert Job -whenerror and -whendone blocks into RemoteCmds rather than local Cmds by default. This spooling option also requires a blade service key name to be given; it specifies the appropriate type of blade to run the clean-up. This conversion option is a workaround for cases where the job script generator itself cannot be updated to use RemoteCmd directly in clean-up blocks, and to convert Job -whendone/error blocks into the more general -postscript blocks.


--title="words"  used with -c and -r to change the default title of the auto-generated job
--task-title=TTITLE  sets the task title

create one task for each item in the file, which specifies a line-separated list of items


create one task for each item in the list, specified as a comma-separated list of items, replacing ITER in the command string or service key expression with the current value; e.g. red,green,blue or 'red house,green lawn,blue sky'
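The ITER substitution described above can be sketched as a simple per-item expansion; `expand_iter` is a hypothetical helper showing the behavior, since the real expansion happens inside tractor-spool:

```python
def expand_iter(cmd_template, items):
    """Produce one command per item, replacing the literal token
    ITER in the template with the current item's value."""
    return [cmd_template.replace("ITER", item)
            for item in items.split(",")]
```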


block until job is fully spooled (to tractor-engine 2.0+)


prints the submission confirmation message (or denial) as a JSON-format dict on stdout, rather than plain text
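A caller script can then parse that JSON dict rather than scraping plain text. The sketch below is purely illustrative: the field names 'jid' and 'note' are hypothetical placeholders, so consult an actual engine reply for the real keys:

```python
import json

def report_submission(reply_text):
    """Summarize a --json confirmation/denial reply.
    NOTE: 'jid' and 'note' are hypothetical key names used only
    to illustrate handling a JSON-format reply dict."""
    reply = json.loads(reply_text)
    if "jid" in reply:
        return "spooled job %s" % reply["jid"]
    return "denied: %s" % reply.get("note", "unknown reason")
```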


prints spool confirmation message as human-readable plain text; this is the default

-r, --ribs  treat the jobfile filename arguments as individual RIB files to be rendered as INDEPENDENT prman processes on different remote tractor-blades; a multi-task Tractor job is automatically created to handle the renderings
-J, --in-json

indicates that the job file(s) being submitted are formatted as Tractor-compliant JSON

-A, --in-alfred

indicates that the job file being submitted is in Alfred (tcl) format, the default


process job files assuming that they expect backward-compatible alfred-style two-level unquoting / substitution


apply second substitution pass on Cmd executable parameters only

-R, --rib  treat the jobfile filename arguments as RIB files to be rendered using a SINGLE prman process on a remote tractor-blade; that is, prman will concatenate all of the rib files for rendering; a single-task Tractor job is automatically created to handle the rendering
--nrm  causes auto-generated --ribs jobs to use netrender on the local blade rather than direct rendering with prman on a blade; used when the named RIB file is not accessible from the remote blades directly
--jretire=INTEGER  remove the job indicated by the integer job-id from the active job queue, and terminate any running commands

--range=RANGE  creates auto-generated jobs in which a template containing the pattern "%d" is expanded into a series of parallel tasks, each referring to a sequential integer in the specified range. For example:

tractor-spool --no-spool --range 1-3 -c cp /src/foo.%d.rib /dst/bar.%d.rib

displays the following job file:

Job -title {cp ...} -subtasks {
  Task -title {cp} -cmds {
     RemoteCmd {{cp} {/src/foo.1.rib} {/dst/bar.1.rib}} -service {pixarRender}
  }
  Task -title {cp} -cmds {
     RemoteCmd {{cp} {/src/foo.2.rib} {/dst/bar.2.rib}} -service {pixarRender}
  }
  Task -title {cp} -cmds {
     RemoteCmd {{cp} {/src/foo.3.rib} {/dst/bar.3.rib}} -service {pixarRender}
  }
}

A range can be: (a) a single integer (e.g. 5) (b) two integers separated by a hyphen (e.g. 1-5) (c) a comma-separated list of (a) or (b) (e.g. 1-5,10,15,20-30)
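The three accepted range forms and the "%d" expansion can be sketched together; these are hypothetical helpers mirroring the behavior shown in the example above, not tractor-spool's own code:

```python
def parse_range(spec):
    """Expand a range spec like '1-5,10,20-22' into the list of
    integers it denotes, per forms (a), (b), and (c) above."""
    frames = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-", 1)
            frames.extend(range(int(lo), int(hi) + 1))
        else:
            frames.append(int(part))
    return frames

def expand_template(cmd, spec):
    """One command per frame: substitute each integer for every '%d'."""
    return [cmd.replace("%d", str(n)) for n in parse_range(spec)]
```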


specifies the tags to be added to each command; subject to substitution by RANGE and ITER


limit the maximum number of concurrently active commands of the job

delete the requested job from the active queue
--user=JOBOWNER  alternate job owner; default is the user name of the person running tractor-spool


Tractor login password for the user submitting the job (if engine passwords are enabled)

--configfile=CONFIGFILE  file containing cached login and password data for the user running tractor-spool

parse the inbound job text and report errors; the job is not submitted to the engine for processing


print the job alfscript rather than spooling it; typically used to view or save the job text that tractor-spool itself has generated from other arguments, rather than when it is reading an existing job from a file.