
Andrew Price, University of Southampton, ([mailto:a.r.price@soton.ac.uk a.r.price@soton.ac.uk])

GENIE Toolbox Tutorial
The GENIE Toolbox has been designed to accommodate a wide range of computational resources on the Grid. Interfaces are provided for resources managed by the Globus Toolkit (v2.4), Condor (native, SSH, CondorWS) and Microsoft Compute Cluster. GENIE models can be submitted to systems running various operating systems, including Linux/UNIX, Windows and Mac OS X. In such a heterogeneous environment it cannot be assumed that a compiler can be configured and used on a remote compute node. It is therefore necessary to build the GENIE model offline for any target system you intend to use, before using the Toolbox to exploit the computational Grid.

The GENIE Toolbox supports release rel-2-1-0 of the GENIE code as tagged in the CVS repository. This tutorial will demonstrate how to execute and manage this release of the GENIE framework on appropriate resources. We strongly recommend that you provide your own build(s) for production studies, as described in the next section.

Preparing a GENIE Model Archive
The GENIE Toolbox provides a management and coordination layer for model binaries and their output data. The system does not, at present, interface to the SVN code repository and does not provide a compilation environment. It is therefore assumed that the user will provide a file archive (tar.gz or zip format) containing the model binary to be studied and any static input data files that the binary requires. To prepare such an archive:


 * Export the version of the code that you wish to study from the GENIE SVN repository
 * We strongly recommend tagging the version of the code to be studied and checking out against this tag
 * Compile and build the model binary for each target platform you intend to use on the Grid
 * At present, this is most easily achieved by invoking the genie_example.job script with the required changes to makefile.arc and using the appropriate config file for the study
 * Instructions for building GENIE on UNIX/Linux and Win32 platforms are available in the genie-main module directory
 * [win32] Copy the netcdf.dll file to the directory containing genie.exe
 * Archive the directory hierarchy containing the genie.exe binary and the static data input files
 * The GENIE Toolbox assumes that Linux/UNIX archives are in tar.gz format and that Win32 archives are zip files.

Some reference builds for release rel-2-1-0 are available:


 * genie_ig_fi_fi_archive.zip
 * genie_ig_fi_fi_archive.tar.gz
 * genie_eb_go_gs_archive.zip
 * genie_eb_go_gs_archive.tar.gz
 * genie_ig_go_sl_archive.zip
 * genie_ig_go_sl_archive.tar.gz

Functionality is provided for packaging a model executable from the Matlab environment. Some reference implementations of archival functions are available in the GENIEToolbox/configs directory. For each reference config file we provide a function that will package the binary for that instance with the appropriate data input files that the model requires. If you wish to package the model using such a function you need to perform the following steps:


 * Check out or export the tagged release of the code to be studied
 * Edit <tt>user.mak</tt> (formerly <tt>makefile.arc</tt>) and <tt>user.sh</tt> (formerly <tt>genie_example.job</tt>)
 * The code directory and the output directory must share a common root so that they can be cleanly packaged
 * Set <tt>GENIE_ROOT</tt> and <tt>CODEDIR</tt> to point to the root of the source checkout, i.e. the directory containing <tt>genie-main</tt> and the other modules
 * Set <tt>OUT_DIR</tt> and <tt>OUTROOT</tt> to set the location of the output directory as <tt>$(GENIE_ROOT)/genie_output</tt> and <tt>$(CODEDIR)/genie_output</tt> respectively. For convenience, please call the output directory '<tt>genie_output</tt>' and place it next to the other modules. If you require a different structure you will need to modify the specification of <tt>genie_output</tt> in <tt>write_execute_script</tt> and modify the configuration metadata to use the appropriate relative paths
 * Build and execute the model locally using <tt>genie_example.job</tt> and the appropriate <tt>.config</tt> file
 * Archive the built model using the <tt>genie_{model}_archive</tt> function
 * <tt>>> archiveFile = genie_{model}_archive('~/genie')</tt>
 * See 'How do I create a model archiving function for the GENIE Toolbox?' for instructions on creating an archival function for a new model; a minimal sketch is given below
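
The reference archival functions essentially gather the model output directory, the binary and the static input data into a single archive rooted at the source checkout. The following is a minimal sketch of such a function, assuming a Linux build and the directory layout used in the example below; the function name, the list of directories and the use of Matlab's built-in tar function are illustrative only and should be adapted to the files your particular build requires.

 function archiveFile = genie_mymodel_archive(genieRoot)
 % GENIE_MYMODEL_ARCHIVE  Sketch of a model archiving function (hypothetical name).
 %   Packages the genie_output directory and the static input data required
 %   by one particular build into a single tar.gz archive.

 % Directories to include, relative to the checkout root (adapt to your build)
 contents = {'genie_output', ...
             fullfile('genie-main', 'data', 'input'), ...
             fullfile('genie-main', 'inputdata')};

 % Create the gzipped archive alongside the checkout root
 archiveFile = fullfile(genieRoot, 'genie_mymodel_archive.tar.gz');
 tar(archiveFile, contents, genieRoot);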

Example
For a standard Linux build the following steps would create a suitable archive:
<ol>
<li>Check out / export the GENIE code from CVS, using a release tag if applicable:
 > cvs export -r rel-2-1-0 core</li>
<li>Make any changes to <tt>makefile.arc</tt> and/or <tt>genie_example.job</tt> as appropriate for your local environment.</li>
<li>Comment out the execution of the <tt>genie.exe</tt> binary (the line <tt>time ./genie.exe || ABORT EXECUTE</tt>) from the <tt>genie_example.job</tt> script, leaving:
 echo 'STARTING EXPERIMENT:'
 date
 echo 'ENDING EXPERIMENT:'</li>
<li>Run the <tt>genie_example.job</tt> script, using a config file if appropriate for your build:
 > cd genie-main
 > ./genie_example.job -f configs/genie_ig_go_sl.config</li>
<li>Create an archive containing the output directory structure, the GENIE binary and the input data files. E.g.
 > cd ..
 > tar zcvf genie_ig_go_sl_runtime.tar.gz \
     genie_output \
     genie-main/data/input \
     genie-main/inputdata \
     genie-igcm3/data/input \
     genie-goldstein/data/input \
     genie-slabseaice/data/input \
     genie-fixedchem/data/input \
     genie-fixedicesheet/data/input</li>
</ol>
The input data directories contain many files that are only applicable to particular builds of the model. We recommend adding only the files that your build requires, to keep the archive size to a minimum.

Job Preparation
To execute a GENIE model using the Toolbox, three descriptive data structures must be created in the Matlab workspace. These variables provide a comprehensive description of the specific GENIE model configuration to execute, a local runtime environment in which model instances can be prepared for execution, and a computational resource on which the simulation will be performed.

Configuration
Create a description of a specific instance of the GENIE model. At the Matlab command prompt enter:


 * <tt>>> configuration = genie_ig_go_sl_gaalbedofluxcorr1_config</tt>

<ol> configuration =

              genie_main: [1x1 struct]
             genie_igcm3: [1x1 struct]
         genie_goldstein: [1x1 struct]
         genie_fixedchem: [1x1 struct]
     genie_fixedicesheet: [1x1 struct]
        genie_slabseaice: [1x1 struct]
</ol>

This executes the <tt>genie_ig_go_sl_gaalbedofluxcorr1_config</tt> function, which is a direct port of the <tt>genie_ig_go_sl_gaalbedofluxcorr1.config</tt> file from the GENIE CVS repository. The function has loaded a complete set of parameters for the GENIE-2 model comprising the IGCM atmosphere, the GOLDSTEIN ocean, slab sea-ice, fixed chemistry and fixed ice sheet modules. The <tt>genie_main</tt> field contains the parameters controlling the execution of the whole model. For the purposes of the demonstration we will reduce the total number of timesteps in the configuration so that the simulation lasts for a single month.


 * <tt>>> configuration.genie_main.Parameter.GENIE_CONTROL_NML.koverall_total = 720;</tt>
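
Any other parameter can be inspected or overridden in the same way before the job is submitted. For example, assuming the field names shown above, the whole control namelist can be displayed, or the modified value checked:

 * <tt>>> configuration.genie_main.Parameter.GENIE_CONTROL_NML</tt>
 * <tt>>> configuration.genie_main.Parameter.GENIE_CONTROL_NML.koverall_total</tt>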

Runtime
A local runtime data structure is created to provide details about the locations of the model binary and a directory in which new model invocations can be managed. For the purposes of this demonstration we will initially execute the model on the local machine. The local runtime needs to provide the appropriate binary for the OS on which Matlab is running. The runtime for the demonstration binary is specified as follows:


 * Windows (Win32)
 * <tt>>> runtime.RuntimeArchive=fullfile('../demo/runtime','genie_ig_go_sl_gaalbedofluxcorr1_archive.zip');</tt>
 * <tt>>> runtime.RuntimeArchiveTool=fullfile('../demo/runtime','unzip.exe');</tt>
 * <tt>>> runtime.LocalRunDir='..\demo\runtime'</tt>
 * <tt>>> runtime.EXPID='genie_ig_go_sl_gaalbedofluxcorr1';</tt>

<ol> runtime =

        RuntimeArchive: '..\demo\runtime\genie_ig_go_sl_gaalbedofluxcorr1_archive.zip'
    RuntimeArchiveTool: '..\demo\runtime\unzip.exe'
           LocalRunDir: '..\demo\runtime'
                 EXPID: 'genie_ig_go_sl_gaalbedofluxcorr1'
</ol>


 * Linux / UNIX / Mac OSX
 * <tt>>> runtime.RuntimeArchive=fullfile('../demo/runtime','genie_ig_go_sl_gaalbedofluxcorr1_archive.tar.gz');</tt>
 * <tt>>> runtime.LocalRunDir='../demo/runtime'</tt>
 * <tt>>> runtime.EXPID='genie_ig_go_sl_gaalbedofluxcorr1';</tt>

<ol> runtime =

    RuntimeArchive: '../demo/runtime/genie_ig_go_sl_gaalbedofluxcorr1_archive.tar.gz'
       LocalRunDir: '../demo/runtime'
             EXPID: 'genie_ig_go_sl_gaalbedofluxcorr1'
</ol>

Resource
The final data structure describes the computational resource on which the model will run. For this demonstration the model will be executed on the local machine. A utility script is provided for configuring the resource data structure:


 * Windows (Win32)
 * <tt>>> resource = createResource</tt>
 * [[image:resourcetype.png|Type of resource]]
 * Select 'local'
 * [[image:resourceos.png|Operating System of the resource]]
 * Select the operating system of the machine you are running Matlab on
 * <tt>Please provide a short meaningful name for the resource:</tt>
 * Type: local machine
 * <tt>Please specify the maximum number of jobs that may be submitted to this resource [10]: >></tt>
 * Type: 1
 * <tt>Upload this resource to the database? Y/N [N]:</tt>
 * Select N

<ol> resource =

              type: 'local'
              name: 'local machine'
           MaxJobs: 1
            broker: 'fork'
    RemoteTargetOS: 'win32'
     RemoteFileSep: '\'
</ol>


 * Linux / UNIX / Mac OSX
 * <tt>>> resource = createResource</tt>
 * [[image:resourcetype.png|Type of resource]]
 * Select 'local'
 * [[image:resourceos.png|Operating System of the resource]]
 * Select the operating system of the machine you are running Matlab on
 * <tt>Please provide a short meaningful name for the resource:</tt>
 * Type: local machine
 * <tt>Please specify the maximum number of jobs that may be submitted to this resource [10]: >></tt>
 * Type: 1
 * <tt>Upload this resource to the database? Y/N [N]:</tt>
 * Select N

<ol> resource =

              type: 'local'
              name: 'local machine'
           MaxJobs: 1
            broker: 'fork'
    RemoteTargetOS: 'linux'
     RemoteFileSep: '/'
</ol>
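
The interactive tool is convenient, but the resource description is an ordinary Matlab structure and can also be built directly in a script, for example when automating studies. A minimal sketch for the local case, assuming the field names shown in the output above (note that this bypasses any validation performed by <tt>createResource</tt>):

 * <tt>>> resource.type = 'local';</tt>
 * <tt>>> resource.name = 'local machine';</tt>
 * <tt>>> resource.MaxJobs = 1;</tt>
 * <tt>>> resource.broker = 'fork';</tt>
 * <tt>>> resource.RemoteTargetOS = 'linux';</tt>
 * <tt>>> resource.RemoteFileSep = '/';</tt>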

Restarts
To restart a model from previous output a further data structure is required. The <tt>restart</tt> structure simply specifies the locations of any additional files required to initialise the model. The files may be specified as locations in the local file system or with unique identifiers from the GENIE database.

Example
The files required to restart an instance of the <tt>genie_ig_go_sl_gaalbedofluxcorr1</tt> model after one month of simulation are:
 * igcmlandsurf_restart_2000_01_30.nc
 * igcmoceansurf_restart_2000_01_30.nc
 * igcm_rs_2000_01.nc
 * igcm_rg_2000_01.nc
 * goldstein_restart_2000_01_30.nc
 * slabseaice_restart_2000_01_30.nc

If the files reside in the local filesystem they are specified in the workspace as follows:
 * <tt>restart{1}.localRestartFile='./igcmlandsurf_restart_2000_01_30.nc';</tt>
 * <tt>restart{2}.localRestartFile='./igcmoceansurf_restart_2000_01_30.nc';</tt>
 * <tt>restart{3}.localRestartFile='./igcm_rs_2000_01.nc';</tt>
 * <tt>restart{4}.localRestartFile='./igcm_rg_2000_01.nc';</tt>
 * <tt>restart{5}.localRestartFile='./goldstein_restart_2000_01_30.nc';</tt>
 * <tt>restart{6}.localRestartFile='./slabseaice_restart_2000_01_30.nc';</tt>
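
When several restart files reside in the same local directory, the cell array can equally be built in a loop rather than typed by hand. A minimal sketch, using the filenames listed above:

 files = {'igcmlandsurf_restart_2000_01_30.nc', 'igcmoceansurf_restart_2000_01_30.nc', ...
          'igcm_rs_2000_01.nc', 'igcm_rg_2000_01.nc', ...
          'goldstein_restart_2000_01_30.nc', 'slabseaice_restart_2000_01_30.nc'};
 for i = 1:numel(files)
     restart{i}.localRestartFile = fullfile('.', files{i});   % path relative to the current directory
 end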

If the files reside in the database they are specified in the workspace as follows:
 * <tt>restart{1}.standard.ID='igcmlandsurf_restart_2000_01_30_nc_...';</tt>
 * <tt>restart{2}.standard.ID='igcmoceansurf_restart_2000_01_30_nc_...';</tt>
 * <tt>restart{3}.standard.ID='igcm_rs_2000_01_nc_...';</tt>
 * <tt>restart{4}.standard.ID='igcm_rg_2000_01_nc_...';</tt>
 * <tt>restart{5}.standard.ID='goldstein_restart_2000_01_30_nc_...';</tt>
 * <tt>restart{6}.standard.ID='slabseaice_restart_2000_01_30_nc_...';</tt>

If the local names of the files have been obtained as part of the query on the database then this information can also be supplied. Providing it helps the system, since a further query does not have to be performed to recover the local names:
 * <tt>restart{1}.standard.ID='igcmlandsurf_restart_2000_01_30_nc_...';</tt>
 * <tt>restart{1}.standard.localName='igcmlandsurf_restart_2000_01_30.nc';</tt>
 * <tt>restart{2}.standard.ID='igcmoceansurf_restart_2000_01_30_nc_...';</tt>
 * <tt>restart{2}.standard.localName='igcmoceansurf_restart_2000_01_30.nc';</tt>
 * <tt>restart{3}.standard.ID='igcm_rs_2000_01_nc_...';</tt>
 * <tt>restart{3}.standard.localName='igcm_rs_2000_01.nc';</tt>
 * <tt>restart{4}.standard.ID='igcm_rg_2000_01_nc_...';</tt>
 * <tt>restart{4}.standard.localName='igcm_rg_2000_01.nc';</tt>
 * <tt>restart{5}.standard.ID='goldstein_restart_2000_01_30_nc_...';</tt>
 * <tt>restart{5}.standard.localName='goldstein_restart_2000_01_30.nc';</tt>
 * <tt>restart{6}.standard.ID='slabseaice_restart_2000_01_30_nc_...';</tt>
 * <tt>restart{6}.standard.localName='slabseaice_restart_2000_01_30.nc';</tt>

Job Submission
Job submission is achieved through a single call to the gc_jobsubmit function. This function takes as input the three data structures described above that specify the model instance (<tt>configuration</tt>), the local runtime environment (<tt>runtime</tt>) and the compute resource (<tt>resource</tt>). The function returns a unique job handle (<tt>handle</tt>) that can be used to monitor the progress of the simulation and a retrieval data structure (<tt>retrieve</tt>) that retains the information necessary to recover the output data once the simulation is complete.

Job Submission: Local Machine
The three data structures are now defined and the GENIE model can be executed. This is achieved through a single call to the gc_jobsubmit function:


 * <tt>>> [handle, retrieve]=gc_jobsubmit(configuration, runtime, resource)</tt>

<ol> Welcome to GENIE, initialisation starting
 *******************************************************
 =======================================================
  Initialisation of GENIE main module complete
 =======================================================
 fixedicesheet: Opening orog file ../../genie-fixedicesheet/data/input/orog_grid_std_t21.nc

 ...

 =======================================================
  Initialising GOLDSTEIN module shutdown
 =======================================================
 GOLD : weighted r.m.s. model-data error    1.44769543374438
 GOLD : volm transport weighted temperatures j=26 and opsia
  -1.053364926767422E-003   1.074417049012253E-003   2.107798060746225E-004
 max poleward heat flux   7.257640610796259E-004
 overturning extrema in Sv ominp,omaxp,omina,omaxa,avn
  -0.22954E+02   0.25604E+02   -0.79754E+01   0.20454E+01   0.15206E+00
 =======================================================
  GOLDSTEIN module shutdown complete
 =======================================================
 *******************************************************
  Shutdown complete, au revoir
 *******************************************************

handle =

192.168.0.1@@C:\demo\runtime\20060814T170527_950129

retrieve =

            runtime: {'run_condor_win32.bat'  'genie_ig_go_sl_runtime.zip'  'unzip.exe'}
             handle: '192.168.0.1@@C:\demo\runtime\20060814T170527_950129'
    LocalRunDirUniq: 'C:\demo\runtime\20060814T170527_950129\'
           resource: [1x1 struct]
      configuration: {'fort.8'  'fort.7'  'goin_GOLD'  'fort.14'  'fort.13'  'fort.12'}
</ol>

The model should execute on the local machine and display the stdout in the Matlab command window.

Job Submission: Remote Globus System
The execution of the same model instance on a remote resource can be achieved by specifying a new resource data structure. The easiest way to create a new resource structure is to run the createResource function which enables a user to specify any supported resource that is available to them. Since users of the GENIE toolbox should have access to the UK National Grid Service we now demonstrate how to submit the above compute job to the Oxford compute node of the NGS.

To exploit a computational resource that provides a Globus Toolkit v2.4 interface (specifically GRAM and GridFTP) a user must instantiate an X.509 proxy certificate (see Grid Certificate Management). This is achieved by invoking the gd_createproxy function.


 * <tt>>> gd_createproxy</tt>

The client will open a dialog window and request the password for your certificate.


 * [[image:gd_createproxy.png|gd_createproxy]]

Enter your password and press Create.


 * [[image:gd_createproxy-created.png|Enter password for your e-Science certificate]]

Upon successful creation of the proxy certificate click <tt>OK</tt> to dismiss the confirmation, click <tt>Cancel</tt> to close the dialog window, and then press a key in the paused Matlab session.

As the core nodes of the National Grid Service run RedHat Linux we need to make sure that the local runtime points at the archive of the Linux binary. Create or edit the runtime metadata structure:

<ol> runtime =

    RuntimeArchive: '../demo/runtime/genie_ig_go_sl_gaalbedofluxcorr1_archive.tar.gz'
       LocalRunDir: '../demo/runtime'
             EXPID: 'genie_ig_go_sl_gaalbedofluxcorr1'
</ol>

In order to use the Oxford compute node of the UK National Grid Service you simply need to load the appropriate description metadata. The <tt>createResource</tt> function maintains a set of common resource descriptions for platforms that GENIE users are likely to use on a regular basis. To load the NGS Oxford resource description


 * <tt>>> resource = createResource('NGSOxford')</tt>

<ol> resource =

                 type: 'globus'
                 name: 'NGS Oxford'
              MaxJobs: 16
               broker: 'PBS'
       RemoteTargetOS: 'linux'
           RemoteHost: 'grid-compute.oesc.ox.ac.uk'
         RemoteRunDir: '/home/ngs0000/'
        RemoteFileSep: '/'
     RemoteJobManager: 'jobmanager-pbs'
    RemoteMaxWallTime: 2880
              jarutil: '/usr/bin/jar'
          JobsPerNode: 1
</ol>

Again, the job is submitted through the same call to the gc_jobsubmit function


 * <tt>>> [handle, retrieve] = gc_jobsubmit(configuration, runtime, resource)</tt>

The system will then prepare the model instance for execution in a local directory. Once prepared, the job files will be transferred to a unique directory on the remote compute platform. The job will be submitted for execution to the job manager on the remote system. A job handle and retrieval data structure will be returned - this can take a little time to complete (~30 seconds is not unreasonable - anything over three minutes would indicate firewall issues).

<ol> handle =

https://grid-compute.oesc.ox.ac.uk:64002/29267/1181558673/

retrieve =

    RemoteRunDirUniq: '/home/ngs0000/20070611T114343_950129/'
             runtime: {'run_linux.sh'  'genie_ig_go_sl_gaalbedofluxcorr1_archive.tar.gz'}
              handle: 'https://grid-compute.oesc.ox.ac.uk:64002/29267/1181558673/'
     LocalRunDirUniq: 'C:\demo\runtime\20070611T114343_950129\\'
            resource: [1x1 struct]
       configuration: {'fort.8'  'fort.7'  'goin_GOLD'  'fort.14'  'fort.13'  'fort.12'}
</ol>

You might like to attempt a second submission to a different resource. A list of pre-defined resource descriptions is available from the <tt>createResource</tt> function


 * <tt>>> resource = createResource('list')</tt>

<ol> Defined resources:
    'local'    'condorlinux'    'condorwin32'    'Pacifica2'    'Cluster1'
    'NGSOxford'    'NGSLeeds'    'NGSRAL'    'NGSManchester'    'NGSSoton'

resource =

     []
</ol>

Pick a second resource to which you have access - another node of the NGS would be a good selection


 * <tt>>> resource = createResource('NGSManchester')</tt>

<ol> resource =

                 type: 'globus'
                 name: 'NGS Manchester'
              MaxJobs: 8
               broker: 'PBS'
       RemoteTargetOS: 'linux'
           RemoteHost: 'grid-data.man.ac.uk'
         RemoteRunDir: '/home/ngs0000/'
        RemoteFileSep: '/'
     RemoteJobManager: 'jobmanager-pbs'
    RemoteMaxWallTime: 2880
              jarutil: '/usr/bin/jar'
          JobsPerNode: 1
</ol>

A second job can be submitted to run concurrently with the first. In order to keep a record of both jobs it is advisable to collect the details of the second job in new variables


 * <tt>>> [handle1, retrieve1] = gc_jobsubmit(configuration, runtime, resource)</tt>

<ol> handle1 =

https://grid-data.man.ac.uk:64011/20181/1181560582/

retrieve1 =

    RemoteRunDirUniq: '/home/ngs0000/20070611T121335_231138/'
             runtime: {'run_linux.sh'  'genie_ig_go_sl_gaalbedofluxcorr1_archive.tar.gz'}
              handle: 'https://grid-data.man.ac.uk:64011/20181/1181560582/'
     LocalRunDirUniq: 'C:\demo\runtime\20070611T121335_231138\\'
            resource: [1x1 struct]
       configuration: {'fort.8'  'fort.7'  'goin_GOLD'  'fort.14'  'fort.13'  'fort.12'}
</ol>

Job Polling
The progress of remote compute jobs can be monitored using the <tt>gc_jobstatus</tt> function. For a Globus job handle the Geodise function <tt>gd_jobstatus</tt> can also be used.


 * <tt>>> status = gc_jobstatus(handle)</tt>

<ol> status =

3 </ol>


 * <tt>>> status = gd_jobstatus(handle)</tt>

<ol> status =

3 </ol>

For jobs submitted to Condor resources the <tt>gc_jobstatus</tt> function should also be used, but it requires a further parameter in order to poll the status of the job: a <tt>testFile</tt> must be specified, the existence of which indicates successful completion of the compute job.


 * <tt>>> status = gc_jobstatus(condorHandle, 'successful_output.txt')</tt>

<ol> status =

3 </ol>

The status codes returned from the function have the following meanings:

<ol>
    -1 is UNKNOWN
     1 is PENDING
     2 is ACTIVE
     3 is DONE
     4 is FAILED
     5 is SUSPENDED
     6 is UNSUBMITTED
</ol>
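
These codes make it straightforward to wait for a job from within a script. A minimal polling sketch, using the <tt>handle</tt> returned by <tt>gc_jobsubmit</tt> above (the one-minute pause interval is arbitrary):

 % Poll the job until it leaves the PENDING/ACTIVE states
 status = gc_jobstatus(handle);
 while status == 1 || status == 2
     pause(60);                         % wait a minute between polls
     status = gc_jobstatus(handle);
 end
 if status == 3
     disp('Job completed successfully');
 else
     disp(['Job finished with status code ' num2str(status)]);
 end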

Job Retrieval
Each compute job is prepared on the local system in a unique directory specified in the retrieval data structure in the field <tt>retrieve.LocalRunDirUniq</tt>. The contents of this directory for the jobs above are:


 * <tt>>> dir(retrieve.LocalRunDirUniq)</tt>

<ol>
.
..
fort.12
fort.13
fort.14
fort.7
fort.8
genie_ig_go_sl_gaalbedofluxcorr1_archive.tar.gz
goin_GOLD
run_linux.sh
</ol>

Once the jobs have completed the <tt>gc_jobstatus</tt> function will return a status code of <tt>3</tt>


 * <tt>>> status = gc_jobstatus(handle)</tt>

<ol> status =

3 </ol>

The data files generated by the simulation on the remote resource are retrieved using the <tt>gc_jobretrieve</tt> function


 * <tt>>> [success, resultsFiles] = gc_jobretrieve(retrieve)</tt>

<ol> Creating an archive of the output data
Retrieving results file archive
Unzipping archive...
Extracting: fixedchem_restart_1999_12_30.nc
Extracting: fixedicesheet_data_1999_12_30.nc
Extracting: fixedicesheet_restart_1999_12_30.nc
Extracting: igcm_hs_2000_01.nc
Extracting: igcmlandsurf_restart_1999_12_30.nc
Extracting: igcmoceansurf_restart_1999_12_30.nc
Extracting: slabseaice_restart_1999_12_30.nc
Extracting: spn.cost
Extracting: spn.flux
Extracting: spn.fofy
Extracting: spn.hose
Extracting: spn.opsi
Extracting: spn.opsia
Extracting: spn.opsip
Extracting: spn.opsit
Extracting: spn.psi
Extracting: spn.rho
Extracting: spn.s
Extracting: spn.t
Extracting: spn.zpsi
Extracting: stderr.txt
Extracting: stdout.txt
Extracting: tmp.err

success =

     1

resultsFiles =

  Columns 1 through 5

    [1x31 char]    [1x32 char]    [1x35 char]    'igcm_hs_2000_01.nc'    [1x34 char]

  Columns 6 through 11

    [1x35 char]    [1x32 char]    'spn.cost'    'spn.flux'    'spn.fofy'    'spn.hose'

  Columns 12 through 17

    'spn.opsi'    'spn.opsia'    'spn.opsip'    'spn.opsit'    'spn.psi'    'spn.rho'

  Columns 18 through 23

    'spn.s'    'spn.t'    'spn.zpsi'    'stderr.txt'    'stdout.txt'    'tmp.err'
</ol>
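
The returned <tt>resultsFiles</tt> cell array holds the names of the recovered files, which can then be processed programmatically. A minimal sketch using only the variables returned above (the check of <tt>success</tt> and the use of Matlab's <tt>dir</tt> to report file sizes are illustrative):

 if success
     for i = 1:numel(resultsFiles)
         f = fullfile(retrieve.LocalRunDirUniq, resultsFiles{i});
         info = dir(f);                                  % file metadata for the recovered file
         fprintf('%-50s %10d bytes\n', resultsFiles{i}, info.bytes);
     end
 end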

The output data files are recovered to the unique local runtime directory specified in <tt>retrieve.LocalRunDirUniq</tt>


 * <tt>>> dir(retrieve.LocalRunDirUniq)</tt>

<ol>
.
..
fixedchem_restart_1999_12_30.nc
fixedicesheet_data_1999_12_30.nc
fixedicesheet_restart_1999_12_30.nc
fort.12
fort.13
fort.14
fort.7
fort.8
genie_ig_go_sl_gaalbedofluxcorr1_archive.tar.gz
goin_GOLD
igcm_hs_2000_01.nc
igcmlandsurf_restart_1999_12_30.nc
igcmoceansurf_restart_1999_12_30.nc
run_linux.sh
slabseaice_restart_1999_12_30.nc
spn.cost
spn.flux
spn.fofy
spn.hose
spn.opsi
spn.opsia
spn.opsip
spn.opsit
spn.psi
spn.rho
spn.s
spn.t
spn.zpsi
stderr.txt
stdout.txt
tmp.err
</ol>