Job preparation

To run a job on the cluster, prepare the data files on your local machine and then copy them to the cluster. In addition to the data files, you will need a job file with the settings outlined below.
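For example, the files can be copied with scp. The host name below (cluster.example.org) is only a placeholder and must be replaced with the actual address of your cluster:

# Copy the job file and a data file from your local machine to your home directory on the cluster.
# Replace cluster.example.org with your cluster's host name and adjust the filenames.
scp myjobfile.sh myjobfile.xml user01@cluster.example.org:~/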

Name your job and data files in a consistent way. The examples below use the following placeholders:

  • Job file: myjobfile.sh
  • User: user01
  • Email: max@mustermann.de

Job files are ordinary shell scripts of the kind used on Linux systems:

  • Lines starting with “##” or “# ” are comments that document the script.
  • Lines starting with “#$” are directives for the job scheduler “qsub”.
  • Lines that do not start with “#” are commands that will be executed.
  • Make sure the script does not contain empty lines; delete them or comment them out with “#”. A minimal skeleton illustrating these line types is shown below.
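The following minimal skeleton only illustrates the three kinds of lines; the user name, email address, filenames, and the final command are placeholders that you must replace:

#!/bin/csh
## Comment line: documents the script and is ignored
#$ -A user01
#$ -M max@mustermann.de
#$ -m bes
#$ -cwd
#$ -o myjobfile.out
#$ -e myjobfile.err
## The next line is an ordinary command and will be executed on the cluster
echo "hello from the cluster" > ./hello.txt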

Examples

You may use one of the following examples as a template for your job. Lines that need to be modified are highlighted. Make sure to change the user name, email address, filenames, and, if necessary, the program options. The “env >./xenv” line writes the current environment to a file named xenv, which can be helpful for debugging. The last line of each script calls the program you want to run, including all parameters. The number of CPUs is stored in $NSLOTS and is set later, when the job is started.
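As a sketch of how $NSLOTS gets its value: with qsub, the number of slots is requested through a parallel environment via the -pe option when the job is submitted. The parallel environment name (“mpi”) and the slot count below are assumptions and depend on how your cluster is configured:

## Submit the job and request 8 slots; inside the script $NSLOTS will then be 8.
## The parallel environment name “mpi” is an example and may differ on your cluster.
qsub -pe mpi 8 myjobfile.sh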

Beast

#!/bin/csh
#
## User name (the account to be charged CPU time)
#$ -A user01
#
## Send email to users
#$ -M max@mustermann.de
#
## Send email at beginning/end/on suspension
#$ -m bes
#
## Export these environmental variables
#$ -v PVM_ROOT,LD_LIBRARY_PATH=/share/apps/beaglenew
#
## The job is located in the current working directory.
#$ -cwd
## Filenames for output and error log files
#$ -o myjobfile.out
#$ -e myjobfile.err
#
env >./xenv
/share/apps/beast/bin/beast -beagle -beagle_CPU -beagle_instances $NSLOTS -overwrite ./myjobfile.xml > ./myjobfile.out

Beast2

#!/bin/csh
#
## User name (the account to be charged CPU time)
#$ -A user01
#
## Send email to users
#$ -M max@mustermann.de
#
## Send email at beginning/end/on suspension
#$ -m bes
#
## Export these environmental variables
#$ -v PVM_ROOT,LD_LIBRARY_PATH=/share/apps/beaglenew
#
## The job is located in the current working directory.
#$ -cwd
## Filenames for output and error log files
#$ -o myjobfile.out
#$ -e myjobfile.err
#
env >./xenv
/share/apps/beast210/bin/beast -beagle -beagle_CPU -beagle_instances $NSLOTS -overwrite ./myjobfile.xml > ./myjobfile.out

MrBayes

#!/bin/csh
#
## User name (the account to be charged CPU time)
#$ -A user01
#
## Send email to users
#$ -M max@mustermann.de
#
## Send email at beginning/end/on suspension
#$ -m bes
#
## Export these environmental variables
#$ -v PVM_ROOT,LD_LIBRARY_PATH=/share/apps/beaglenew
#
## The job is located in the current working directory
#$ -cwd
## Filenames for output and error log files
#$ -o myjobfile.out
#$ -e myjobfile.err
#
env >./xenv
mpirun -np $NSLOTS /share/apps/mrbayes/mb ./example.nex </dev/null

RAxML

#!/bin/csh
#
## User name (the account to be charged CPU time)
#$ -A user01
#
## Send email to users
#$ -M max@mustermann.de
#
## Send email at beginning/end/on suspension
#$ -m bes
#
## Export these environmental variables
#$ -v PVM_ROOT
#
## The job is located in the current working directory.
#$ -cwd
## Filenames for output and error log files
#$ -o myjobfile.out
#$ -e myjobfile.err
#
env >./xenv
mpirun -np $NSLOTS /share/apps/raxml/raxmlHPC-MPI-SSE3.icc -s sequencefile.phy -n outputfile.phy -m PROTGAMMAWAG

ExaBayes

#!/bin/csh
#
## User name (the account to be charged CPU time)
#$ -A user01
#
## Send email to users
#$ -M max@mustermann.de
#
## Send email at beginning/end/on suspension
#$ -m bes
#
## Export these environmental variables
#$ -v PVM_ROOT
#
## The job is located in the current working directory.
#$ -cwd
## Filenames for output and error log files
#$ -o myjobfile.out
#$ -e myjobfile.err
#
env >./xenv
mpirun -np $NSLOTS /share/apps/exabayes/exabayes -f aln.phy -q aln.part -n myRun -s 57913 -c config.nex -R 2 -C 2

IQ-TREE

#!/bin/csh
#
## User name (the account to be charged CPU time)
#$ -A user01
#
## Send email to users
#$ -M max@mustermann.de
#
## Send email at beginning/end/on suspension
#$ -m bes
#
## Export these environmental variables
#$ -v PVM_ROOT
#
## The job is located in the current working directory.
#$ -cwd
## Filenames for output and error log files
#$ -o myjobfile.out
#$ -e myjobfile.err
#
env >./xenv
/share/apps/iqtree/iqtree-omp -omp $NSLOTS -s example.phy -m TEST