General use of the TELEMAC system

This page describes the general use of the TELEMAC system in Geographical Sciences.

TELEMAC-2D, SISYPHE, ESTEL-2D and ESTEL-3D are available. More modules could be added if necessary. Just ask.

Linux

The TELEMAC system is installed centrally on "dylan", which runs the Linux operating system (CentOS). You will need to log into dylan and use Linux commands to run TELEMAC jobs, so it helps to practice a bit in a Linux environment first. The Pragmatic Programming course might be a good place for this. Ask the scientific computing officer for pointers if you need some, and get some training if required.

Environment set-up

Configuring the environment to use TELEMAC is easy: you only have to source two central files. Add the following lines to your .bashrc configuration file, then log out and back in again.

# Location of the TELEMAC system
SYSTEL90=/home/telemac
export SYSTEL90

source $SYSTEL90/intel_env
source $SYSTEL90/config/systel_env

You should then be able to "see" the Fortran compiler and the programs of the TELEMAC system, for instance:

$ which telemac2d
/home/telemac/bin/telemac2d

Note that if you log into another machine (i.e. not dylan) you might get an error message about "/home/telemac" not existing or the files not being found. This is normal: they do not exist on the other machine. You can either live with the message or adapt your .bashrc so that the files are only sourced on dylan, for instance as sketched below.
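
One way to do this (a minimal sketch, assuming the short hostname of the machine is simply "dylan") is to wrap the TELEMAC lines of your .bashrc in a hostname test:

# Set up TELEMAC only when logged into dylan
if [ "$(hostname -s)" = "dylan" ]; then
    SYSTEL90=/home/telemac
    export SYSTEL90
    source $SYSTEL90/intel_env
    source $SYSTEL90/config/systel_env
fi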

Test

TELEMAC-2D includes some test cases. Copy one into your filespace and run it:

$ cp -r /home/telemac/telemac2d/tel2d_v5p8/test.gb/hydraulic_jump .
$ cd hydraulic_jump
$ telemac2d cas.txt

If this works, you have a well-configured environment. Now go and do some real work with your own files.

A note about ASCII and binary files

Parallel jobs

The TELEMAC system is configured to run in parallel mode if requested by the user. This is very simple to do and highly encouraged if you use large meshes and run long simulations. However, a few extra initial steps are required.

TELEMAC uses MPI for its parallel operations. MPI requires a secret word stored in a hidden configuration file. Type the following instructions to create it. Note that "somethingsecret" below should contain no spaces.

$ cd
$ touch .mpd.conf
$ chmod 600 .mpd.conf
$ echo "MPD_SECRETWORD=somethingsecret" > .mpd.conf
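
To check that the file was created with the right permissions and contents, you can display it (the cat output should match the line you just wrote):

$ ls -l .mpd.conf
$ cat .mpd.conf
MPD_SECRETWORD=somethingsecret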

Run the software once in scalar mode to get an idea of the job duration, for instance:

$ cp -r /home/telemac/telemac2d/tel2d_v5p8/test.gb/cavity .
$ cd cavity/
$ telemac2d cas.txt
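
If you want a more precise measurement of the duration, you can prefix the command with the standard Linux time utility (not part of TELEMAC):

$ time telemac2d cas.txt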

The example above should run in about 55 seconds on dylan. Now edit cas.txt so that the line setting the number of processors reads:

PARALLEL PROCESSORS = 8

Note that dylan has 8 cores so the system is configured to run with 8 processors as a maximum.

Put "0" to run in scalar mode. "1" runs in parallel mode but with one processor only, so "0" and "1" should give the same results despite using different libraries.

Before you can run TELEMAC in parallel, you need to start the MPI daemon. Note that this needs to be done once per login, not for each job.

$ mpd &
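
You can check that the daemon is running with mpdtrace, which lists the machines where an mpd is active (on dylan it should simply print the machine name):

$ mpdtrace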

You can then run telemac2d again:

$ telemac2d cas.txt

It should run again, faster this time, maybe 30 seconds or so instead of 55. It is not a lot faster (certainly not 8 times faster!) because this is a small example and splitting the mesh into 8 subdomains accounts for a large part of the computation time. With bigger meshes and longer simulations, you should get a better speed-up.
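
As a purely illustrative calculation (the split of the 55 seconds below is an assumption, not a measurement): if about 20 of the 55 seconds go on work that does not parallelise, such as partitioning the mesh and merging the results, then even with perfect scaling of the remaining 35 seconds over 8 cores the run cannot take much less than 20 + 35/8 ≈ 24 seconds, which is in line with the roughly 30 seconds observed.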

Before you log out, it is a good idea to kill the MPI daemon:

$ mpdallexit

It is also possible to run TELEMAC on the University cluster, bluecrystal. This is described on another page (not finished yet, but it will be done soon).