General use of the TELEMAC system

 
This page describes the general use of the TELEMAC system in Geographical Sciences.
 
The TELEMAC system is installed centrally on "dylan". You will need to log into dylan to run TELEMAC jobs. Therefore it helps to practice a bit in a Linux environment. The [[:category:Pragmatic Programming | Pragmatic Programming]] course might be a good place for this.
  
 
TELEMAC-2D, SISYPHE, ESTEL-2D and ESTEL-3D are available. More modules could be added if necessary. Just ask.
 
 
= Environment set-up =
 
Configuring the environment for TELEMAC is easy because you simply source centrally maintained files. Add the following lines to your .bashrc configuration file, then log out and back in again.

<pre>
# Location of the TELEMAC system
SYSTEL90=/home/telemac
export SYSTEL90

source $SYSTEL90/intel_env
source $SYSTEL90/config/systel_env
</pre>

You should then be able to "see" the Fortran compiler and the programs of the TELEMAC system.
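
As a quick sanity check, you can ask the shell where these commands come from. Note that ifort is only an assumption here for the Fortran compiler set up by intel_env; adjust the name if your installation provides a different one.

<pre>
$ type telemac2d   # reports whether telemac2d is an alias, a function or a script on your PATH
$ which ifort      # ifort is assumed to be the Intel Fortran compiler provided by intel_env
</pre>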
  
 
= Test =  
 
TELEMAC-2D includes some test cases. Copy one into your filespace and run it:
  
 
<pre>
$ cp -r /home/telemac/telemac2d/tel2d_v5p8/test.gb/hydraulic_jump .
$ cd hydraulic_jump
$ telemac2d cas.txt
</pre>

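If you want to see which other test cases ship with this version, simply list the test directory (plain ls, nothing TELEMAC-specific):

<pre>
$ ls /home/telemac/telemac2d/tel2d_v5p8/test.gb/   # each sub-directory is a test case you can copy
</pre>
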
If this works, you have a well-configured environment. Now go and do some real work...
  
 
= Parallel jobs =
 
TELEMAC is configured to run in parallel mode if requested by the user. This is very simple to do and highly encouraged if you use large meshes and run long simulations. A few extra initial steps are required.

TELEMAC uses MPI for parallel operations. MPI requires a secret word in a hidden configuration file. Type the following instructions to create it. Note that "somethingsecret" below should contain no spaces.
  
 
<pre>
$ cd
$ touch .mpd.conf
$ chmod 600 .mpd.conf
$ echo "MPD_SECRETWORD=somethingsecret" > .mpd.conf
</pre>
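
If you want to double-check the file, the commands below are ordinary shell tools rather than part of TELEMAC:

<pre>
$ ls -l ~/.mpd.conf    # permissions should read -rw------- after the chmod 600
$ cat ~/.mpd.conf      # should contain MPD_SECRETWORD=somethingsecret
</pre>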
  
Run the software once in scalar mode to get a reference job duration, for instance:
 
<pre>
$ cp -r /home/telemac/telemac2d/tel2d_v5p8/test.gb/cavity .
$ cd cavity/
$ telemac2d cas.txt
</pre>

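If you prefer an actual measurement to watching the clock, you can wrap the run in the standard time command; this is a generic shell trick, not a TELEMAC feature:

<pre>
$ time telemac2d cas.txt   # the "real" line gives the wall-clock duration
</pre>
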
The example above should run in about 55s on dylan. Now edit cas.txt so that the line about the number of processors looks like:

<pre>
PARALLEL PROCESSORS = 8
</pre>

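The value 8 is simply what this example uses. If you are unsure how many cores are available on the machine, a standard Linux query such as the one below can help; it is an illustration, not a TELEMAC command:

<pre>
$ grep -c processor /proc/cpuinfo   # counts the processor entries the kernel reports
</pre>
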
Before you can run TELEMAC in parallel, you need to start the MPI daemon. This needs to be done once per login.
 
 
<pre>
$ mpd &
</pre>

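To check that the daemon is up before launching a parallel run, mpdtrace (which should be available alongside mpd and mpdallexit) lists the machines currently in the ring:

<pre>
$ mpdtrace   # a single local daemon simply prints this machine's name
</pre>
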
You can now run telemac2d again:

<pre>
$ telemac2d cas.txt
</pre>

It should run again, faster, maybe 30 seconds or so. It is not a lot faster because this is a small example and splitting the mesh into 8 subdomains accounts for a large part of the computation time.

Before you log out, it is a good idea to kill the MPI daemon:
 
<pre>
$ mpdallexit
</pre>
