How to run ESTEL in parallel

This article describes how to run parallel jobs in ESTEL on "simple" networks of workstations.

Note that the methodology differs slightly for dedicated high-performance facilities such as Blue Crystal or other Beowulf clusters. A separate article therefore covers clusters.

We call a "network of workstations" a set of workstations which can talk to each other via an intranet or the Internet.

= Prerequisites =
 * You need to have a working MPI configuration on the network of workstations. See the article about installing MPI.
 * The parallel library in the TELEMAC tree needs to have been compiled. See the article about installing the TELEMAC system.

= The MPI configuration file =

The TELEMAC scripts look for a configuration file describing the MPI setup. This file can either be (a) a data file in the directory containing the steering file for the simulation, or (b) a global configuration file. If the global configuration is chosen, the file needs to be installed in the location given by the corresponding entry of the TELEMAC configuration file. Note that if you have a global configuration file, you can override it with a local one placed in the folder of the steering file for the simulation.

The file contains a simple list of hosts with their number of processors. The total number of processors is written at the top of the file. An example is provided in the TELEMAC tree:
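The exact file name and syntax may differ between TELEMAC versions, so the following is only a sketch of the layout described above: a total processor count on the first line, then one `hostname number_of_processors` pair per line (the host names here are placeholders):

```
3
pc-hydro1 1
pc-hydro2 1
pc-hydro3 1
```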

When running ESTEL in parallel mode, the number of processors requested in the steering file must be smaller than or equal to the number of processors declared in the configuration file.
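This comparison is easy to script as a sanity check. The snippet below is only a sketch: the host file name (`hosts.conf`) and the convention that the first line holds the total processor count are assumptions about the layout, to be adapted to your installation:

```shell
#!/bin/sh
# Sketch of a sanity check before launching a parallel run.
# ASSUMPTIONS: the host file is named hosts.conf and its first
# line holds the total processor count; adapt both to your setup.
HOSTFILE=hosts.conf
cat > "$HOSTFILE" <<EOF
3
host1 1
host1 1
host1 1
EOF

REQUESTED=3                          # processors asked for in the steering file
AVAILABLE=$(head -n 1 "$HOSTFILE")   # total declared at the top of the file

if [ "$REQUESTED" -le "$AVAILABLE" ]; then
    echo "OK: $REQUESTED of $AVAILABLE processors requested"
else
    echo "ERROR: $REQUESTED processors requested, only $AVAILABLE available"
fi
```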

= Run a parallel job on one machine =

Before running distributed parallel jobs, it is easier to get things working on one machine first.

== Using one process ==
Before running ESTEL in parallel, you need to start the MPI daemon process. Details are given in the MPI article. Just start the daemon with:
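The commands below assume MPICH2's `mpd` process manager, which matches the "ring" vocabulary used throughout this article; other MPI implementations use different launchers:

```shell
mpd &        # start a single daemon on this machine, in the background
mpdtrace     # check that the ring now contains this host
```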

Before trying to run real parallel jobs, it is worth checking that the "parallel" library and ESTEL are playing nicely together. This can be achieved by running an existing test case to which you add the following keyword in the steering file:
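As a hedged example, `PARALLEL PROCESSORS` is the usual TELEMAC keyword for this; check the dictionary of your ESTEL version before relying on it:

```
PARALLEL PROCESSORS = 1
```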

Using this keyword forces ESTEL to use the parallel library instead of its serial counterpart. As only one processor is requested, no MPI calls are made.

If this does not work, stop here and try to understand what is going wrong. You can email the error messages (in full) to JP Renaud, who will help if necessary.

== Using multiple processes ==
If it works fine with one process, you can try with several. First make sure that the host file contains enough entries for the number of processes you will request. As there is just one host available to MPI, simply repeat its entry several times. For instance, when asking for three processes, the file should contain:
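Assuming the "total count first" layout sketched earlier, with a single machine (`localhost` here stands in for your host's actual name) repeated three times:

```
3
localhost 1
localhost 1
localhost 1
```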

Now edit the steering file of your test case to ask for 3 processors:
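Again assuming the `PARALLEL PROCESSORS` keyword (check the dictionary of your version), the steering file would contain:

```
PARALLEL PROCESSORS = 3
```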

If ESTEL ran properly, you should have several new files in your directory (in addition to the usual result files). The meaning of these files is explained further down.

Remember to end the ring after the computation has finished:
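With MPICH2's mpd tools (an assumption, as noted above), the ring is shut down with:

```shell
mpdallexit   # terminate all daemons in the ring
```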

= Run a parallel job on several machines =

If MPI has been set up properly, running ESTEL on several machines is not very complicated.

Start a daemon ring requesting the right number of hosts:
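Assuming MPICH2's mpd tools again, a ring spanning several machines is usually booted from a list of hosts; `mpd.hosts` is a placeholder file name here:

```shell
mpdboot -n 3 -f mpd.hosts   # start daemons on the 3 hosts listed in mpd.hosts
mpdtrace                    # verify that all 3 hosts have joined the ring
```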

Then adjust the host file to match the ring and run ESTEL, requesting no more processors than the file declares. Note that the host file can contain many more processors than there are hosts in the ring. For instance, if you use dual-processor machines, you could have a ring with three machines but six processors in the host file:
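With the layout sketched earlier and hypothetical host names, three dual-processor machines could be declared as:

```
6
pc-hydro1 2
pc-hydro2 2
pc-hydro3 2
```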

Remember to close the ring after the simulation.

= Note about parallel output =

When you run ESTEL-2D or ESTEL-3D in parallel, you will obtain some new files in the directory where the simulation was run:
 * A decomposition log, containing the output of the domain decomposition step at the very beginning of the simulation.
 * A recomposition log, containing the output of the domain recomposition step at the end of the simulation (not for ESTEL-3D, see note below).
 * A file listing the hosts that have been used by MPI.
 * A series of listing files, one per compute node. Each of these files is the listing output of ESTEL on that node; N is the total number of processors requested and M is the number of the host the log comes from. Note that numbering starts at zero and that the log of the master node (M=0) is not kept, as it is the one you see on screen. Therefore, for 4 processors there would be 3 files named:
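The exact naming convention depends on the version installed; assuming a hypothetical `PE<N>_<M>.log` pattern (the real pattern on your system may differ), the three files would be:

```
PE00004_00001.log
PE00004_00002.log
PE00004_00003.log
```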

There are extra files for ESTEL-3D, see below.

= Note about ESTEL-3D =

For ESTEL-3D, you will also obtain:
 * a series of per-processor result files;
 * a series of companion files which are empty at the moment (known bug).

This is because ESTEL-3D does not recompose the solution on a single mesh, due to a limitation in the binary Tecplot library. You will need to load all these files at once in Tecplot (option "Load Multiple Files") to see the full solution (or the full mesh). Note that this creates interpolation artefacts for P0 variables. This will be "fixed" in the next version of ESTEL, which will use a native format instead of the Tecplot format and will therefore be able to do domain recomposition. In Tecplot, be careful to change the default file filter, as the numbering of the files hides the typical file extension.

Also, as there is no domain recomposition, there is no recomposition log for ESTEL-3D.

= Note about ESTEL-2D =

To finish (Fabien?):
 * No "validation" possible; the run will crash with no warning. Probably a bug.
 * Problem with particle tracking.
 * A second keyword is required.
 * The dictionary needs to be changed.