OpenMP
Revision as of 15:00, 6 November 2008
'Parallel: Using more than one processor at a time'
Introduction
I'd rather have a computer than not. They're handy for email and buying stuff from Amazon. Definitely. Indeed, for people of a certain mindset--people like you and me--they let us do all sorts of interesting things, like simulating the natural world, and in the process ask questions like, "Will Greenland melt?" and "What would happen if it did?".
Sometimes it's handy to have more than one computer. Let's say that we have a new whizz-bang weather model that takes 26 hours to work out what the weather will do tomorrow. "All very well", you say, "but about as much use as a chocolate teapot." In order for the model to be of any use, we need it to run faster. We need to divide up the work it does and run it across two or more computers. We need to enter the world of parallel programming.
"Yippee!" we cry, but a word of caution. Getting models to work in parallel is a lot--I say it again, a lot--harder than getting them to work on a single processor. Before setting out down this road, it is well worth checking that you really do need your model to run faster, and that you've explored all other avenues in that regard.
You still with us? OK, let's get stuck in.
OpenMP
There are a number of different ways to create parallel programs, and we're going to start with one approach, called OpenMP. There are a number of reasons for this:
- It's pretty widely available
- It's good for the multi-core processors that we find in a lot of computers today
- It's fairly easy to use
- and it's based upon the not-so-mind-bending concept of threads
At this point, we could launch ourselves into a long and detailed discussion of threads, the OpenMP runtime environment, pre-processor macro statements and the like. But we won't. Because it's less fun. Let's just try an example instead.
OK, to get the examples, login to a Linux box and cut & paste the below onto the command line:
svn co http://source.ggy.bris.ac.uk/subversion-open/parallel ./parallel
Hello, world
Right, now do the following:
cd examples/example1
make
./omp_hello_f90.exe
Tada! Just like the old classic 'hello, world', but this time run in parallel on as many processors as you have available on your machine. Good eh? Now, how did that all work?
Take a look inside the file omp_hello_f90.f90. First up we have used a Fortran90 module containing routines specific to OpenMP:
use omp_lib
This gives us access to routines like:
omp_get_num_threads()
The rest of the program is straightforward Fortran code, except for some comment lines starting with !$omp, such as:
!$omp parallel private(nthreads, tid)
...
!$omp end parallel
These lines specify that the master thread should fork a parallel region. From the code, you will see that all the threads on the team will get their thread ID--through calls to the OpenMP library--and will print it. The master thread will also ask how many threads have been spawned by the OpenMP runtime environment, and will print the total.
Notice that the variables nthreads and tid have been marked as private. This means that separate copies of these variables will be kept for each thread. This is essential, or else the print statement, 'Hello, world from thread = ' would get all mixed up, right? Try deliberately mucking things up. Go on, see what happens if you delete tid from the private list.
Look inside the Makefile and notice that the use of OpenMP has been flagged to the compiler: -fopenmp in this case, as we are using gfortran. It would be just -openmp if you were using ifort. You would get a compile-time error if you tried to compile the code without this flag.
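The relevant part of such a Makefile looks something like the sketch below (the actual file in the repository may be laid out differently; the target and source names here just follow the filenames above):

```make
# Enable OpenMP support in the compiler.
# Use FC = ifort and FFLAGS = -openmp for the Intel compiler instead.
FC     = gfortran
FFLAGS = -fopenmp

omp_hello_f90.exe: omp_hello_f90.f90
	$(FC) $(FFLAGS) -o $@ $<
```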
There is also a C version of the Fortran90 example in omp_hello_c.c.
Work Sharing inside Loops
cd ../example2