OpenMP features

General OpenMP discussion

OpenMP features

Postby rashmi_k28 » Wed Mar 12, 2008 4:22 am

Hi,

I am new to OpenMP.
1) I need some information on why OpenMP is suited to C/C++ and Fortran and not other languages.

2) How can I find out whether a loop has an implicit barrier?
rashmi_k28
 
Posts: 8
Joined: Wed Mar 12, 2008 4:20 am

Re: OpenMP features

Postby rashmi_k28 » Wed Mar 12, 2008 5:00 am

What is the difference between C$OMP, *$OMP and !$OMP? In which scenarios are these used?
rashmi_k28
 

Re: OpenMP features

Postby ejd » Wed Mar 12, 2008 6:15 am

It is not that OpenMP is best suited for C/C++ and Fortran; those are simply the languages that were chosen when OpenMP was designed. They were the most common languages used for numeric processing, did not have native parallel support, and were implemented by the vendors that came together to try to create a "standard". There has been talk of adding OpenMP to other languages (like Java), and even some implementations to show that it was possible and could be useful. Since it is now common to have multiple processors even on a desktop, the different language standards groups are looking at adding native parallel support, so I doubt that OpenMP will be extended to other languages. It is more likely that OpenMP will continue to be extended to cover other types of constructs besides loops (as version 3 has done by adding tasking).

If you look at the OpenMP Version 2.5 spec, section 2.5.1 Loop Construct states (under description):
There is an implicit barrier at the end of a loop construct unless a nowait clause is specified.


As for the differences between C$OMP, *$OMP and !$OMP, it has to do with the Fortran source form you are using. In both free form and fixed form, the character "!" initiates a comment (except when it appears within a character context), so the !$OMP sentinel works in either form. In fixed form, lines beginning with a "C" or a "*" in character position 1 are also comments, so the C$OMP and *$OMP sentinels are accepted there as well.
ejd
 
Posts: 1025
Joined: Wed Jan 16, 2008 7:21 am

Re: OpenMP features

Postby rashmi_k28 » Wed Mar 12, 2008 11:08 pm

Thanks for the reply.

What is the main use of nowait?
For example:

#pragma omp parallel
#pragma omp for
for(i=0; i<n; i++){
  neat_stuff[i];
}

Here there is an implicit barrier where i is being incremented, and I have used the nowait clause at the end.

Suppose I divide the work among 4 threads and n=100; each thread gets 25 iterations.
How are the i values assigned? Is it like this:
Thread 0 gets i values 0-24
Thread 1 gets i values 25-49
Thread 2 gets i values 50-74
Thread 3 gets i values 75-99

Does no thread wait for the previous one to complete?
rashmi_k28
 

Re: OpenMP features

Postby ejd » Thu Mar 13, 2008 5:47 am

What is the main use of nowait?

If you look at the OpenMP V2.5 spec, section 2.5 Work-sharing Constructs states:
However, an implied barrier exists at the end of the work-sharing region, unless a nowait clause is specified. If a nowait clause is present, an implementation may omit code to synchronize the threads at the end of the work-sharing region. In this case, threads that finish early may proceed straight to the instructions following the work-sharing region without waiting for the other members of the team to finish the work-sharing region, and without performing a flush operation ...

So for your example,

#pragma omp parallel
#pragma omp for
for(i=0; i<n; i++){
  neat_stuff[i];
}

there may be no benefit in using nowait. The implicit barrier is not when i is being incremented, but when the threads are at the end of the worksharing region ready to exit the for loop.

The values you have shown for i that each thread gets are correct - if the default schedule kind is static. Note that the default schedule is implementation defined. You are correct that no thread waits on a previous thread to complete its set of iterations. However, as stated above, the threads will wait before proceeding to the code following the loop. This implicit barrier is needed if the user is doing a reduction or if the variables used in the loop are used after the loop is finished.

So nowait could be used for code like:

#pragma omp parallel
{
  #pragma omp for nowait
  for(i=0; i<n; i++) {
    neat_stuff[i];
  }
  #pragma omp for nowait
  for (i=0; i<n; i++) {
    some_other_neat_stuff[i];   // see note below
  }
}

In this example, as soon as a thread finishes the first loop, it can start doing work on the second loop. The thing to note, though, is that either the work in the second loop must not depend on the work of the first loop, or the second loop must have the same iteration count and schedule as the first (so that the iterations are assigned to the threads in the same way). Otherwise you will have a data dependency and a race condition will occur.

The other thing to note in my example is that I used a nowait on the second loop as well. This might be of some help, depending on the implementation. There is an implied barrier at the end of the work-sharing construct and also an implied barrier at the end of the parallel construct. Most implementations will optimize these two implied barriers into a single barrier. However, some do not, and so in some cases you can reduce some of the overhead by doing this.
ejd
 

