
Consider the following chunk of code:

int array[30000];

#pragma omp parallel
{
   for( int a = 0; a < 1000; a++ )
   {
       #pragma omp for nowait
       for( int i = 0; i < 30000; i++ )
       {
           /*calculations with array[i] and also other array entries happen here*/
       }
   }
}

Race conditions are not a concern in my application, but I would like to enforce that each thread in the parallel region handles exactly the same chunk of the array on every pass through the inner for loop.

It is my understanding that schedule(static) distributes the loop iterations based on the number of threads and the loop length. However, it is not clear whether the distribution changes across different loops, or across repetitions of the same loop (even when the number of threads and the loop length stay the same).

What does the standard say about this? Is schedule(static) sufficient to enforce this?

1 Answer


I believe this quote from the OpenMP Specification provides exactly such a guarantee:

A compliant implementation of the static schedule must ensure that the same assignment of logical iteration numbers to threads will be used in two worksharing-loop regions if the following conditions are satisfied: 1) both worksharing-loop regions have the same number of loop iterations, 2) both worksharing-loop regions have the same value of chunk_size specified, or both worksharing-loop regions have no chunk_size specified, 3) both worksharing-loop regions bind to the same parallel region, and 4) neither loop is associated with a SIMD construct.
