
How can I safely thread the nested for-loops below so that the programme runs in parallel on a core with 8 threads while still outputting data in the correct order? I have tried using the #pragma omp for directive, but that gives me an error message: work-sharing region may not be closely nested inside of work-sharing, critical or explicit task region.

Note: This code is for an introduction to parallel programming, so it is deliberately written naively so that it can be optimised.

#pragma omp parallel private(t, i, j) shared(nx, ny, nt)
{
    // main loop

    for (int t = 0; t < nt; t++)
    {
        cout << "\n" << t;
        cout.flush();

        // first block 

        for (int i = 0; i < nx; i++)
        {
            for (int j = 0; j < ny; j++)
            {
                if (i > 0 && i < nx - 1 && j > 0 && j < ny - 1)
                {
                    vr[i][j] = (vi[i+1][j]+vi[i-1][j]+vi[i][j-1]+vi[i][j+1]) / 4.;
                } 
                else if (i == 0 && i < nx - 1 && j > 0 && j < ny - 1) 
                {
                    vr[i][j] = (vi[i+1][j]+10.+vi[i][j-1]+vi[i][j+1]) / 4.;
                } 
                else if (i > 0 && i == nx - 1 && j > 0 && j < ny - 1) 
                {
                    vr[i][j] = (5.+vi[i-1][j]+vi[i][j-1]+vi[i][j+1]) / 4.;
                } 
                else if (i > 0 && i < nx - 1 && j == 0 && j < ny - 1) 
                {
                    vr[i][j] = (vi[i+1][j]+vi[i-1][j]+15.45+vi[i][j+1]) / 4.;
                } 
                else if (i > 0 && i < nx - 1 && j > 0 && j == ny - 1) 
                {
                    vr[i][j] = (vi[i+1][j]+vi[i-1][j]+vi[i][j-1]-6.7) / 4.;
                }
            }
        }

        // second block

        for (int i = 0; i < nx; i++) 
        {
            for (int j = 0; j < ny; j++)
            {
                if (fabs(fabs(vr[i][j]) - fabs(vi[i][j])) < 1e-2) 
                {
                    fout << "\n" << t << " " << i << " " << j << " "
                         << fabs(vi[i][j]) << " " << fabs(vr[i][j]);
                }
            }

            #pragma omp for schedule(static,100)

            // third block

            for (int i = 0; i < nx; i++) 
            {
                for (int j = 0; j < ny; j++) 
                {
                    vi[i][j] = vi[i][j] / 2. + vr[i][j] / 2.;
                }
            }
        }
    }
}
    Can you tidy up your formatting a bit? This is unreadable. You should also clarify what you mean by "just gets things messed up" as that's not a very technical problem description. Commented Feb 21, 2014 at 19:21
  • Does the added error message give you a better understanding of my problem? As for the issue of clarity: the for loop incrementing 't' is the outermost loop, with the others nested within it. Hope that helps. Commented Feb 21, 2014 at 19:55
  • It would help if you tidied up your formatting a bit. Commented Feb 21, 2014 at 19:59
  • I tidied up your formatting. Commented Feb 24, 2014 at 5:05

1 Answer


You cannot nest OMP regions in this way. From the OMP documentation (Intel):

Two OpenMP constructs are improperly (dynamically) nested. The OpenMP specification imposes several restrictions on how OpenMP constructs can be dynamically nested, that is, which OpenMP constructs can be legally encountered during execution of another region. OpenMP parallel regions can be nested within one another, but some restrictions apply. Generally speaking, two parallel regions can only be nested if there is an intermediate single threaded region, as created by a SINGLE, CRITICAL, or MASTER directive.

To be precise, the following restrictions apply. In the following, the term "worksharing region" is shorthand for any one of the following constructs: loop (FOR/DO), SECTIONS, SINGLE, or WORKSHARE. The term "closely nested region" means a region that is dynamically nested inside another region with no parallel region nested between them.

  • A worksharing region may not be closely nested inside a worksharing, explicit TASK, CRITICAL, ORDERED, or MASTER region.
  • A BARRIER region may not be closely nested inside a worksharing, explicit TASK, CRITICAL, ORDERED, or MASTER region.
  • A MASTER region may not be closely nested inside a worksharing or explicit TASK region.
  • An ORDERED region may not be closely nested inside a CRITICAL or explicit TASK region.
  • An ORDERED region must be closely nested inside a loop region (or parallel loop region) with an ORDERED clause.
  • A CRITICAL region may not be nested (closely or otherwise) inside a CRITICAL region with the same name (although violations of this restriction are reported as a different error type than this one).

Similar questions have been asked and answered before on SO.

OpenMP, for loop inside section

OpenMP for loop with master region: "master region may not be closely nested inside of work-sharing or explicit task region"
