
I have a Fortran code with a parallel part. It compiles successfully with gfortran, but running it produces a segmentation fault. The serially compiled executable shows no fault. I have also tested the parallel program with very small input matrices (rho1 & rho2) and checked the parameters step by step; there were no faults. If I understand correctly, when I declare the variables as PRIVATE, there is no need to use !$OMP ATOMIC. The matrices rho1 and rho2 each have about 15,000,000 elements. Here is the parallel part of the code:

!$OMP PARALLEL DO ORDERED DEFAULT(PRIVATE)
  do ix = 1 , nx
    do iy = 1 , ny
      do iz = 1 , nz
         k = iz + (iy-1) * nz + (ix-1) * ny * nz
         if (rho1(k) .GT. 0.d0) then
           x1 = x0 + ((ix-1) * dx)
           y1 = y0 + ((iy-1) * dy)
           z1 = z0 + ((iz-1) * dz)
           rr = (x1-xa)**2 + (y1-ya)**2 + (z1-za)**2
           r1a = dsqrt (rr)
           rr = (x1-xb)**2 + (y1-yb)**2 + (z1-zb)**2
           r1b = dsqrt (rr)
           if (r1a == 0.d0) Vnuc = (rho1(k) * Znb)/r1b
           if (r1b == 0.d0) Vnuc = (rho1(K) * Zna)/r1a
           if (r1a .GT. 0.d0 .AND. r1b .GT. 0.d0) then
             Vnuc = (rho1(k) * Zna)/r1a + (rho1(K) * Znb)/r1b
           endif
           Ve = 0
           !$OMP ORDERED
           j = 1
           do jx = 1 , nx
            do jy = 1 , ny
             do jz = 1 , nz
               if (rho2(j) .GT. 0.d0) then
                x2 = x0 + ((jx-1) * dx)
                y2 = y0 + ((jy-1) * dy)
                z2 = z0 + ((jz-1) * dz)
                rr= (x1-x2)**2 + (y1-y2)**2 + (z1-z2)**2
                r12 = dsqrt (rr)
                if (r12 .GT. 0.d0) then
                 Ve = Ve + (rho1(k)*rho2(j))/r12
                endif
               endif
               j = j + 1
             enddo
            enddo
           enddo
           !$OMP END ORDERED
           V1 = (Ve * dx * dy * dz * 0.529177d0) - Vnuc
           rr = (x1-xmid)**2 + (y1-ymid)**2 + (z1-zmid)**2
           r = dsqrt (rr)
           zef1(k) = V1 * r            
          endif
       enddo
    enddo
  enddo
  !$OMP END PARALLEL DO
  • As a start, did you try to compile with e.g. gfortran -fbounds-check -O0 -g -ggdb and then run? Commented Nov 3, 2013 at 14:03
  • A second remark: shouldn't ny and nz be FIRSTPRIVATE? Commented Nov 3, 2013 at 15:23
  • More of the variables should probably be shared. Commented Nov 3, 2013 at 15:40

3 Answers


It looks as if your arrays rho1 and rho2 are declared private. One consequence of this is that each thread will, on entry to the parallel region, have a private instance of those arrays. If the arrays are large your program may simply be trying to allocate more memory than is available.

It's relatively unusual to write OpenMP programs on large arrays which are not shared; multiple threads operating on different parts of large shared arrays is probably the canonical application for OpenMP.
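
As a sketch, here is how the directive could be scoped instead, with the large arrays shared and only the per-iteration scalars private (variable names taken from the question; whether every scalar is truly loop-local must be checked against the surrounding code). Note that with this scoping the ORDERED construct also becomes unnecessary: every iteration only reads rho1 and rho2 and writes its own element zef1(k), so there is nothing to order or protect.

!$OMP PARALLEL DO DEFAULT(NONE) &
!$OMP   SHARED(nx, ny, nz, dx, dy, dz, x0, y0, z0, xa, ya, za, &
!$OMP          xb, yb, zb, xmid, ymid, zmid, Zna, Znb, rho1, rho2, zef1) &
!$OMP   PRIVATE(ix, iy, iz, jx, jy, jz, j, k, x1, y1, z1, x2, y2, z2, &
!$OMP           rr, r1a, r1b, r12, r, Ve, Vnuc, V1)
  do ix = 1 , nx
    ! ... loop body exactly as in the question, with the
    ! !$OMP ORDERED / !$OMP END ORDERED pair removed ...
  enddo
!$OMP END PARALLEL DO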


1 Comment

I tried making rho1 and rho2 shared, but the fault is the same. I have also tried making all of the variables shared, and the error message was the same, although when I traced the program with a simple input everything ran correctly.

I found the solution in another post on this site, "Why Segmentation fault is happening in this openmp code". The problem was due to the limit on the stack size. It was solved by the following command: ulimit -s unlimited
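
Worth noting: ulimit -s unlimited only raises the stack limit of the initial thread. The stacks of the additional OpenMP threads are controlled separately by the OMP_STACKSIZE environment variable, so both may need to be raised when large data ends up on thread stacks. A typical sequence (the 512m value is only an example; ./your_program stands for your executable):

ulimit -s unlimited
export OMP_STACKSIZE=512m
./your_program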



Another possible issue is that when variables are declared PRIVATE, each thread gets an uninitialized private replica of the object. If a private variable must start with the value it had before the parallel region, use the FIRSTPRIVATE clause (COPYIN plays the same role, but only for THREADPRIVATE variables).
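
A minimal sketch of the difference, with hypothetical variables a, n, i and scale (not from the question):

! With plain PRIVATE, each thread's copy of scale would be undefined
! inside the loop; FIRSTPRIVATE initializes every copy from the value
! assigned before the parallel region.
scale = 2.0d0
!$OMP PARALLEL DO DEFAULT(NONE) SHARED(a, n) FIRSTPRIVATE(scale) PRIVATE(i)
do i = 1 , n
   a(i) = scale * a(i)
enddo
!$OMP END PARALLEL DO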

