When I try to reduce a large heap-allocated array with an OpenMP reduction, it segfaults:
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

int main() {
    double *test = NULL;
    // Large enough to overflow an OpenMP thread's stack.
    size_t size = (size_t)1024 * 1024 * 16;
    test = malloc(size * sizeof(double));
    if (test == NULL) { // malloc itself does not fail here
        fprintf(stderr, "malloc failed\n");
        return 1;
    }
#pragma omp parallel reduction(+ : test[0 : size]) num_threads(2)
    {
        test[0] = 0;
#pragma omp critical
        {
            printf("frame address: %p\n", __builtin_frame_address(0));
            printf("test: %p\n", (void *)test);
        }
    }
    free(test);
    printf("Allocated %zu doubles\n\n", size);
}
Please note that `double *test` is allocated on the heap, so this is not a duplicate of this and this.
This example works with a small array but segfaults with a large one, even though the array is allocated on the heap and the system has enough memory.
A similar issue: the segfault still happens even when the array is allocated on the heap.
The same issue has been reported in other communities:
https://forums.oracle.com/ords/apexds/post/segmentation-fault-with-large-arrays-and-openmp-1728
but all the solutions I found are about increasing the OpenMP stack size.
(I also check `if (test == NULL)` to verify that malloc did not fail in the first place.)