
I'm trying to create a program in plain C that divides an integer array equally between any number of processes. For debugging purposes I'm using an integer array with 12 numbers and only 2 processes, so that the master process gets [1,2,3,4,5,6] and slave 1 gets [7,8,9,10,11,12]. However, I'm getting an error: MPI_ERR_BUFFER: invalid buffer pointer.

After some research I found out that there is a function that does exactly that (MPI_Scatter). Unfortunately, since I'm learning MPI, the implementation is restricted to MPI_Send and MPI_Recv only. Anyway, both MPI_Send and MPI_Recv take a void*, and I'm passing an int*, so it should work. Can anyone point out what I am doing wrong? Thank you.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int* create_sub_vec(int begin, int end, int* origin);
void print(int my_rank, int comm_sz, int n_over_p, int* sub_vec);

int main(void){

    int comm_sz;
    int my_rank;

    int vec[12] = {1,2,3,4,5,6,7,8,9,10,11,12};
    int* sub_vec = NULL;
    int n_over_p;

    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);    

    n_over_p = 12/comm_sz;
    printf("Process %d calcula n_over_p = %d\n", my_rank, n_over_p);

    if (my_rank != 0) {
        MPI_Recv(sub_vec, n_over_p, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        print(my_rank, comm_sz, n_over_p, sub_vec);

    } else {

        printf("Distribuindo dados\n");
        for (int i = 1; i < comm_sz; i++) {
            sub_vec = create_sub_vec(i*n_over_p, (i*n_over_p)+n_over_p, vec);
            MPI_Send(sub_vec, n_over_p, MPI_INT, i, 0, MPI_COMM_WORLD);
        }
        printf("Fim da distribuicao de dados\n");

        sub_vec = create_sub_vec(0, n_over_p, vec);

        print(my_rank, comm_sz, n_over_p, sub_vec);
    }

    MPI_Finalize();
    return 0;

}

int* create_sub_vec(int begin, int end, int* origin){
    int* sub_vec;
    int size;
    int aux = 0;
    size = end - begin;
    sub_vec = (int*)malloc(size * sizeof(int));
    for (int i = begin; i < end; ++i) {
        *(sub_vec+aux) = *(origin+i);
        aux += 1;
    }
    return  sub_vec;
}

void print(int my_rank, int comm_sz, int n_over_p, int* sub_vec){
    printf("Process %d out of %d received sub_vecotr: [ ", my_rank, comm_sz);
    for (int i = 0; i < n_over_p; ++i)
    {
        printf("%d, ", *(sub_vec+i));
    }
    printf("]\n");
}
  • Are you trying to implement parallel sorting?
  • No, it is a simple exercise to sum the elements of a vector in parallel.

1 Answer


The issue is that sub_vec is not allocated on the non-zero ranks. It is up to you to do that (MPI does not allocate the receive buffer for you).

The receive part should look like:

if (my_rank != 0) {
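    /* allocate the receive buffer before posting the receive */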
    sub_vec = (int *)malloc(n_over_p * sizeof(int));    
    MPI_Recv(sub_vec, n_over_p, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

As you wrote, the natural way to do this is via MPI_Scatter() (and once again, it is up to you to allocate the receive buffer before starting the scatter). A sketch is below.
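
For reference, here is a minimal sketch (not part of the original answer) of what the MPI_Scatter() version could look like, assuming, as in the question, that the array length divides evenly by comm_sz:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(void){
    int comm_sz;
    int my_rank;
    int vec[12] = {1,2,3,4,5,6,7,8,9,10,11,12};

    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    int n_over_p = 12 / comm_sz;

    /* every rank, including the root, allocates its own receive buffer */
    int* sub_vec = (int*)malloc(n_over_p * sizeof(int));

    /* root splits vec into comm_sz chunks of n_over_p ints; rank i receives chunk i */
    MPI_Scatter(vec, n_over_p, MPI_INT, sub_vec, n_over_p, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d out of %d received: [ ", my_rank, comm_sz);
    for (int i = 0; i < n_over_p; ++i) {
        printf("%d, ", sub_vec[i]);
    }
    printf("]\n");

    free(sub_vec);
    MPI_Finalize();
    return 0;
}

Note that the send buffer vec is only significant on the root rank; every other rank only needs its receive buffer allocated.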

