
I have a dynamic array of a structure in C. Say:

int n=100;
struct particle{
   double pos[3];   
   double force[3]; 
   double mass;
   int type;
};
struct particle *mypart;
mypart = (struct particle*) calloc(n,sizeof(struct particle));

In a parallel code, some operations are done on mypart[i].force[j], and at the end I need to perform an MPI_Allreduce on just this member (mypart[i].force). I have looked at MPI_Type_create_struct and the other datatype functions, but I couldn't get a working solution for passing just an array inside a structure to the other cores. Does anybody have any idea?

UPDATE: Some details about the code: this is a molecular dynamics code, in which the force on each particle comes from its interaction with the other particles. The aim is to split the force calculation across the cores, so the force on the i-th particle may be computed on different cores simultaneously. After the force loop, the partial forces for each particle should be summed to give a single force (3 components) per particle. This is done with an MPI_Allreduce using MPI_SUM. I hope this clarifies what I'm trying to do.

  • It's not clear what you are trying to reduce to here: a single 3-component velocity that's the sum of all velocities, or an array of particles on one processor so that the velocity of (say) particle 2 is the sum of all the velocities of the particle-2s on all other processors? Or something else? Commented Mar 21, 2015 at 17:55
  • The velocity of (say) particle 2 is the sum of all the velocities of the particle-2s on all other processors. This is the case! Commented Mar 21, 2015 at 19:54

1 Answer


What you want to achieve is not impossible, but it is also not trivial. First, you have to declare a datatype that describes either the whole structure or only the forces (both are sketched below). To construct the latter, start with a block of three consecutive doubles at the proper displacement:

MPI_Datatype type_force;
int blen = 3;                                       // three consecutive doubles (force[3])
MPI_Aint displ = offsetof(struct particle, force);  // offsetof requires <stddef.h>
MPI_Datatype types = MPI_DOUBLE;

MPI_Type_create_struct(1, &blen, &displ, &types, &type_force);
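
For reference, the other option mentioned above, a datatype covering the whole structure, could look roughly like the sketch below. It is not needed for the force-only reduction developed in the rest of this answer:

// Sketch of a datatype describing the complete struct particle (not used below)
MPI_Datatype type_particle;
int blens[4] = {3, 3, 1, 1};
MPI_Aint displs[4] = {
   offsetof(struct particle, pos),
   offsetof(struct particle, force),
   offsetof(struct particle, mass),
   offsetof(struct particle, type)
};
MPI_Datatype full_types[4] = {MPI_DOUBLE, MPI_DOUBLE, MPI_DOUBLE, MPI_INT};

MPI_Type_create_struct(4, blens, displs, full_types, &type_particle);
// For arrays, this type would also need resizing to sizeof(struct particle)
// to account for any trailing padding.
MPI_Type_commit(&type_particle);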

The force-only datatype must then be resized so that its extent matches that of the C structure, which allows consecutive array elements to be accessed directly:

MPI_Datatype type_force_resized;
MPI_Aint lb, extent;

MPI_Type_get_extent(type_force, &lb, &extent);
extent = sizeof(struct particle);   // stride between force[] of consecutive particles
MPI_Type_create_resized(type_force, lb, extent, &type_force_resized);
MPI_Type_commit(&type_force_resized);
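
As an optional sanity check (not in the original code), you can verify that the resized type really strides over whole particles:

// Optional: confirm the extent of the resized type equals sizeof(struct particle)
MPI_Aint check_lb, check_extent;
MPI_Type_get_extent(type_force_resized, &check_lb, &check_extent);
assert(check_extent == (MPI_Aint) sizeof(struct particle));   // needs <assert.h>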

The global (all-)reduction now almost boils down to:

// Receive buffer: only its force[] fields are filled by the reduction
struct particle *particles = calloc(n, sizeof(struct particle));
MPI_Allreduce(mypart, particles, n, type_force_resized,
              MPI_SUM, MPI_COMM_WORLD);

Since MPI_(All)reduce does not allow different MPI datatypes for the send and receive buffers, one has to use an array of struct particle instead of simply double[n][3]. The result will be placed in the force[] field of each array element.

Now, the problem is that MPI_SUM does not operate on derived datatypes. The solution is to declare your own reduction operation:

void force_sum(void *invec, void *inoutvec, int *len, MPI_Datatype *dptr)
{
   // The buffers hold struct particle elements described by the resized type
   struct particle *in = invec, *inout = inoutvec;

   for (int i = 0; i < *len; i++)
   {
      inout[i].force[0] += in[i].force[0];
      inout[i].force[1] += in[i].force[1];
      inout[i].force[2] += in[i].force[2];
   }
}

MPI_Op force_sum_op;
MPI_Op_create(force_sum, 1, &force_sum_op);   // 1 -> the operation is commutative

With all the preparations outlined above, the reduction becomes:

MPI_Allreduce(mypart, particles, n, type_force_resized,
              force_sum_op, MPI_COMM_WORLD);
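
After the call, the summed forces are in particles[i].force. Typically you would copy them back into mypart and release the MPI objects once they are no longer needed; a minimal sketch:

// Copy the reduced forces back into the working array
for (int i = 0; i < n; i++)
{
   mypart[i].force[0] = particles[i].force[0];
   mypart[i].force[1] = particles[i].force[1];
   mypart[i].force[2] = particles[i].force[2];
}

// Clean up when the datatype and operation are no longer needed
MPI_Op_free(&force_sum_op);
MPI_Type_free(&type_force_resized);
free(particles);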

A much simpler variant is to first copy all forces into a double forces[n][3] array. The whole reduction then boils down to:

double forces[n][3]; // Local forces
double total_forces[n][3]; // Total forces

... transfer mypart[i].force into forces[i] ...

MPI_Allreduce(forces, total_forces, 3*n, MPI_DOUBLE,
              MPI_SUM, MPI_COMM_WORLD);
// Done

But this method requires additional memory and extra copy operations.
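
For illustration, the copy steps around that reduction might look like this (a minimal sketch using the mypart, forces and total_forces arrays from above; for large n you may prefer heap allocation over the variable-length arrays shown there):

// Pack the local forces into the contiguous array
for (int i = 0; i < n; i++)
{
   forces[i][0] = mypart[i].force[0];
   forces[i][1] = mypart[i].force[1];
   forces[i][2] = mypart[i].force[2];
}

MPI_Allreduce(forces, total_forces, 3*n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

// Unpack the summed forces back into the particle array
for (int i = 0; i < n; i++)
{
   mypart[i].force[0] = total_forces[i][0];
   mypart[i].force[1] = total_forces[i][1];
   mypart[i].force[2] = total_forces[i][2];
}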


1 Comment

Thanks. I think the much simpler variant is really simpler for me as a newbie MPIer! :)
