I am having trouble collecting some data from all processors onto the root. Here is an example of what I want to do:
I have a number of pairs (they are actually edges) on each processor, and ideally I want to send them to the root; if that is not possible, I can instead send their corresponding indices (one number instead of a pair).
For example:
Processor 0: sends {(0,5), (1,6)} to root, or it could send {5, 17}
Processor 1: sends {(2,3)} to root, or it could send {14}
Processor 2: sends {} to root, or it could send {}
Processor 3: sends {(4,0)} to root, or it could send {20}
I am wondering what the best way is to store the pairs (or numbers) and to send and receive them. Ideally I would prefer to store them in a 2D vector, since I don't know from the beginning how much space I need, and to receive them into a 2D vector again. I know this might not be possible, or might be very complicated.
This is pseudocode of the procedure I am looking for, but I don't know how to implement it in MPI:
vector<vector<int> > allSelectedEdges;
vector<vector<int> > selectedEdgesLocal;
int edgeCount = 0;
if (my_rank != 0) {
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < nVertex; ++j)
            if (some conditions) {
                vector<int> tempEdge;
                tempEdge.push_back(displs[my_rank] + i);
                tempEdge.push_back(j);
                selectedEdgesLocal.push_back(tempEdge);
                edgeCount++;
            }
    // send selectedEdgesLocal to root
} else {
    // root receives selectedEdgesLocal from every rank and stores it in allSelectedEdges
}
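As far as I understand, an MPI call needs a contiguous send buffer, and a vector<vector<int> > is not contiguous (every inner vector is a separate allocation). So I think the edges first have to be flattened into a single vector<int>, two ints per edge. A minimal sketch of such a helper (the name flattenEdges is just mine, not an MPI or library function):

// Flatten {(a,b), (c,d), ...} into {a, b, c, d, ...} so the data sits in
// one contiguous buffer that can be handed to an MPI send/gather call.
vector<int> flattenEdges(const vector<vector<int> >& edges) {
    vector<int> flat;
    flat.reserve(edges.size() * 2);
    for (size_t k = 0; k < edges.size(); ++k) {
        flat.push_back(edges[k][0]);   // first endpoint of the edge
        flat.push_back(edges[k][1]);   // second endpoint of the edge
    }
    return flat;
}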
I thought about MPI_Gatherv as well, but it seems that does not help. I got the idea from here:
vector<vector<int> > selectedEdgesLocal;
int edgeCount = 0;
for (int i = 0; i < rows; ++i)
    for (int j = 0; j < nVertex; ++j)
        if (some conditions) {
            vector<int> tempEdge;
            tempEdge.push_back(displs[my_rank] + i);
            tempEdge.push_back(j);
            selectedEdgesLocal.push_back(tempEdge);
            edgeCount++;
        }

int NumEdgesToAdd;
MPI_Reduce(&edgeCount, &NumEdgesToAdd, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

vector<vector<int> > allSelectedEdges(NumEdgesToAdd);

int rcounts[comm_size];
int rdisp[comm_size];
int sumE = 0;
for (int i = 0; i < comm_size; ++i) {
    rcounts[i] = edgeCount;
    rdisp[i] = sumE;
    sumE += edgeCount;
}

MPI_Gatherv(&selectedEdgesLocal.front(), rcounts[my_rank], MPI_INT,
            &allSelectedEdges.front(), rcounts, rdisp, MPI_INT,
            0, MPI_COMM_WORLD);
Apparently the right approach is to gather edgeCount on the root rank first; then the root can allocate allSelectedEdges and build rcounts and rdisp. rcounts and rdisp are only relevant on the root rank, and since each rank has its own edgeCount, the root rank needs to know all of them.
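Putting this together, here is a rough sketch of how I imagine the whole exchange could look. It reuses comm_size, my_rank and selectedEdgesLocal from the code above and the flattenEdges helper from the earlier sketch, assumes <mpi.h> and <vector> are included, and counts everything in ints (two per edge). This is only my understanding of the suggested approach, not tested code:

// Flatten the local edges: two ints per edge, contiguous in memory.
vector<int> sendBuf = flattenEdges(selectedEdgesLocal);
int sendCount = (int)sendBuf.size();          // = 2 * edgeCount

// 1. Root learns how many ints each rank will contribute.
vector<int> rcounts(comm_size, 0);
MPI_Gather(&sendCount, 1, MPI_INT,
           rcounts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);

// 2. Root builds displacements and allocates the receive buffer.
vector<int> rdisp(comm_size, 0);
vector<int> recvBuf;
if (my_rank == 0) {
    int total = 0;
    for (int r = 0; r < comm_size; ++r) {
        rdisp[r] = total;
        total += rcounts[r];
    }
    recvBuf.resize(total);
}

// 3. Collect all flattened edge lists on the root.
MPI_Gatherv(sendBuf.data(), sendCount, MPI_INT,
            recvBuf.data(), rcounts.data(), rdisp.data(), MPI_INT,
            0, MPI_COMM_WORLD);

// 4. Root rebuilds the pairs into allSelectedEdges.
vector<vector<int> > allSelectedEdges;
if (my_rank == 0) {
    for (size_t k = 0; k + 1 < recvBuf.size(); k += 2) {
        vector<int> edge;
        edge.push_back(recvBuf[k]);
        edge.push_back(recvBuf[k + 1]);
        allSelectedEdges.push_back(edge);
    }
}

If I understand correctly, the receive arguments of MPI_Gatherv are ignored on non-root ranks, so the empty recvBuf and zero-filled rdisp should be fine there, and a rank with no edges simply sends a count of 0.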