
I have a long, skinny numpy array (shape (4096*4096, 1)) that needs to be read by multiple MPI processes (using mpi4py), each of which then does some operations on it independently. However, having every process load such a large array on its own would be heavy on memory. Is there a way to use shared memory, e.g. the array is allocated and filled once up front and not touched afterwards, and the MPI processes only read from that same location (read-only access)? It may be possible with python-multiprocessing, but what about mpi4py? Thanks in advance.
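For reference, here is a minimal sketch of what I have in mind, based on MPI-3 shared-memory windows (`MPI.Win.Allocate_shared` in mpi4py). It assumes all ranks run on the same node (on multiple nodes you would presumably first split the communicator with `comm.Split_type(MPI.COMM_TYPE_SHARED)`), the dtype is float64, and `data.npy` is just a placeholder file name:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

N = 4096 * 4096
itemsize = MPI.DOUBLE.Get_size()
# Only rank 0 actually requests memory; the others attach to its segment.
nbytes = N * itemsize if rank == 0 else 0

# Allocate one shared-memory window per node.
win = MPI.Win.Allocate_shared(nbytes, itemsize, comm=comm)

# Query the base address of rank 0's segment and wrap it in a numpy array.
buf, itemsize = win.Shared_query(0)
arr = np.ndarray(buffer=buf, dtype='d', shape=(N, 1))

# Rank 0 loads the data once; all other ranks only read it.
if rank == 0:
    arr[:] = np.load('data.npy')  # placeholder for the real data source
comm.Barrier()

# From here on, every rank can read arr without holding a private copy.
print(rank, arr[0, 0], arr[-1, 0])

win.Free()
```

Is this the right approach, or is there a more idiomatic way to do read-only sharing with mpi4py?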
