Here's one, relatively simple, way to do it using the multiprocessing module:
    import functools
    import multiprocessing

    def func(arr, i):
        arr[i] = arr[i] + ' hello!'

    if __name__ == '__main__':
        manager = multiprocessing.Manager()  # Create a manager to handle shared object(s).
        xyz = manager.list(['a','b','c','d','e'])  # Create a proxy for the shared list object.
        p = multiprocessing.Pool(processes=4)  # Create a pool of worker processes.
        # Create a single-argument function with the first positional argument (arr) supplied.
        # (This is necessary because Pool.map() only works with functions of one argument.)
        mono_arg_func = functools.partial(func, xyz)
        p.map(mono_arg_func, range(len(xyz)))  # Run func in parallel until finished.
        for i in xyz:
            print(i)
Output:
a hello!
b hello!
c hello!
d hello!
e hello!
Note that this is not going to be very fast if the list is huge, because sharing access to large objects requires a lot of overhead between separate tasks (which run in different memory spaces).
A better approach would be to use a multiprocessing.Queue, which is implemented "using a pipe and a few locks/semaphores" according to the documentation (as opposed to a shared list object, whose entire contents would have to be pickled and unpickled multiple times).
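For illustration, here's a minimal sketch of what that Queue-based approach might look like (the names worker and parallel_append, and the None-sentinel shutdown protocol, are my own choices, not from the documentation): each worker pulls individual (index, value) pairs from a task queue and pushes results onto a result queue, so only single elements cross process boundaries rather than the whole list.

```python
import multiprocessing

def worker(task_q, result_q):
    # Pull (index, value) pairs until a None sentinel arrives.
    while True:
        item = task_q.get()
        if item is None:
            break
        i, value = item
        result_q.put((i, value + ' hello!'))

def parallel_append(items, n_workers=4):
    task_q = multiprocessing.Queue()
    result_q = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=worker, args=(task_q, result_q))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for pair in enumerate(items):
        task_q.put(pair)
    for _ in workers:
        task_q.put(None)  # One sentinel per worker to signal shutdown.
    # Collect exactly len(items) results; results may arrive out of
    # order, so reassemble them by index.
    out = [None] * len(items)
    for _ in items:
        i, value = result_q.get()
        out[i] = value
    for w in workers:
        w.join()
    return out

if __name__ == '__main__':
    print(parallel_append(['a', 'b', 'c', 'd', 'e']))
```

Because the result queue delivers items in completion order, the index carried alongside each value is what lets the main process rebuild the list in its original order.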