I'm running into an error when using multiprocessing.shared_memory to back a numpy array.
Here's my usage pattern:
# demo.py
from contextlib import contextmanager
from multiprocessing.managers import SharedMemoryManager
from multiprocessing.shared_memory import SharedMemory
from typing import Iterator

import numpy as np


@contextmanager
def allocate_shared_mem() -> Iterator[SharedMemory]:
    with SharedMemoryManager() as smm:
        shared_mem = smm.SharedMemory(size=80)
        yield shared_mem


with allocate_shared_mem() as shared_mem:
    shared_arr = np.frombuffer(shared_mem.buf)
    assert len(shared_arr) == 10
And here is the issue I'm seeing:
$ python demo.py
Exception ignored in: <function SharedMemory.__del__ at 0x7ff8bf604ee0>
Traceback (most recent call last):
File "/home/rig1/miniconda3/envs/pysc/lib/python3.10/multiprocessing/shared_memory.py", line 184, in __del__
File "/home/rig1/miniconda3/envs/pysc/lib/python3.10/multiprocessing/shared_memory.py", line 227, in close
BufferError: cannot close exported pointers exist
I think the problem is that the SharedMemoryManager cannot deallocate shared_mem while the numpy array still holds a view into its buffer. If I append del shared_arr to the end of the with block at the bottom of demo.py, the issue goes away.
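That is, this version of the bottom block exits cleanly:

with allocate_shared_mem() as shared_mem:
    shared_arr = np.frombuffer(shared_mem.buf)
    assert len(shared_arr) == 10
    # Dropping the only reference to the array releases its exported
    # pointer into shared_mem.buf, so the manager can close the segment.
    del shared_arr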
How can I use numpy arrays backed by shared memory without having to manually account for array deletion? Are there any clean patterns or array-invalidation tricks to handle this case?
I'm worried that some other part of my code will grab a handle to the array, and then it will be a hassle to figure out which objects have to be deleted to avoid the "cannot close exported pointers exist" error. It's fine with me if the numpy array becomes invalid after the with block exits.
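For example, even pushing the del into the context manager itself doesn't help (allocate_shared_arr is just an illustrative name), because the caller's as binding still holds an exported pointer when the manager shuts down:

@contextmanager
def allocate_shared_arr() -> Iterator[np.ndarray]:
    with SharedMemoryManager() as smm:
        shared_mem = smm.SharedMemory(size=80)
        shared_arr = np.frombuffer(shared_mem.buf)
        try:
            yield shared_arr
        finally:
            # Only releases this function's reference; the caller's
            # binding (and any other handle) keeps the buffer exported.
            del shared_arr

with allocate_shared_arr() as shared_arr:
    assert len(shared_arr) == 10
# Same BufferError here: the outer shared_arr is still alive.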