Final note
Leaving my tests for posterity, but tel has the answer.
Note
The test results below are from Debian. Testing on Ubuntu (WSL) is indeed much worse: there, n=193 crashes for any shape (also if I replace the 3rd n with 1), as does any n above it. In short (see bla.py below):
- py bla.py n 1 allocates 3204 on A, 29323 on B for all 0<n<193.
- For n>=193 a segmentation fault occurs on B, and 3208 are allocated on A. Apparently there is some hard memory limit somewhere in Ubuntu (see the probe sketch right after this list).
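If you do not want to probe by hand, the threshold can be found by running bla.py in a child process and checking the return code, since on Linux a process killed by SIGSEGV reports a negative returncode. A minimal sketch of my own probe, assuming the bla.py listed below sits in the working directory:

import subprocess
import sys

def crashes(n, shape):
    # Run bla.py in a child process; a negative returncode means the
    # child died on a signal (SIGSEGV shows up as -11 on Linux).
    proc = subprocess.run([sys.executable, "bla.py", str(n), str(shape)],
                          capture_output=True)
    return proc.returncode < 0

# Scan shape 1 for the first crashing n (193 on my Ubuntu/WSL run)
for n in range(1, 300):
    if crashes(n, 1):
        print("first crashing n:", n)
        break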
The old tests on Debian
After some testing, it looks to me like a memory issue, with a weird scaling of memory allocations with dimension.
The edit with only 2 dimensions does not crash for me, but 3 dimensions do, so I will answer assuming that.
For me:
b = sharedctypes.RawArray(a._type_, a)
will not crash if:
a = np.ctypeslib.as_ctypes(np.zeros(224**3))  # though generating b takes a while
a = np.ctypeslib.as_ctypes(np.zeros((100,100,100)))
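For contrast, the matching 3-D shape does crash for me, in line with the 224 3 row in the table further down:

a = np.ctypeslib.as_ctypes(np.zeros((224,224,224)))  # passing this to RawArray segfaults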
So it seems a lower memory demand removes the problem, but oddly, the same number of cells in a one-dimensional array works fine, so something deeper in the memory handling seems to be going on.
Of course, you are working with pointers under the hood. Let's try some things (bla.py):
import tracemalloc
import numpy as np
from sys import argv
from multiprocessing import sharedctypes
# n and a shape selector (1, 2 or 3) come from the command line
n, shape = (int(x) for x in argv[1:])
if shape == 1: shape = n          # n cells, 1-D
if shape == 2: shape = (n**2, n)  # n**3 cells, 2-D
if shape == 3: shape = (n, n, n)  # n**3 cells, 3-D
tracemalloc.start()
a = np.ctypeslib.as_ctypes(np.zeros(shape))
x = tracemalloc.take_snapshot().statistics('lineno')
print(len(x), sum(stat.size for stat in x))  # allocations after building a
b = sharedctypes.RawArray(a._type_, a)
x = tracemalloc.take_snapshot().statistics('lineno')
print(len(x), sum(stat.size for stat in x))  # allocations after building b
Resulting in:
n shape => (a: mallocs, total size) (b: mallocs, total size)
>py bla.py 100 1 => 5 3478 76 30147
>py bla.py 100 2 => 5 5916 76 948313
>py bla.py 100 3 => 5 8200 76 43033
>py bla.py 150 1 => 5 3478 76 30195
>py bla.py 150 2 => 5 5916 76 2790461
>py bla.py 150 3 => 5 8200 76 45583
>py bla.py 159 1 => 5 3478 76 30195
>py bla.py 159 2 => 5 5916 76 2937854
>py bla.py 159 3 => 5 8200 76 46042
>py bla.py 160 1 => 5 3478 76 30195
>py bla.py 160 2 => 5 5916 72 2953989
>py bla.py 160 3 => 5 8200 Segmentation fault
>py bla.py 161 1 => 5 3478 76 30195
>py bla.py 161 2 => 5 5916 75 2971746
>py bla.py 161 3 => 5 8200 75 46116
>py bla.py 221 1 => 5 3478 76 30195
>py bla.py 221 2 => 5 5916 76 5759398
>py bla.py 221 3 => 5 8200 76 55348
>py bla.py 222 1 => 5 3478 76 30195
>py bla.py 222 2 => 5 5916 76 5782877
>py bla.py 222 3 => 5 8200 76 55399
>py bla.py 223 1 => 5 3478 76 30195
>py bla.py 223 2 => 5 5916 76 5806462
>py bla.py 223 3 => 5 8200 76 55450
>py bla.py 224 1 => 5 3478 76 30195
>py bla.py 224 2 => 5 5916 72 5829381
>py bla.py 224 3 => 5 8200 Segmentation fault
>py bla.py 225 1 => 5 3478 76 30195
>py bla.py 225 2 => 5 5916 76 5853950
>py bla.py 225 3 => 5 8200 76 55552
Weird stuff: (n**2,n) has a giant amount of memory allocated for it in the shared type, while n**3 and (n,n,n) do not. But that is beside the point.
- a mallocs are consistent, depend only slightly on dimension, and not at all on n (for the numbers tested).
- b mallocs, besides being high for shape 2, also increase slightly with n, but vary wildly with shape.
- The segmentation faults occur in cycles! Memory allocation for shape (n,n,n) on my machine approaches some n-dependent number before the segfault, but for n+1 we are OK again. It seems to be ~46k around n=160 and ~56k around n=224.
I have no good explanation, but the dependence on n makes me think the allocations need to fit nicely into some bit structure, and sometimes this breaks.
I am guessing that using 225 for your dimensions will work, as a workaround.
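A different workaround, not from the question but a pattern I would try here: skip the initializing ctypes copy that RawArray(a._type_, a) performs, by allocating the shared buffer directly and viewing it with numpy. A minimal sketch, assuming float64 data and the (224,224,224) shape:

import numpy as np
from multiprocessing import sharedctypes

shape = (224, 224, 224)
# Allocate the shared memory directly ('d' is the typecode for C double/float64)
raw = sharedctypes.RawArray('d', int(np.prod(shape)))
# Wrap it in a numpy view; writes through view land in the shared buffer
view = np.frombuffer(raw, dtype=np.float64).reshape(shape)
view[:] = 0.0

Since nothing is copied through ctypes, the crash path exercised by the tests above should never be entered.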
Does np.zeros((224,224), dtype=np.float32) make any difference? It seems like a float32 vs float64 (the default size for a standard CPython float) mismatch is happening somewhere, so that may be enough to "fix" it. Indeed, the default float64 works at (1, 36861) and segfaults at (1, 36862), whilst unsurprisingly float32 works at (1, 73722) and fails at (1, 73723).
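A quick arithmetic check (mine, not from the comment above) supports the dtype-size reading: the last working sizes for the two dtypes land on exactly the same byte count.

# 36861 float64 cells and 73722 float32 cells occupy the same number of bytes
print(36861 * 8)               # 294888 bytes, last working float64 size
print(73722 * 4)               # 294888 bytes, last working float32 size
print(36861 * 8 == 73722 * 4)  # True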