The simplest and most Pythonic solution is to use the built-in sum(), like so:
sum(list_of_arrays)
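For instance, given a list of same-shaped arrays, sum() starts from 0 and adds each array in turn, producing the element-wise total (a minimal sketch; the small arrays here are just illustrative):

import numpy as np

# Three 2x2 arrays; sum() adds them element-wise, starting from 0.
list_of_arrays = [np.ones((2, 2)), 2 * np.ones((2, 2)), 3 * np.ones((2, 2))]
total = sum(list_of_arrays)
print(total)  # [[6. 6.]
              #  [6. 6.]]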
This may even be faster than other options, at least under some conditions (thanks to Loc Quan for pointing this out!). For example, on my laptop, with Python 3.10.2 and NumPy 2.1.3, I ran the following code:
import numpy as np
from functools import reduce

n = 100000; m = 5; k = 5
list_of_arrays = [np.random.random_sample((n, m)) for _ in range(k)]
%timeit np.sum(list_of_arrays, axis=0)
7.29 ms ± 459 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit np.add.reduce(list_of_arrays)
7.04 ms ± 134 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit reduce(np.add, list_of_arrays)
2.04 ms ± 86.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit sum(list_of_arrays)
1.59 ms ± 44.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
This shows a large advantage for sum() over np.sum() and np.add.reduce(), and a more modest but still meaningful speedup versus reduce(np.add, ...).
np.sum(list_of_arrays, axis=0) should work. Or np.add.reduce(list_of_arrays).
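If in doubt, it is easy to confirm that all of these approaches compute the same element-wise sum (a quick sanity check under the same setup as above; array shapes are arbitrary):

import numpy as np
from functools import reduce

n, m, k = 1000, 5, 5
list_of_arrays = [np.random.random_sample((n, m)) for _ in range(k)]

# All four approaches produce the same element-wise sum.
assert np.allclose(np.sum(list_of_arrays, axis=0), sum(list_of_arrays))
assert np.allclose(np.add.reduce(list_of_arrays), sum(list_of_arrays))
assert np.allclose(reduce(np.add, list_of_arrays), sum(list_of_arrays))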