numpy_zeros performs slightly better than deep_copy for smaller arrays and much better for larger ones, as the timings below show:
import copy
import numpy as np

def deep_copy():
    elevation_arr = np.zeros([900, 1600], np.float32)
    climate_arr = copy.deepcopy(elevation_arr)
    rainfall_arr = copy.deepcopy(elevation_arr)

def numpy_zeros():
    elevation_arr = np.zeros([900, 1600], np.float32)
    climate_arr = np.zeros([900, 1600], np.float32)
    rainfall_arr = np.zeros([900, 1600], np.float32)
%timeit deep_copy()
# 4.13 ms ± 585 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit numpy_zeros()
# 3.01 ms ± 195 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For a 10000 x 10000 array, the timings are as follows; numpy_zeros wins by several orders of magnitude. (A likely reason: np.zeros can request already-zeroed memory from the OS, so the cost is deferred until the pages are first touched, whereas deepcopy must read and write the entire array up front.)
%timeit deep_copy()
# 569 ms ± 50 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit numpy_zeros()
# 15.6 µs ± 1.38 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
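For readers outside IPython, the same comparison can be reproduced with the standard-library timeit module (the %timeit magic above is IPython-only). This is a minimal sketch of the benchmark at the smaller array size; absolute numbers will vary by machine.

```python
import copy
import timeit

import numpy as np

def deep_copy():
    # One allocation, then two full-array deep copies
    elevation_arr = np.zeros([900, 1600], np.float32)
    climate_arr = copy.deepcopy(elevation_arr)
    rainfall_arr = copy.deepcopy(elevation_arr)

def numpy_zeros():
    # Three independent zero-initialized allocations
    elevation_arr = np.zeros([900, 1600], np.float32)
    climate_arr = np.zeros([900, 1600], np.float32)
    rainfall_arr = np.zeros([900, 1600], np.float32)

n = 100
t_copy = timeit.timeit(deep_copy, number=n) / n
t_zeros = timeit.timeit(numpy_zeros, number=n) / n
print(f"deep_copy:   {t_copy * 1e3:.3f} ms per call")
print(f"numpy_zeros: {t_zeros * 1e3:.3f} ms per call")
```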
Why copy.deepcopy()? I think elevation_arr.copy() would anyway be equivalent to a deep copy, since the elements of elevation_arr are immutable ints.

- They are float32s, not ints. But immutable nevertheless, as are ints. My point is that, given an aggregate data structure (in this case a numpy array) consisting of many objects, a shallow copy is good enough if the objects in the aggregation are all immutable. Only when the objects are mutable does it become important to make the copy a deep one, by making copies of the elements themselves.

- copy.copy(arr), copy.deepcopy(arr), arr.copy() and np.array(arr, copy=True) all time the same. deepcopy is only significantly different if the array has object dtype (same as when a list contains lists or dictionaries).
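The distinction raised in the comments can be demonstrated directly: for an object-dtype array holding mutable elements, copy.copy produces a new array that still shares the element objects, while copy.deepcopy duplicates them. A minimal sketch:

```python
import copy

import numpy as np

# Object-dtype array whose elements are mutable lists
arr = np.empty(2, dtype=object)
arr[0] = [1, 2, 3]
arr[1] = [4, 5, 6]

shallow = copy.copy(arr)      # new array, same list objects
deep = copy.deepcopy(arr)     # new array, new list objects

shallow[0].append(99)         # mutates the list shared with arr

print(arr[0])   # the shallow copy's mutation is visible in the original
print(deep[0])  # the deep copy is unaffected
```

For numeric dtypes such as float32 there are no element objects to share, which is why all four copying methods behave (and time) the same there.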