I want to compute the Kronecker product of a long list of small matrices (say, Pauli matrices). I tried to define the small matrices using scipy.sparse.csr_array, and use scipy.sparse.kron to perform the Kronecker product. However, the performance isn't ideal.
import numpy as np
from scipy import sparse
import functools
import time
sigma_x = sparse.csr_array(np.array([[0, 1], [1, 0]]))
sigma_y = sparse.csr_array(np.array([[0, -1j], [1j, 0]]))
sigma_z = sparse.csr_array(np.array([[1., 0], [0, -1.]]))
sigma = [sigma_x, sigma_y, sigma_z]
arguments = [sigma[i % 3] for i in range(3 * 5)]
start_time = time.time()
result = functools.reduce(sparse.kron, arguments)
end_time = time.time()
execution_time = end_time - start_time
print(execution_time)
=== Update 2 ===
Found a simple fix: I should pass the format='csr' keyword when calling sparse.kron. So I define
def kron_csr(x, y):
    return sparse.kron(x, y, format='csr')
and then
arguments = [sigma[i % 3] for i in range(3 * 5)]
result = functools.reduce(kron_csr, arguments)
would yield the result in just 0.005 seconds.
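As a sanity check (a minimal sketch, not from the original post), the kron_csr reduction can be verified against a dense NumPy reference on a short chain, where the full 64 x 64 result is cheap to materialize:

```python
import functools
import numpy as np
from scipy import sparse

sigma_x = sparse.csr_array(np.array([[0, 1], [1, 0]]))
sigma_y = sparse.csr_array(np.array([[0, -1j], [1j, 0]]))
sigma_z = sparse.csr_array(np.array([[1., 0], [0, -1.]]))
sigma = [sigma_x, sigma_y, sigma_z]

def kron_csr(x, y):
    # keep every intermediate in CSR instead of the default output format
    return sparse.kron(x, y, format='csr')

# short chain of 6 Pauli factors, so the result is only 64 x 64
args = [sigma[i % 3] for i in range(6)]
fast = functools.reduce(kron_csr, args)

# dense reference computed with np.kron on the same factors
dense = functools.reduce(np.kron, [m.toarray() for m in args])
assert np.allclose(fast.toarray(), dense)
```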
=== Update 1 ===
Following a comment below, I split the computation into two steps: first compute the Kronecker product of the first 3 * 4 Pauli matrices, then compute the Kronecker product of that result with the remaining 3 Pauli matrices,
start_time = time.time()
arguments = [sigma[i % 3] for i in range(3 * 4)]  # first 3 * 4 factors
first = functools.reduce(sparse.kron, arguments)
second = functools.reduce(sparse.kron, [sigma_x, sigma_y, sigma_z])
result = functools.reduce(sparse.kron, [first, second])
end_time = time.time()
execution_time = end_time - start_time
print(execution_time)
or first compute the Kronecker product of sigma_x, sigma_y, sigma_z, then compute the Kronecker product of five copies of this intermediate matrix,
start_time = time.time()
result_1 = functools.reduce(sparse.kron, sigma)
result = functools.reduce(sparse.kron, [result_1 for i in range(5)])
end_time = time.time()
execution_time = end_time - start_time
print(execution_time)
The performance improves to around 4~9 seconds.
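Both regroupings rely on the associativity of the Kronecker product: grouping the factors differently changes only the cost of the intermediates, never the result. A minimal sketch checking this on a 6-factor chain (illustrative, not from the original post):

```python
import functools
import numpy as np
from scipy import sparse

sigma = [sparse.csr_array(np.array(m)) for m in (
    [[0, 1], [1, 0]],        # sigma_x
    [[0, -1j], [1j, 0]],     # sigma_y
    [[1., 0], [0, -1.]],     # sigma_z
)]

# grouped: build the 8 x 8 block once, then combine two copies of it
block = functools.reduce(sparse.kron, sigma)
grouped = functools.reduce(sparse.kron, [block, block])

# flat: reduce over all 6 factors in one pass
flat = functools.reduce(sparse.kron, sigma + sigma)

# associativity guarantees both orderings give the same matrix
assert np.allclose(grouped.toarray(), flat.toarray())
```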
The execution time comes out to something like 11 seconds. The same computation using Mathematica takes only around 0.01 seconds,
Sigma = {SparseArray[{{0, 1}, {1, 0}}],
   SparseArray[{{0, -I}, {I, 0}}],
   SparseArray[{{1, 0}, {0, -1}}]};
((KroneckerProduct @@
Join @@ Table[{Sigma[[1]], Sigma[[2]], Sigma[[3]]}, {i, 5}]) //
Timing)[[1]]
I wonder how to improve the Python code's performance (hopefully to something like Mathematica's)?
np.array takes around the same time, 11 seconds, so sparse or dense doesn't make much of a difference in Python for this example. But in Mathematica, SparseArray drastically improves performance.

Timing functools.reduce(sparse.kron, arguments[:i]) for increasing i, the array grows by 4x for each increase in i and has a consistent sparsity of 0.5. Evaluation time tends to increase by about 3.5x per step. By i = 15 you are getting a result in the GB range (2**30 ~ 1G).
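The growth rate quoted above can be checked with a little arithmetic: each 2 x 2 factor doubles the matrix dimension, so the number of elements quadruples with every additional factor (this sketch is illustrative, not from the original discussion):

```python
# growth of the full Kronecker chain: each 2x2 factor doubles the dimension,
# so the total number of matrix elements quadruples with every factor
for n_factors in (5, 10, 15):
    dim = 2 ** n_factors       # side length of the result
    n_elements = dim * dim     # total entries = 4 ** n_factors
    print(n_factors, dim, n_elements)

# at 15 factors, a dense complex128 array would need 4**15 * 16 bytes = 16 GiB
assert 4 ** 15 == 2 ** 30
```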