I'm trying to use torch.sparse.spsolve to solve a linear system of equations as follows:
A_sparse = torch.sparse_coo_tensor(indices, values, size=(eq_counter, self.num_regions))
A_sparse_csr = A_sparse.to_sparse_csr()
A_sparse_csr = A_sparse_csr.cuda()
# Create dense vector b
b = torch.tensor(b_values, dtype=slopes.dtype, device=slopes.device)
b = b.cuda()
# Solve the linear system A c = b using torch.sparse.spsolve
intercepts = torch.sparse.spsolve(A_sparse_csr, b) # Shape: (num_regions,)
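For reference, a self-contained sketch of the same call pattern with small placeholder data (the sizes and values below are made up, not my real system) looks like this:
import torch
# Toy 3x3 sparse system built in COO format, then converted to CSR as above
indices = torch.tensor([[0, 0, 1, 2],
                        [0, 1, 1, 2]])
values = torch.tensor([2.0, 1.0, 3.0, 4.0])
A_sparse = torch.sparse_coo_tensor(indices, values, size=(3, 3))
A_sparse_csr = A_sparse.to_sparse_csr().cuda()
b = torch.tensor([1.0, 2.0, 3.0]).cuda()
x = torch.sparse.spsolve(A_sparse_csr, b)  # the RuntimeError is raised on this call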
However, I get the following strange error:
RuntimeError: Calling linear solver with sparse tensors requires compiling PyTorch with CUDA cuDSS and is not supported in ROCm build.
I have an NVIDIA RTX 40 series graphics card, so I don't understand why the ROCm build (which is for AMD GPUs) is even mentioned in the error. I tried to diagnose the problem with the following code snippet:
print(f"PyTorch Version: {torch.__version__}")
print(f"CUDA Version: {torch.version.cuda}")
print(f"Is CUDA Available: {torch.cuda.is_available()}")
# Check the current CUDA device
if torch.cuda.is_available():
    print(f"Current CUDA Device: {torch.cuda.current_device()}")
    print(f"Device Name: {torch.cuda.get_device_name(torch.cuda.current_device())}")
The output was as expected:
PyTorch Version: 2.5.1+cu124
CUDA Version: 12.4
Is CUDA Available: True
Current CUDA Device: 0
Device Name: NVIDIA GeForce RTX 4090
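In case it is useful, I understand one can also confirm whether the installed wheel is a CUDA or ROCm build, and dump the compile-time flags, with something like the snippet below (torch.version.hip should be None on a CUDA wheel; whether cuDSS would appear in the build configuration string is just my assumption):
import torch
# None on a CUDA wheel; only set to a version string on a ROCm build
print(f"HIP Version: {torch.version.hip}")
# Dump the compile-time configuration to look for cuDSS support
print(torch.__config__.show())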
So I really don't know what the issue is, and any help in resolving it would be much appreciated. Thanks in advance!