@isaacdevlugt I think cuQuantum is installed correctly:
cuquantum-python-cu11 23.3.0
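Here is roughly how I checked it on my side (just a quick sanity check, assuming a standard pip install):

from importlib.metadata import version

# Ask pip's metadata for the installed cuquantum wheel; on my machine this prints 23.3.0.
print(version("cuquantum-python-cu11"))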
I also changed my code to try to fix the bug:
import torch
import pennylane as qml
import warnings

warnings.filterwarnings('ignore')

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dev = qml.device("lightning.gpu", wires=2, batch_obs=True)

@qml.batch_params
@qml.qnode(dev, interface='torch', diff_method='adjoint')
def circuit(theta):
    qml.RX(theta, wires=1)
    qml.PauliZ(wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.Identity(wires=[0, 1]))

print('circuit device:', circuit.device)

def get_U(theta):
    matrix = qml.matrix(circuit)(theta)
    return matrix

params = torch.tensor(torch.rand(5)).to(device)
print('input params:', params.device)

u = get_U(torch.tensor(params,requires_grad=True))
print(u)
print('u device', u.device)
I got new errors, and this is exactly what I want to figure out:
Traceback (most recent call last):
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/ops/functions/matrix.py", line 141, in matrix
return op.matrix(wire_order=wire_order)
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/operation.py", line 780, in matrix
return expand_matrix(canonical_matrix, wires=self.wires, wire_order=wire_order)
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/math/matrix_manipulation.py", line 171, in expand_matrix
expanded_batch_matrices = [reduce(kron_interface, mats) for mats in mats_list]
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/math/matrix_manipulation.py", line 171, in <listcomp>
expanded_batch_matrices = [reduce(kron_interface, mats) for mats in mats_list]
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/math/matrix_manipulation.py", line 129, in kron_interface
return qml.math.kron(mat1, mat2, like=interface)
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/math/multi_dispatch.py", line 151, in wrapper
return fn(*args, **kwargs)
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/math/multi_dispatch.py", line 163, in kron
return ar.numpy.kron(*args, like=like, **kwargs)
File "/home/--/.local/lib/python3.10/site-packages/autoray/autoray.py", line 79, in do
return get_lib_fn(backend, fn)(*args, **kwargs)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/--/test_matrix.py", line 27, in <module>
u = get_U(torch.tensor(params,requires_grad=True))
File "/home/--/test_matrix.py", line 21, in get_U
matrix = qml.matrix(circuit)(theta)
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/qnode.py", line 1027, in __call__
res = qml.execute(
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/interfaces/execution.py", line 612, in execute
return post_processing(tapes)
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/interfaces/execution.py", line 609, in post_processing
return program_post_processing(program_pre_processing(results))
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/transforms/core/transform_program.py", line 86, in _apply_postprocessing_stack
results = postprocessing(results)
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/transforms/core/transform_program.py", line 56, in _batch_postprocessing
return tuple(fn(results[sl]) for fn, sl in zip(individual_fns, slices))
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/transforms/core/transform_program.py", line 56, in <genexpr>
return tuple(fn(results[sl]) for fn, sl in zip(individual_fns, slices))
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/ops/functions/matrix.py", line 172, in processing_fn
result = matrix(res[0].operations[0], wire_order=wire_order)
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/ops/functions/matrix.py", line 143, in matrix
return matrix(op.expand(), wire_order=wire_order)
File "/home/--/anaconda3/lib/python3.10/site-packages/pennylane/operation.py", line 1425, in expand
raise DecompositionUndefinedError
pennylane.operation.DecompositionUndefinedError
The error message tells me that two devices are involved: cuda and cpu. I don't know how this bug happened :smiling_face_with_tear: since I already moved the variable to the CUDA device with params = torch.tensor(torch.rand(5)).to(device). The only workaround I have found is to put everything on the CPU, but that is not what I want. Do you have any insight?
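If it helps, my guess from the traceback is that qml.matrix builds the expanded matrix through qml.math.kron, and one of the factors ends up on the CPU while the parameter-dependent one is on cuda:0. The same RuntimeError shows up in plain torch when the operands live on different devices:

import torch

# Mixing a CPU tensor and a CUDA tensor in kron raises the same RuntimeError
# as the one in the traceback above (cuda:0 vs cpu).
a = torch.eye(2)                   # on the CPU
b = torch.eye(2, device="cuda")    # on cuda:0
torch.kron(a, b)                   # RuntimeError: Expected all tensors to be on the same device ...

And this is roughly what I meant by "putting everything on the CPU" (a minimal sketch of my workaround, not what I actually want):

# Build the matrix from CPU tensors only, then move the result to the GPU afterwards.
theta_cpu = params.detach().cpu().requires_grad_(True)
u = qml.matrix(circuit)(theta_cpu)   # all intermediate tensors stay on the CPU
u = u.to(device)                     # only the finished unitary ends up on cuda
print('u device:', u.device)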