Gradient computation fails for evolution under a Hamiltonian

Hello, I’m trying to implement a quantum ML model with evolution under a Hamiltonian,
exp(-iT H)
with
H=\sum_{j=1}^N a_j X_j+\sum_{j=1}^N \sum_{k=1}^{j-1} J_{j k} Z_j Z_k

I want to optimize the Hamiltonian coefficients a_j and J_{jk}, but it doesn’t seem to work.

I tried the Torch interface and got an error, because the coefficient tensor gets converted to NumPy, which is not possible for tensors that require gradients.
I also tried a pure PennyLane implementation, but it fails with an error as well.

Is this not yet supported by the PennyLane operation, or am I just using something incorrectly?

Code and error messages:

Torch implementation

import pennylane as qml
from pennylane import numpy as np

import matplotlib.pyplot as plt
import torch
import torch.optim as optim

n_qubits = 3
device = qml.device("default.qubit", wires=n_qubits)

obs = [qml.PauliX(i) for i in range(n_qubits)] + [qml.PauliZ(i)@qml.PauliZ(j) for i in range(n_qubits) for j in range(i+1, n_qubits)]

@qml.qnode(device)
def model_qnode(inputs, coeffs):
    for i in range(n_qubits):
        qml.RX(inputs, wires=i)

    H = qml.Hamiltonian(coeffs, obs)
    qml.exp(H, 1j)

    return qml.expval(qml.PauliZ(0))


def model_class():
    return qml.qnn.TorchLayer(model_qnode, {"coeffs":[n_qubits + n_qubits*(n_qubits-1)//2]})


model = model_class()
optimizer = optim.Adam(model.parameters(), lr=0.1)
criterion = torch.nn.MSELoss()
hist = []
for i in range(50):
    optimizer.zero_grad()

    pred = model(torch.tensor([0.0]))

    loss = criterion(pred, torch.tensor([1.0]))
    hist.append(loss.item())
    loss.backward()
    optimizer.step()

plt.plot(hist)
plt.show()

Error message

---------------------------------------------------------------------------
DecompositionUndefinedError               Traceback (most recent call last)
File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/transforms/op_transforms.py:264, in op_transform.fn(self, obj, *args, **kwargs)
    261 try:
    262     # attempt to decompose the operation and call
    263     # the tape transform function if defined
--> 264     return self.tape_fn(obj.expand(), *args, **kwargs)
    266 except (
    267     AttributeError,
    268     qml.operation.OperatorPropertyUndefined,
   (...)
    271     # if obj.expand() does not exist, a required operation property was not found,
    272     # or the tape transform function does not exist, simply raise the original exception

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/operation.py:1444, in Operator.expand(self)
   1443 if not self.has_decomposition:
-> 1444     raise DecompositionUndefinedError
   1446 qscript = qml.tape.QuantumScript(self.decomposition())

DecompositionUndefinedError: 

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
     32 for i in range(50):
     33     optimizer.zero_grad()
---> 35     pred = model(torch.tensor([0.0]))
     37     loss = criterion(pred, torch.tensor([1.0]))
     38     hist.append(loss.item())

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/qnn/torch.py:408, in TorchLayer.forward(self, inputs)
    405     results = torch.stack(reconstructor)
    406 else:
    407     # calculate the forward pass as usual
--> 408     results = self._evaluate_qnode(inputs)
    410 # reshape to the correct number of batch dims
    411 if has_batch_dim:

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/qnn/torch.py:429, in TorchLayer._evaluate_qnode(self, x)
    417 """Evaluates the QNode for a single input datapoint.
    418 
    419 Args:
   (...)
    423     tensor: output datapoint
    424 """
    425 kwargs = {
    426     **{self.input_arg: x},
    427     **{arg: weight.to(x) for arg, weight in self.qnode_weights.items()},
    428 }
--> 429 res = self.qnode(**kwargs)
    431 if isinstance(res, torch.Tensor):
    432     return res.type(x.dtype)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/qnode.py:989, in QNode.__call__(self, *args, **kwargs)
    986     self.execute_kwargs.pop("mode")
    988 # pylint: disable=unexpected-keyword-arg
--> 989 res = qml.execute(
    990     (self._tape,),
    991     device=self.device,
    992     gradient_fn=self.gradient_fn,
    993     interface=self.interface,
    994     transform_program=self.transform_program,
    995     gradient_kwargs=self.gradient_kwargs,
    996     override_shots=override_shots,
    997     **self.execute_kwargs,
    998 )
   1000 res = res[0]
   1002 # convert result to the interface in case the qfunc has no parameters

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/interfaces/execution.py:636, in execute(tapes, device, gradient_fn, interface, transform_program, grad_on_execution, gradient_kwargs, cache, cachesize, max_diff, override_shots, expand_fn, max_expansion, device_batch_transform)
    634 # Exiting early if we do not need to deal with an interface boundary
    635 if no_interface_boundary_required:
--> 636     results = inner_execute(tapes)
    637     results = batch_fn(results)
    638     return program_post_processing(results)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/interfaces/execution.py:255, in _make_inner_execute.<locals>.inner_execute(tapes, **_)
    253 if numpy_only:
    254     tapes = tuple(qml.transforms.convert_to_numpy_parameters(t) for t in tapes)
--> 255 return cached_device_execution(tapes)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/interfaces/execution.py:377, in cache_execute.<locals>.wrapper(tapes, **kwargs)
    372         return (res, []) if return_tuple else res
    374 else:
    375     # execute all unique tapes that do not exist in the cache
    376     # convert to list as new device interface returns a tuple
--> 377     res = list(fn(tuple(execution_tapes.values()), **kwargs))
    379 final_res = []
    381 for i, tape in enumerate(tapes):

File ~/miniconda3/envs/qml_torch/lib/python3.10/contextlib.py:79, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
     76 @wraps(func)
     77 def inner(*args, **kwds):
     78     with self._recreate_cm():
---> 79         return func(*args, **kwds)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/_qubit_device.py:629, in QubitDevice.batch_execute(self, circuits)
    624 for circuit in circuits:
    625     # we need to reset the device here, else it will
    626     # not start the next computation in the zero state
    627     self.reset()
--> 629     res = self.execute(circuit)
    630     results.append(res)
    632 if self.tracker.active:

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/devices/default_qubit_torch.py:247, in DefaultQubitTorch.execute(self, circuit, **kwargs)
    239         if params_cuda_device != specified_device_cuda:
    240             warnings.warn(
    241                 f"Torch device {self._torch_device} specified "
    242                 "upon PennyLane device creation does not match the "
    243                 "Torch device of the gate parameters; "
    244                 f"{self._torch_device} will be used."
    245             )
--> 247 return super().execute(circuit, **kwargs)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/_qubit_device.py:337, in QubitDevice.execute(self, circuit, **kwargs)
    334 self.check_validity(circuit.operations, circuit.observables)
    336 # apply all circuit operations
--> 337 self.apply(circuit.operations, rotations=self._get_diagonalizing_gates(circuit), **kwargs)
    339 # generate computational basis samples
    340 if self.shots is not None or circuit.is_sampled:

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/devices/default_qubit.py:294, in DefaultQubit.apply(self, operations, rotations, **kwargs)
    292         self._state = self._apply_parametrized_evolution(self._state, operation)
    293     else:
--> 294         self._state = self._apply_operation(self._state, operation)
    296 # store the pre-rotated state
    297 self._pre_rotated_state = self._state

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/devices/default_qubit.py:334, in DefaultQubit._apply_operation(self, state, operation)
    331     axes = [ax + shift for ax in self.wires.indices(wires)]
    332     return self._apply_ops[operation.name](state, axes)
--> 334 matrix = self._asarray(self._get_unitary_matrix(operation), dtype=self.C_DTYPE)
    336 if operation in diagonal_in_z_basis:
    337     return self._apply_diagonal_unitary(state, matrix, wires)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/devices/default_qubit_torch.py:319, in DefaultQubitTorch._get_unitary_matrix(self, unitary)
    317 if unitary in diagonal_in_z_basis:
    318     return self._asarray(unitary.eigvals(), dtype=self.C_DTYPE)
--> 319 return self._asarray(unitary.matrix(), dtype=self.C_DTYPE)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/ops/op_math/exp.py:393, in Exp.matrix(self, wire_order)
    387     except OperatorPropertyUndefined:
    388         warn(
    389             f"The autograd matrix for {self} is not differentiable. "
    390             "Use a different interface if you need backpropagation.",
    391             UserWarning,
    392         )
--> 393 return super().matrix(wire_order=wire_order)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/ops/op_math/symbolicop.py:239, in ScalarSymbolicOp.matrix(self, wire_order)
    237 # compute base matrix
    238 if isinstance(self.base, qml.Hamiltonian):
--> 239     base_matrix = qml.matrix(self.base)
    240 else:
    241     base_matrix = self.base.matrix()

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/transforms/op_transforms.py:213, in op_transform.__call__(self, *targs, **tkwargs)
    210     obj, *targs = targs
    212 if isinstance(obj, (qml.operation.Operator, qml.tape.QuantumScript)) or callable(obj):
--> 213     return self._create_wrapper(obj, *targs, **tkwargs)
    215 # Input is not an operator nor a QNode nor a quantum tape nor a qfunc.
    216 # Assume Python decorator syntax:
    217 #
   (...)
    229 # Prepend the input to the transform args,
    230 # and create a wrapper function.
    231 if obj is not None:

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/transforms/op_transforms.py:410, in op_transform._create_wrapper(self, obj, wire_order, *targs, **tkwargs)
    407     if wire_order is not None:
    408         tkwargs["wire_order"] = wire_order
--> 410     wrapper = self.fn(obj, *targs, **tkwargs)
    412 elif isinstance(obj, qml.tape.QuantumScript):
    413     # Input is a quantum tape. Get the quantum tape.
    414     tape, verified_wire_order = self._make_tape(obj, wire_order)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/transforms/op_transforms.py:273, in op_transform.fn(self, obj, *args, **kwargs)
    264     return self.tape_fn(obj.expand(), *args, **kwargs)
    266 except (
    267     AttributeError,
    268     qml.operation.OperatorPropertyUndefined,
   (...)
    271     # if obj.expand() does not exist, a required operation property was not found,
    272     # or the tape transform function does not exist, simply raise the original exception
--> 273     raise e1 from e

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/transforms/op_transforms.py:258, in op_transform.fn(self, obj, *args, **kwargs)
    240 """Evaluate the underlying operator transform function.
    241 
    242 If a corresponding tape transform for the operator has been registered
   (...)
    255     any: the result of evaluating the transform
    256 """
    257 try:
--> 258     return self._fn(obj, *args, **kwargs)
    260 except Exception as e1:  # pylint: disable=broad-except
    261     try:
    262         # attempt to decompose the operation and call
    263         # the tape transform function if defined

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/ops/functions/matrix.py:125, in matrix(op, wire_order)
    122     op = 1.0 * op  # convert to a Hamiltonian
    124 if isinstance(op, qml.Hamiltonian):
--> 125     return op.sparse_matrix(wire_order=wire_order).toarray()
    127 return op.matrix(wire_order=wire_order)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/ops/qubit/hamiltonian.py:400, in Hamiltonian.sparse_matrix(self, wire_order)
    397 n = len(wires)
    398 matrix = scipy.sparse.csr_matrix((2**n, 2**n), dtype="complex128")
--> 400 coeffs = qml.math.toarray(self.data)
    402 temp_mats = []
    403 for coeff, op in zip(coeffs, self.ops):

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/autoray/autoray.py:80, in do(fn, like, *args, **kwargs)
     31 """Do function named ``fn`` on ``(*args, **kwargs)``, peforming single
     32 dispatch to retrieve ``fn`` based on whichever library defines the class of
     33 the ``args[0]``, or the ``like`` keyword argument if specified.
   (...)
     77     <tf.Tensor: id=91, shape=(3, 3), dtype=float32>
     78 """
     79 backend = choose_backend(fn, *args, like=like, **kwargs)
---> 80 return get_lib_fn(backend, fn)(*args, **kwargs)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/autoray/autoray.py:1415, in numpy_to_numpy(x)
   1414 def numpy_to_numpy(x):
-> 1415     return do("asarray", x, like="numpy")

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/autoray/autoray.py:80, in do(fn, like, *args, **kwargs)
     31 """Do function named ``fn`` on ``(*args, **kwargs)``, peforming single
     32 dispatch to retrieve ``fn`` based on whichever library defines the class of
     33 the ``args[0]``, or the ``like`` keyword argument if specified.
   (...)
     77     <tf.Tensor: id=91, shape=(3, 3), dtype=float32>
     78 """
     79 backend = choose_backend(fn, *args, like=like, **kwargs)
---> 80 return get_lib_fn(backend, fn)(*args, **kwargs)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/torch/_tensor.py:970, in Tensor.__array__(self, dtype)
    968     return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
    969 if dtype is None:
--> 970     return self.numpy()
    971 else:
    972     return self.numpy().astype(dtype, copy=False)

RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.

Pure PennyLane implementation

import pennylane as qml
from pennylane import numpy as np

import matplotlib.pyplot as plt

n_qubits = 3

device = qml.device("default.qubit", wires=n_qubits)

obs = [qml.PauliX(i) for i in range(n_qubits)] + [qml.PauliZ(i)@qml.PauliZ(j) for i in range(n_qubits) for j in range(i+1, n_qubits)]

coeffs = np.random.rand(n_qubits + n_qubits*(n_qubits-1)//2)*np.pi*2
coeffs.requires_grad = True

@qml.qnode(device)
def model(inputs, coeffs):
    for i in range(n_qubits):
        qml.RX(inputs, wires=i)

    H = qml.Hamiltonian(coeffs, obs)
    qml.exp(H, 1j)

    return qml.expval(qml.PauliZ(0))


optimizer = qml.AdamOptimizer()

def cost(coeffs):
    pred = model(np.array([0.0], requires_grad=False), coeffs)
    loss = (pred - np.array([1.0]))**2
    return loss 

for i in range(50):
  
    coeffs = optimizer.step(cost, coeffs)

Error:

---------------------------------------------------------------------------
DecompositionUndefinedError               Traceback (most recent call last)
File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/transforms/op_transforms.py:264, in op_transform.fn(self, obj, *args, **kwargs)
    261 try:
    262     # attempt to decompose the operation and call
    263     # the tape transform function if defined
--> 264     return self.tape_fn(obj.expand(), *args, **kwargs)
    266 except (
    267     AttributeError,
    268     qml.operation.OperatorPropertyUndefined,
   (...)
    271     # if obj.expand() does not exist, a required operation property was not found,
    272     # or the tape transform function does not exist, simply raise the original exception

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/operation.py:1444, in Operator.expand(self)
   1443 if not self.has_decomposition:
-> 1444     raise DecompositionUndefinedError
   1446 qscript = qml.tape.QuantumScript(self.decomposition())

DecompositionUndefinedError: 

The above exception was the direct cause of the following exception:

AttributeError                            Traceback (most recent call last)
32     return loss 
34     for i in range(50):
---> 36     coeffs = optimizer.step(cost, coeffs)
     37     hist.append(loss)
     39     plt.plot(hist)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/optimize/gradient_descent.py:88, in GradientDescentOptimizer.step(self, objective_fn, grad_fn, *args, **kwargs)
     70 def step(self, objective_fn, *args, grad_fn=None, **kwargs):
     71     """Update trainable arguments with one step of the optimizer.
     72 
     73     Args:
   (...)
     85         If single arg is provided, list [array] is replaced by array.
     86     """
---> 88     g, _ = self.compute_grad(objective_fn, args, kwargs, grad_fn=grad_fn)
     89     new_args = self.apply_grad(g, args)
     91     # unwrap from list if one argument, cleaner return

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/optimize/gradient_descent.py:117, in GradientDescentOptimizer.compute_grad(objective_fn, args, kwargs, grad_fn)
      99 r"""Compute gradient of the objective function at the given point and return it along with
    100 the objective function forward pass (if available).
    101 
   (...)
    114     will not be evaluted and instead ``None`` will be returned.
     115 """
    116 g = get_gradient(objective_fn) if grad_fn is None else grad_fn
--> 117 grad = g(*args, **kwargs)
     118 forward = getattr(g, "forward", None)
    120 num_trainable_args = sum(getattr(arg, \"requires_grad\", False) for arg in args)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/_grad.py:120, in grad.__call__(self, *args, **kwargs)
    117     self._forward = self._fun(*args, **kwargs)
    118     return ()
--> 120 grad_value, ans = grad_fn(*args, **kwargs)  # pylint: disable=not-callable
    121 self._forward = ans
    123 return grad_value

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/autograd/wrap_util.py:20, in unary_to_nary.<locals>.nary_operator.<locals>.nary_f(*args, **kwargs)
     18 else:
     19     x = tuple(args[i] for i in argnum)
---> 20 return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/_grad.py:138, in grad._grad_with_forward(fun, x)
    132 @staticmethod
    133 @unary_to_nary
    134 def _grad_with_forward(fun, x):
    135     """This function is a replica of ``autograd.grad``, with the only
    136     difference being that it returns both the gradient *and* the forward pass
    137     value."""
--> 138     vjp, ans = _make_vjp(fun, x)
    140     if not vspace(ans).size == 1:
    141         raise TypeError(
    142             "Grad only applies to real scalar-output functions. "
    143             "Try jacobian, elementwise_grad or holomorphic_grad."
    144         )

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/autograd/core.py:10, in make_vjp(fun, x)
      8 def make_vjp(fun, x):
      9     start_node = VJPNode.new_root()
---> 10     end_value, end_node =  trace(start_node, fun, x)
     11     if end_node is None:
     12         def vjp(g): return vspace(x).zeros()

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/autograd/tracer.py:10, in trace(start_node, fun, x)
      8 with trace_stack.new_trace() as t:
      9     start_box = new_box(x, t, start_node)
---> 10     end_box = fun(start_box)
     11     if isbox(end_box) and end_box._trace == start_box._trace:
     12         return end_box._value, end_box._node

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/autograd/wrap_util.py:15, in unary_to_nary.<locals>.nary_operator.<locals>.nary_f.<locals>.unary_f(x)
     13 else:
     14     subargs = subvals(args, zip(argnum, x))
---> 15 return fun(*subargs, **kwargs)


     29 def cost(coeffs):
---> 30     pred = model(np.array([0.0], requires_grad=False), coeffs)
     31     loss = (pred - np.array([1.0]))**2
     32     return loss

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/qnode.py:989, in QNode.__call__(self, *args, **kwargs)
    986     self.execute_kwargs.pop("mode")
    988 # pylint: disable=unexpected-keyword-arg
--> 989 res = qml.execute(
    990     (self._tape,),
    991     device=self.device,
    992     gradient_fn=self.gradient_fn,
    993     interface=self.interface,
    994     transform_program=self.transform_program,
    995     gradient_kwargs=self.gradient_kwargs,
    996     override_shots=override_shots,
    997     **self.execute_kwargs,
    998 )
   1000 res = res[0]
   1002 # convert result to the interface in case the qfunc has no parameters

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/interfaces/execution.py:636, in execute(tapes, device, gradient_fn, interface, transform_program, grad_on_execution, gradient_kwargs, cache, cachesize, max_diff, override_shots, expand_fn, max_expansion, device_batch_transform)
    634 # Exiting early if we do not need to deal with an interface boundary
    635 if no_interface_boundary_required:
--> 636     results = inner_execute(tapes)
    637     results = batch_fn(results)
    638     return program_post_processing(results)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/interfaces/execution.py:255, in _make_inner_execute.<locals>.inner_execute(tapes, **_)
    253 if numpy_only:
    254     tapes = tuple(qml.transforms.convert_to_numpy_parameters(t) for t in tapes)
--> 255 return cached_device_execution(tapes)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/interfaces/execution.py:377, in cache_execute.<locals>.wrapper(tapes, **kwargs)
    372         return (res, []) if return_tuple else res
    374 else:
    375     # execute all unique tapes that do not exist in the cache
    376     # convert to list as new device interface returns a tuple
--> 377     res = list(fn(tuple(execution_tapes.values()), **kwargs))
    379 final_res = []
    381 for i, tape in enumerate(tapes):

File ~/miniconda3/envs/qml_torch/lib/python3.10/contextlib.py:79, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
     76 @wraps(func)
     77 def inner(*args, **kwds):
     78     with self._recreate_cm():
---> 79         return func(*args, **kwds)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/_qubit_device.py:629, in QubitDevice.batch_execute(self, circuits)
    624 for circuit in circuits:
    625     # we need to reset the device here, else it will
    626     # not start the next computation in the zero state
    627     self.reset()
--> 629     res = self.execute(circuit)
    630     results.append(res)
    632 if self.tracker.active:

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/_qubit_device.py:337, in QubitDevice.execute(self, circuit, **kwargs)
    334 self.check_validity(circuit.operations, circuit.observables)
    336 # apply all circuit operations
--> 337 self.apply(circuit.operations, rotations=self._get_diagonalizing_gates(circuit), **kwargs)
    339 # generate computational basis samples
    340 if self.shots is not None or circuit.is_sampled:

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/devices/default_qubit.py:294, in DefaultQubit.apply(self, operations, rotations, **kwargs)
    292         self._state = self._apply_parametrized_evolution(self._state, operation)
    293     else:
--> 294         self._state = self._apply_operation(self._state, operation)
    296 # store the pre-rotated state
    297 self._pre_rotated_state = self._state

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/devices/default_qubit.py:334, in DefaultQubit._apply_operation(self, state, operation)
    331     axes = [ax + shift for ax in self.wires.indices(wires)]
    332     return self._apply_ops[operation.name](state, axes)
--> 334 matrix = self._asarray(self._get_unitary_matrix(operation), dtype=self.C_DTYPE)
    336 if operation in diagonal_in_z_basis:
    337     return self._apply_diagonal_unitary(state, matrix, wires)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/devices/default_qubit.py:674, in DefaultQubit._get_unitary_matrix(self, unitary)
    671 if unitary in diagonal_in_z_basis:
    672     return unitary.eigvals()
--> 674 return unitary.matrix()

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/ops/op_math/exp.py:393, in Exp.matrix(self, wire_order)
    387     except OperatorPropertyUndefined:
    388         warn(
    389             f"The autograd matrix for {self} is not differentiable. "
    390             "Use a different interface if you need backpropagation.",
    391             UserWarning,
    392         )
--> 393 return super().matrix(wire_order=wire_order)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/ops/op_math/symbolicop.py:239, in ScalarSymbolicOp.matrix(self, wire_order)
    237 # compute base matrix
    238 if isinstance(self.base, qml.Hamiltonian):
--> 239     base_matrix = qml.matrix(self.base)
    240 else:
    241     base_matrix = self.base.matrix()

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/transforms/op_transforms.py:213, in op_transform.__call__(self, *targs, **tkwargs)
    210     obj, *targs = targs
    212 if isinstance(obj, (qml.operation.Operator, qml.tape.QuantumScript)) or callable(obj):
--> 213     return self._create_wrapper(obj, *targs, **tkwargs)
    215 # Input is not an operator nor a QNode nor a quantum tape nor a qfunc.
    216 # Assume Python decorator syntax:
    217 #
   (...)
    229 # Prepend the input to the transform args,
    230 # and create a wrapper function.
    231 if obj is not None:

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/transforms/op_transforms.py:410, in op_transform._create_wrapper(self, obj, wire_order, *targs, **tkwargs)
    407     if wire_order is not None:
    408         tkwargs["wire_order"] = wire_order
--> 410     wrapper = self.fn(obj, *targs, **tkwargs)
    412 elif isinstance(obj, qml.tape.QuantumScript):
    413     # Input is a quantum tape. Get the quantum tape.
    414     tape, verified_wire_order = self._make_tape(obj, wire_order)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/transforms/op_transforms.py:273, in op_transform.fn(self, obj, *args, **kwargs)
    264     return self.tape_fn(obj.expand(), *args, **kwargs)
    266 except (
    267     AttributeError,
    268     qml.operation.OperatorPropertyUndefined,
   (...)
    271     # if obj.expand() does not exist, a required operation property was not found,
    272     # or the tape transform function does not exist, simply raise the original exception
--> 273     raise e1 from e

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/transforms/op_transforms.py:258, in op_transform.fn(self, obj, *args, **kwargs)
    240 """Evaluate the underlying operator transform function.
    241 
    242 If a corresponding tape transform for the operator has been registered
   (...)
    255     any: the result of evaluating the transform
    256 """
    257 try:
--> 258     return self._fn(obj, *args, **kwargs)
    260 except Exception as e1:  # pylint: disable=broad-except
    261     try:
    262         # attempt to decompose the operation and call
    263         # the tape transform function if defined

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/ops/functions/matrix.py:125, in matrix(op, wire_order)
    122     op = 1.0 * op  # convert to a Hamiltonian
    124 if isinstance(op, qml.Hamiltonian):
--> 125     return op.sparse_matrix(wire_order=wire_order).toarray()
    127 return op.matrix(wire_order=wire_order)

File ~/miniconda3/envs/qml_torch/lib/python3.10/site-packages/pennylane/ops/qubit/hamiltonian.py:439, in Hamiltonian.sparse_matrix(self, wire_order)
    433     mat.append(scipy.sparse.eye(2**i_count, format="coo"))
    435 red_mat = (
    436     functools.reduce(lambda i, j: scipy.sparse.kron(i, j, format=\"coo\"), mat) * coeff
    437 )
--> 439 temp_mats.append(red_mat.tocsr())
    440 # Value of 100 arrived at empirically to balance time savings vs memory use. At this point
    441 # the `temp_mats` are summed into the final result and the temporary storage array is
    442 # cleared.
    443 if (len(temp_mats) % 100) == 0:

AttributeError: 'ArrayBox' object has no attribute 'tocsr'

output of qml.about()

Name: PennyLane
Version: 0.32.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /home/amir/miniconda3/envs/qml_torch/lib/python3.10/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning

Platform info:           Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python version:          3.10.12
Numpy version:           1.23.5
Scipy version:           1.10.0
Installed devices:
- default.gaussian (PennyLane-0.32.0)
- default.mixed (PennyLane-0.32.0)
- default.qubit (PennyLane-0.32.0)
- default.qubit.autograd (PennyLane-0.32.0)
- default.qubit.jax (PennyLane-0.32.0)
- default.qubit.tf (PennyLane-0.32.0)
- default.qubit.torch (PennyLane-0.32.0)
- default.qutrit (PennyLane-0.32.0)
- null.qubit (PennyLane-0.32.0)
- lightning.qubit (PennyLane-Lightning-0.32.0)

Hello @Amir_Akhundzianov, great question!
That error is related to the construction of the Hamiltonian. If instead of writing:

qml.Hamiltonian(coeffs, obs)

you write:

qml.dot(coeffs, obs)

it should work. I have informed the team so that they can study the case and see whether they can make the script compatible with qml.Hamiltonian as well.
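
For concreteness, here is a minimal sketch of the QNode from your post with only that change applied (it reuses the device, n_qubits, and obs you already defined; the idea is that qml.dot builds a Sum operator, which avoids the NumPy cast of the coefficients that shows up in Hamiltonian.sparse_matrix in your traceback):

@qml.qnode(device)
def model_qnode(inputs, coeffs):
    for i in range(n_qubits):
        qml.RX(inputs, wires=i)

    # Build the generator as a sum of scalar products instead of a qml.Hamiltonian
    H = qml.dot(coeffs, obs)
    qml.exp(H, 1j)  # evolution under H, as in the original code

    return qml.expval(qml.PauliZ(0))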

Also, be careful with:

for i in range(n_qubits):
    qml.RX(inputs, wires=i)

You probably want to use inputs[i] there.
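
If inputs is meant to hold one angle per qubit (an assumption; in the posted code it is a single-element tensor), that loop would become:

for i in range(n_qubits):
    qml.RX(inputs[i], wires=i)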

I hope that helps :slight_smile:


Thank you very much! This helped.
