RuntimeError when using batched input with default.mixed, backprop, and torch

Hi all!

I’m running into a RuntimeError when using PennyLane’s default.mixed device together with the torch interface and backprop differentiation. The error only occurs when passing a batched input tensor (shape [batch_size]), not for single inputs.

Here’s a fully self-contained example to reproduce:

import pennylane as qml
from pennylane.transforms import broadcast_expand
import torch

# Important: Use 'default.mixed' device to trigger the issue
dev = qml.device("default.mixed", wires=1)

# Important: Use 'backprop' as the differentiation method
@qml.qnode(dev, interface="torch", diff_method="backprop")
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

# Important: Use input with batch dimension (here 2)
x = torch.tensor([0.3, 0.5], requires_grad=True)

# Workaround: uncommenting the next line resolves the issue
# circuit = broadcast_expand(circuit)

print(circuit(x))

Here is the full error message:

RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.

Notes:

  • This does not happen with the default.qubit (non-mixed) device.
  • Using broadcast_expand as a workaround avoids the error, but it is much slower than native batched execution (see the sketch after these notes).
  • It looks like batched execution with default.mixed, torch, and backprop tries to convert a grad-tracking torch tensor to NumPy, which breaks autograd.
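
For anyone hitting the same error before upgrading, here is a minimal sketch of the workaround next to the same circuit on default.qubit for comparison. The make_circuit helper is just for this example; the devices, circuit, and batched input mirror the reproduction above.

import pennylane as qml
from pennylane.transforms import broadcast_expand
import torch

dev_mixed = qml.device("default.mixed", wires=1)
dev_qubit = qml.device("default.qubit", wires=1)

def make_circuit(dev):
    # Same circuit as in the reproduction above, parametrised over the device
    @qml.qnode(dev, interface="torch", diff_method="backprop")
    def circuit(x):
        qml.RX(x, wires=0)
        return qml.expval(qml.PauliZ(0))
    return circuit

x = torch.tensor([0.3, 0.5], requires_grad=True)

# Workaround on default.mixed: broadcast_expand splits the batched tape into
# one tape per batch element, so the device never sees a batched tensor.
# This avoids the RuntimeError but executes the circuit batch_size times.
expanded = broadcast_expand(make_circuit(dev_mixed))
print(expanded(x))   # expval of PauliZ after RX(x) is cos(x) for each element

# For comparison, default.qubit handles the batched input natively and does
# not raise the error.
print(make_circuit(dev_qubit)(x))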

Output of qml.about():

Name: PennyLane
Version: 0.41.1
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/domi/projects/.venvs/qml/lib/python3.12/site-packages
Requires: appdirs, autograd, autoray, cachetools, diastatic-malt, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, tomlkit, typing-extensions
Required-by: PennyLane-qiskit, PennyLane_Lightning, PennyLane_Lightning_GPU

Platform info: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Python version: 3.12.3
Numpy version: 1.26.4
Scipy version: 1.15.3
Installed devices:

  • qiskit.aer (PennyLane-qiskit-0.41.0.post0)
  • qiskit.basicaer (PennyLane-qiskit-0.41.0.post0)
  • qiskit.basicsim (PennyLane-qiskit-0.41.0.post0)
  • qiskit.remote (PennyLane-qiskit-0.41.0.post0)
  • lightning.gpu (PennyLane_Lightning_GPU-0.41.1)
  • lightning.qubit (PennyLane_Lightning-0.41.1)
  • default.clifford (PennyLane-0.41.1)
  • default.gaussian (PennyLane-0.41.1)
  • default.mixed (PennyLane-0.41.1)
  • default.qubit (PennyLane-0.41.1)
  • default.qutrit (PennyLane-0.41.1)
  • default.qutrit.mixed (PennyLane-0.41.1)
  • default.tensor (PennyLane-0.41.1)
  • null.qubit (PennyLane-0.41.1)
  • reference.qubit (PennyLane-0.41.1)

I just realised this issue was resolved in last week’s 0.42.0 release.

Thank you very much!
