Backpropagation with PyTorch

Hi @James_Ellis!

Is it possible to use backpropagation with default.qubit on PyTorch? Or only TensorFlow?

Currently, standard backpropagation is supported using both TensorFlow and the default Autograd:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, diff_method="backprop")
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=0)
    return qml.expval(qml.PauliZ(0))

weights = np.array([0.1, 0.2], requires_grad=True)

# compute the gradient via backprop
print(qml.grad(circuit)(weights))
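
For completeness, the TensorFlow version is analogous (a minimal sketch, assuming TensorFlow 2.x eager execution):

import pennylane as qml
import tensorflow as tf

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, diff_method="backprop", interface="tf")
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=0)
    return qml.expval(qml.PauliZ(0))

weights = tf.Variable([0.1, 0.2])

# gradients are computed by backpropagating through the simulation
with tf.GradientTape() as tape:
    loss = circuit(weights)

print(tape.gradient(loss, weights))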

Unfortunately, we cannot support PyTorch for standard backpropagation until PyTorch has full support for complex numbers.

Having said that, we have recently added a new feature in PennyLane v0.14, released just this week: adjoint backpropagation. This is a form of backpropagation that takes advantage of the fact that quantum computing is unitary/reversible, and thus has lower memory and time overhead than standard backpropagation.

Adjoint backpropagation is implemented directly in PennyLane, so it supports any interface, including PyTorch:

import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, diff_method="adjoint", interface="torch")
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=0)
    return qml.expval(qml.PauliZ(0))

weights = torch.tensor([0.1, 0.2], requires_grad=True)
loss = circuit(weights)
loss.backward()
print(weights.grad)

The latest version of lightning.qubit (v0.14) is also compatible with the new adjoint differentiation method :slight_smile:
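
For example, swapping in lightning.qubit should work the same way (a minimal sketch; assumes the pennylane-lightning plugin is installed):

import pennylane as qml
import torch

# lightning.qubit is a high-performance C++ statevector simulator
dev = qml.device("lightning.qubit", wires=2)

@qml.qnode(dev, diff_method="adjoint", interface="torch")
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=0)
    return qml.expval(qml.PauliZ(0))

weights = torch.tensor([0.1, 0.2], requires_grad=True)
circuit(weights).backward()
print(weights.grad)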

Note: As this is a new feature, if you notice any bugs or issues, please let us know via a GitHub issue!
