Assign new parameters to a quantum layer in PyTorch

Hi,
I have a question about assigning parameters (i.e. weights) to a quantum layer in PyTorch. I'm not quite sure whether this is a PennyLane question or a PyTorch one.
I create a QNode and wrap it with TorchLayer to obtain a quantum layer, as shown below:

import pennylane as qml
import torch

torch.manual_seed(42)

device = qml.device('default.qubit', wires=1)

@qml.qnode(device)
def circuit(inputs, weights_1):
    # Encode the input, then apply a single trainable rotation
    qml.RX(inputs, wires=0)
    qml.RY(weights_1, wires=0)
    return qml.expval(qml.PauliZ(wires=0))

# One trainable weight of shape (1,); TorchLayer creates the parameter from this
weight_shape = {"weights_1": (1,)}

class QuantumNN(torch.nn.Module):

    def __init__(self):
        super().__init__()
        self.qlayer = qml.qnn.TorchLayer(circuit, weight_shape)

    def forward(self, xx):
        return self.qlayer(xx)

As far as I understand, PennyLane initializes the weights of the model with random values drawn uniformly from [0, 2*pi), and I can check them with the following lines:

model = QuantumNN()

print(model.qlayer.qnode_weights)
print(model.qlayer.weights_1)

which gives

{'weights_1': Parameter containing:
tensor([5.5435], requires_grad=True)}
Parameter containing:
tensor([5.5435], requires_grad=True)
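
Side note: if I understand the docs correctly, TorchLayer also accepts an optional init_method argument (a torch.nn.init function, or a dict mapping weight names to callables or tensors) to override the default uniform draw. A small sketch:

# Sketch, assuming init_method accepts a fixed tensor per weight name
fixed_init = {"weights_1": torch.tensor([0.1])}
qlayer = qml.qnn.TorchLayer(circuit, weight_shape, init_method=fixed_init)
print(qlayer.qnode_weights)  # expected: {'weights_1': Parameter containing: tensor([0.1000], requires_grad=True)}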

But now, if I reassign the parameter with

rand_weights = torch.nn.Parameter(torch.rand(1,))
model.qlayer.weights_1 = rand_weights

print(model.qlayer.qnode_weights)
print(model.qlayer.weights_1)

the outputs of qnode_weights and weights_1 differ:

{'weights_1': Parameter containing:
tensor([5.5435], requires_grad=True)}
Parameter containing:
tensor([0.9150], requires_grad=True)

Is this the expected behaviour, and if so, why?

Hey @nilserik,

Interesting question! There are two things happening when parameters get initialized in TorchLayer, both of which can be distilled down to lines 511-516 here: pennylane/pennylane/qnn/torch.py at d5379eeeae31986e8efe0a5017fd65a5869d8429 · PennyLaneAI/pennylane · GitHub

  1. The qnode_weights attribute is populated according to the parameter initialization strategy.
  2. The parameters are “registered” using the register_parameter method in nn.Module (see here: Module — PyTorch 2.2 documentation). This allows the parameter to be accessed as an attribute under its given name (i.e., that’s why you can call model.qlayer.weights_1); a minimal plain-PyTorch sketch of this is shown after this list.
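
To illustrate point 2 in plain PyTorch, independent of PennyLane (the names here are just for illustration):

import torch

# register_parameter stores the tensor in the module's parameter registry,
# which is what makes both attribute access and named_parameters() see it
module = torch.nn.Module()
p = torch.nn.Parameter(torch.zeros(1))
module.register_parameter("weights_1", p)

print(module.weights_1 is p)            # True: attribute lookup returns the registered parameter
print(dict(module.named_parameters()))  # {'weights_1': Parameter containing: tensor([0.], requires_grad=True)}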

These two attributes go out of sync when you change one of them separately, which is exactly what you’re seeing. Note that TorchLayer’s forward pass reads the weights from qnode_weights, so rebinding the weights_1 attribute alone won’t change what the circuit actually evaluates. I’m not sure whether this is intentional on our part, but I will double-check!
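
In the meantime, if you want to set the weights to specific values yourself, one workaround (a sketch, not an official recipe) is to update the existing parameter in place, since weights_1 and qnode_weights['weights_1'] initially refer to the same tensor:

model = QuantumNN()

# In-place update: both views keep pointing at the same underlying tensor
with torch.no_grad():
    model.qlayer.weights_1.copy_(torch.rand(1))

print(model.qlayer.qnode_weights)  # both prints now show the same new values
print(model.qlayer.weights_1)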