Hi,
I have a question about reassigning the parameters (i.e. weights) of a quantum layer in PyTorch. I am not quite sure whether this is a PennyLane question or a PyTorch question.
I create a QNode and wrap it with TorchLayer to build a quantum layer, as shown below:
```python
import pennylane as qml
import torch

torch.manual_seed(42)

device = qml.device('default.qubit', wires=1)

@qml.qnode(device)
def circuit(inputs, weights_1):
    qml.RX(inputs, wires=0)
    qml.RY(weights_1, wires=0)
    return qml.expval(qml.PauliZ(wires=0))

weight_shape = {"weights_1": (1,)}

class QuantumNN(torch.nn.Module):
    def __init__(self):
        super(QuantumNN, self).__init__()
        self.qlayer = qml.qnn.TorchLayer(circuit, weight_shape)

    def forward(self, xx):
        xx = self.qlayer(xx)
        return xx
```
As far as I understand, PennyLane initializes the weights of the model randomly, uniformly between 0 and 2*pi, and I can check this with the following lines:
```python
model = QuantumNN()
print(model.qlayer.qnode_weights)
print(model.qlayer.weights_1)
```
which gives
```
{'weights_1': Parameter containing:
tensor([5.5435], requires_grad=True)}
Parameter containing:
tensor([5.5435], requires_grad=True)
```
But now, if I reassign the parameter with

```python
rand_weights = torch.nn.Parameter(torch.rand(1,))
model.qlayer.weights_1 = rand_weights
print(model.qlayer.qnode_weights)
print(model.qlayer.weights_1)
```
the output of qnode_weights and weights_1 is different:
```
{'weights_1': Parameter containing:
tensor([5.5435], requires_grad=True)}
Parameter containing:
tensor([0.9150], requires_grad=True)
```
Is this the expected behaviour, and if so, why?
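For what it's worth, here is a minimal pure-Python sketch of the aliasing I suspect is going on. The Box class and all names here are illustrative stand-ins of my own, not PennyLane internals: I am assuming that qnode_weights and the weights_1 attribute start out as two references to the same Parameter object, so rebinding the attribute leaves the dict entry pointing at the old object.

```python
class Box:
    """Stand-in for torch.nn.Parameter (just holds a value)."""
    def __init__(self, value):
        self.value = value

param = Box(5.5435)
registry = {"weights_1": param}  # plays the role of qnode_weights
attribute = param                # plays the role of model.qlayer.weights_1

# Rebinding the attribute (like model.qlayer.weights_1 = rand_weights)
# replaces one reference but leaves the dict entry untouched:
attribute = Box(0.9150)
print(registry["weights_1"].value)  # 5.5435 -- still the old object
print(attribute.value)              # 0.9150 -- the new object

# An in-place update would instead mutate the shared object, keeping both
# views consistent:
registry["weights_1"].value = 0.9150
print(registry["weights_1"].value)  # 0.9150
```

If this picture is right, then an in-place update of the existing parameter would keep both views in sync, whereas attribute rebinding would not. Is that the correct mental model here?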