Hi, I’m trying to implement a hybrid NN model that takes two inputs, x1 and x2. Both inputs go through the same classical layer, and the two outputs are fed into a quantum layer.
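For completeness, the snippet below assumes the usual imports and a two-wire device along these lines (I’m sketching default.qubit here; any two-wire simulator should behave the same):

import torch
import pennylane as qml

# two qubits, matching qml.probs(wires=[0, 1]) below
dev = qml.device("default.qubit", wires=2)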
@qml.qnode(dev)
def qnode(inputs):
    # the six inputs parameterize one U3 rotation per qubit
    qml.U3(inputs[0], inputs[1], inputs[2], wires=0)
    qml.U3(inputs[3], inputs[4], inputs[5], wires=1)
    return qml.probs(wires=[0, 1])
class HybridModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.clayer = torch.nn.Linear(2, 3)
        # the qnode has no trainable weights, hence the empty weight_shapes
        self.qlayer = qml.qnn.TorchLayer(qnode, weight_shapes={})

    def forward(self, x1, x2):
        # both inputs are passed through the same classical layer
        x1 = self.clayer(x1)
        x2 = self.clayer(x2)
        x = torch.concat([x1, x2])
        x = self.qlayer(x)
        return x[0]

model = HybridModel()
When training the model, the parameters don’t change at all.
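Since I can’t attach my real dataset, here are stand-in tensors with the same shapes (random dummy values; the sizes are placeholders):

# dummy stand-ins for my actual data; only the shapes matter
X1 = torch.rand(2000, 2)
X2 = torch.rand(2000, 2)
y = torch.rand(2000)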
data_loader = torch.utils.data.DataLoader(
    list(zip(X1, X2, y)), batch_size=20, shuffle=True, drop_last=True
)
opt = torch.optim.SGD(model.parameters(), lr=0.2)
loss_fn = torch.nn.MSELoss()

epochs = 5
for epoch in range(epochs):
    running_loss = 0
    for x1s, x2s, ys in data_loader:
        opt.zero_grad()
        preds = [model(x1, x2) for x1, x2 in zip(x1s, x2s)]
        preds = torch.Tensor(preds)
        loss_evaluated = loss_fn(preds, ys)
        loss_evaluated.requires_grad = True
        loss_evaluated.backward()
        print(model.get_parameter('clayer.weight'))
        opt.step()
        running_loss += loss_evaluated
    avg_loss = running_loss / 2000
    print("avg loss: " + str(avg_loss))
The code prints the following at every step:
Parameter containing:
tensor([[ 0.6520, -0.0508],
        [ 0.5716, -0.5352],
        [-0.7030,  0.2188]], requires_grad=True)
Parameter containing:
tensor([[ 0.6520, -0.0508],
        [ 0.5716, -0.5352],
        [-0.7030,  0.2188]], requires_grad=True)
Parameter containing:
tensor([[ 0.6520, -0.0508],
        [ 0.5716, -0.5352],
        [-0.7030,  0.2188]], requires_grad=True)
…
The printed weights are identical at every iteration; the parameters never change.
The only trainable parameters here are in the classical layer, so I passed weight_shapes={} when constructing the quantum layer. I’m not sure whether that is part of the problem.
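As I understand it, weight_shapes only matters when the qnode itself takes trainable arguments. If mine did, I believe the construction would look roughly like this (a hypothetical sketch based on the TorchLayer docs, not my actual code):

# hypothetical variant where the second rotation is trainable
@qml.qnode(dev)
def qnode_with_weights(inputs, weights):
    qml.U3(inputs[0], inputs[1], inputs[2], wires=0)
    qml.U3(weights[0], weights[1], weights[2], wires=1)
    return qml.probs(wires=[0, 1])

# weight_shapes maps each qnode argument to the shape TorchLayer should create
qlayer = qml.qnn.TorchLayer(qnode_with_weights, weight_shapes={"weights": 3})

Since my qnode has no such arguments, I left the dictionary empty.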
What am I doing wrong here? The code above is my minimal example; sorry I couldn’t attach it as a file, since I’m a new user.