Parameters fail to update when using AmplitudeEmbedding

Hello,

I’m trying to run the simple QNode training code for qml.qnn.TorchLayer from the official PennyLane documentation.

I’ve modified the original code: I replaced AngleEmbedding with AmplitudeEmbedding and removed some of the classical linear layers so that the nn.Module contains only the QNode layer.

However, after each iteration the parameters in the model seem not to update at all. Could anyone help me find out which part is wrong?

Here’s my (modified) code:

import numpy as np
import pennylane as qml
import torch
import sklearn.datasets
from pennylane.ops.qubit import CNOT

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface='torch')
def qnode(inputs, weights):
    #print(inputs)
    qml.templates.AmplitudeEmbedding(inputs, wires=range(n_qubits), normalize=True, pad=0.3)
    #qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits), ranges=None, imprimitive=CNOT)
    
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

#qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
#clayer1 = torch.nn.Linear(2, 4)
#clayer2 = torch.nn.Linear(2, 2)
#softmax = torch.nn.Softmax(dim=1)
#model = torch.nn.Sequential(clayer1, qlayer, clayer2, softmax)

class FullyEntangledQNodeNN(torch.nn.Module):
    def __init__(self):
        super(FullyEntangledQNodeNN, self).__init__()
        #self.__nn_layer_type = nn_layer_type
        #self.__nn_layer_ver_type = nn_layer_ver_type

        #quantum_node, num_qubits = fully_entangled_node(input_dim)
        weight_shapes = {"weights": (1, n_qubits, 3)}  # StronglyEntanglingLayers expects shape (n_layers, n_qubits, 3); one layer here
        #self.quantum_layer = qml.qnn.TorchLayer(quantum_node, weight_shapes).double()
        #self.quantum_layer.weight = torch.nn.Parameter(torch.DoubleTensor(weight))
        
        self.qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
        #self.clayer1 = torch.nn.Linear(2, 4)
        self.clayer2 = torch.nn.Linear(2, 2)
        #self.softmax = torch.nn.Softmax(dim=1)

    def forward(self, x):
        output = None
        #output = self.clayer1(x)
        output = self.qlayer(x)
        #output = self.clayer2(output)
        #output = self.softmax(output)

        return output

model = FullyEntangledQNodeNN()

samples = 100
x, y = sklearn.datasets.make_moons(samples)
y_hot = np.zeros((samples, 2))
y_hot[np.arange(samples), y] = 1

X = torch.tensor(x).float()
Y = torch.tensor(y_hot).float()

opt = torch.optim.SGD(model.parameters(), lr=0.5)
loss = torch.nn.L1Loss()

epochs = 8
batch_size = 5
batches = samples // batch_size

data_loader = torch.utils.data.DataLoader(list(zip(X, Y)), batch_size=batch_size,
                                          shuffle=True, drop_last=True)

for epoch in range(epochs):

    running_loss = 0

    for x, y in data_loader:
        opt.zero_grad()

        loss_evaluated = loss(model(x), y)
        loss_evaluated.backward()

        opt.step()

        running_loss += loss_evaluated.item()  # .item() detaches the value so the autograd graph is freed each step

    avg_loss = running_loss / batches
    print("Average loss over epoch {}: {:.4f}".format(epoch + 1, avg_loss))

Result:

Average loss over epoch 1: 0.5575
Average loss over epoch 2: 0.5575
Average loss over epoch 3: 0.5575
Average loss over epoch 4: 0.5575
Average loss over epoch 5: 0.5575
Average loss over epoch 6: 0.5575
Average loss over epoch 7: 0.5575
Average loss over epoch 8: 0.5575
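
One way to confirm that the weights are truly frozen, rather than the loss merely plateauing, is to snapshot the parameters and compare them after a single optimization step (a minimal sketch reusing the objects defined above):

before = {name: p.detach().clone() for name, p in model.named_parameters()}

xb, yb = next(iter(data_loader))
opt.zero_grad()
loss(model(xb), yb).backward()
opt.step()

for name, p in model.named_parameters():
    # for the frozen quantum weights this should print "changed: False"
    print(name, "changed:", not torch.allclose(before[name], p.detach()))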

Hi @akawarren,

AmplitudeEmbedding is non-differentiable, since it involves some nontrivial pre-processing of the inputs, which is why your parameters never update. You can see a warning about this in the docs: https://pennylane.readthedocs.io/en/stable/code/api/pennylane.templates.embeddings.AmplitudeEmbedding.html

We are thinking about some future upgrades to the library that will make this operation, and others involving pre-processing, naturally differentiable, but as of the current version it is not.
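
For intuition, the pre-processing in question is the classical padding and renormalization of the feature vector before it is loaded as amplitudes. A plain-NumPy sketch of roughly what pad=0.3, normalize=True does (my illustration, not code from the docs):

x0 = np.array([0.1, 0.2])                                     # the 2 input features
padded = np.concatenate([x0, np.full(2**n_qubits - 2, 0.3)])  # pad to length 2**n_qubits = 4
state = padded / np.linalg.norm(padded)                       # rescale to a unit-norm state vector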


Hi @akawarren, as well as changing from AmplitudeEmbedding to, e.g., AngleEmbedding, you could also try removing the interface="torch" argument from the @qml.qnode decorator. qml.qnn.TorchLayer() converts the QNode to the Torch interface internally.
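
A minimal version of that suggestion might look like this (a sketch, assuming the rest of the code above stays the same):

@qml.qnode(dev)  # no interface argument; TorchLayer handles the conversion internally
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))  # differentiable embedding
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))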
