Question about TorchLayer

I have run into a problem when using qml.qnn.TorchLayer. I want to define the encoding layer in my own way, so I encode the inputs as follows:

for i in range(n_qubits):
    qml.RY(inputs[i], wires=i)

But I get the following error:

RuntimeError: shape '[5, -1]' is invalid for input of size 2

When I use qml.AngleEmbedding(inputs, wires=range(n_qubits), rotation='Y') instead, it works. I haven't updated my PennyLane in a while, and I thought these two were the same thing in the previous version, right? But in the latest version, it seems they are not. Can anybody help me with this? If I want to use my own encoding layer in the latest version, what should I do?

Any help will be appreciated.

Hello @cheng!

Would you mind sending me a minimal working version of your code that reproduces this error? And can you please post the output of qml.about() and your full error traceback?

I think taking a look at the templates documentation might be helpful for you.

Cheers,
Ludmila

Thanks for your reply. Here is the complete code.

import numpy as np
import pennylane as qml
import torch
import sklearn.datasets

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    for i in range(n_qubits):
        qml.RY(inputs[i], wires=i)
    # qml.AngleEmbedding(inputs, wires=range(n_qubits), rotation='Y')
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

weight_shapes = {"weights": (3, n_qubits, 3)}

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
clayer1 = torch.nn.Linear(2, 2)
clayer2 = torch.nn.Linear(2, 2)
softmax = torch.nn.Softmax(dim=1)
model = torch.nn.Sequential(clayer1, qlayer, clayer2, softmax)

samples = 100
x, y = sklearn.datasets.make_moons(samples)
y_hot = np.zeros((samples, 2))
y_hot[np.arange(samples), y] = 1

X = torch.tensor(x).float()
Y = torch.tensor(y_hot).float()

opt = torch.optim.SGD(model.parameters(), lr=0.5)
loss = torch.nn.L1Loss()


epochs = 8
batch_size = 5
batches = samples // batch_size

data_loader = torch.utils.data.DataLoader(list(zip(X, Y)), batch_size=batch_size,
                                          shuffle=True, drop_last=True)

for epoch in range(epochs):

    running_loss = 0

    for x, y in data_loader:
        opt.zero_grad()

        loss_evaluated = loss(model(x), y)
        loss_evaluated.backward()

        opt.step()

        running_loss += loss_evaluated

    avg_loss = running_loss / batches
    print("Average loss over epoch {}: {:.4f}".format(epoch + 1, avg_loss))

Thank you for the code! Now, would you mind sending me the output of qml.about()?

I think you meant to say that by substituting AngleEmbedding for the RY loop, the code works, right? I believe the problem is that the inputs/results are being batched, but your circuit does not account for the batch dimension. Make sure that the shapes and sizes of the arguments (inputs, weights) match what your circuit expects.
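
To make the shape issue concrete, here is a minimal sketch (just an illustration on my side, not your traceback) of the difference between indexing a batched tensor by sample and slicing it by feature; the shapes assume a batch of 5 samples with 2 features, matching your batch_size and n_qubits:

import torch

# With batching/broadcasting, the whole batch can be handed to the QNode,
# so `inputs` has shape (batch_size, n_features) = (5, 2) here.
inputs = torch.randn(5, 2)

print(inputs[0].shape)       # torch.Size([2]) -> the first sample, not the first feature
print(inputs[..., 0].shape)  # torch.Size([5]) -> the first feature of every sample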

I hope it helps! :slight_smile:

Thanks again for your patience.
The same code runs without errors on PennyLane==0.26.0.
So does that mean all the parameter sizes match?

Hello,

I just meant that you have to pass arguments with the proper dimensions to the functions and/or templates you would like to use. It does not mean that inputs and weights should have the same dimensions; you just need to be careful about which shapes the functions you pass them to expect. :slight_smile:
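
For example (a small illustration on my side, not part of your code), you can ask a template for the weight shape it expects instead of hard-coding it:

import pennylane as qml

n_qubits = 2
# StronglyEntanglingLayers expects weights of shape (n_layers, n_wires, 3)
shape = qml.StronglyEntanglingLayers.shape(n_layers=3, n_wires=n_qubits)
print(shape)  # (3, 2, 3) -- consistent with the weight_shapes used above
weight_shapes = {"weights": shape}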

In that version, PennyLane didn't yet have batch (parameter-broadcasting) support, so it was basically calling the QNode once per sample. The problem in your code is that the for loop in the QNode assumes the inputs aren't batched, which is why you are getting the error. Try using qml.RY(inputs[..., i], wires=i)!
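
As a rough sketch (adapting your circuit, assuming the same n_qubits, dev, and weight_shapes as above), the QNode would then look like this:

@qml.qnode(dev)
def qnode(inputs, weights):
    # Slice along the last axis so this works whether `inputs` has
    # shape (n_qubits,) or a batched shape (batch_size, n_qubits).
    for i in range(n_qubits):
        qml.RY(inputs[..., i], wires=i)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))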

I hope this will help you to fix your code! :pray: