TorchLayer with multiple inputs of different dimensions

It appears that although qml.qnn.TorchLayer() allows multiple weight arguments, it only allows the QNode to have a single inputs argument, which requires packing multiple inputs into one torch tensor and indexing into it. In the following example, I would like the circuit() QNode to accept two arrays of different lengths as inputs, so that the two QAOAEmbedding templates each receive an input array of a different length. Is it possible to do that?

import pennylane as qml
import torch
import torch.nn as nn

n_qbits = 8
m = 4
layer = 2
qdev = qml.device("default.qubit", n_qbits)


@qml.qnode(qdev, interface="torch")
def circuit(inputs, weights_a, weights_b):
    qml.QAOAEmbedding(inputs[0], weights=weights_a, wires=range(n_qbits))
    qml.adjoint(qml.QAOAEmbedding(inputs[1], weights=weights_b, wires=range(n_qbits)))
    return qml.probs(wires=range(m))

class QSim(nn.Module):
    def __init__(self):
        super().__init__()
        self.qlayer = qml.qnn.TorchLayer(
            circuit,
            weight_shapes={
                "weights_a": (layer, 2 * n_qbits),
                "weights_b": (layer, 2 * n_qbits),
            },
        )

    def forward(self, a, b):
        return self.qlayer(torch.stack([a, b], dim=1))[:, 0]


model = QSim().to(device)
# batch num = 8, array size = 4
model(torch.randn(8, 4).to(device), torch.randn(8, 4).to(device))

# tensor([0.0367, 0.0399, 0.0973, 0.0499, 0.1242, 0.0184, 0.1547, 0.0551],
#       device='cuda:0', grad_fn=<SelectBackward0>)

To clarify: I’d like to be able to call

# 8 is the batch size
model(torch.randn(8, 5).to(device), torch.randn(8, 3).to(device))

and the result would be an 8-element array.
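One workaround I can think of is to concatenate the two tensors along the feature axis and slice them apart inside the QNode at fixed offsets. A rough, untested sketch of what I mean (len_a, len_b, and the class name QSimConcat are placeholders; the lengths must be known when the QNode is defined, and I'm assuming QAOAEmbedding accepts the batched feature slices):

import pennylane as qml
import torch
import torch.nn as nn

n_qbits = 8
m = 4
layer = 2
len_a, len_b = 5, 3  # placeholder lengths, fixed at QNode definition time
qdev = qml.device("default.qubit", n_qbits)

@qml.qnode(qdev, interface="torch")
def circuit(inputs, weights_a, weights_b):
    # `inputs` carries both feature vectors concatenated on the last axis,
    # so each one can be recovered by slicing at the known offset
    qml.QAOAEmbedding(inputs[..., :len_a], weights=weights_a, wires=range(n_qbits))
    qml.adjoint(qml.QAOAEmbedding(inputs[..., len_a:], weights=weights_b, wires=range(n_qbits)))
    return qml.probs(wires=range(m))

class QSimConcat(nn.Module):
    def __init__(self):
        super().__init__()
        self.qlayer = qml.qnn.TorchLayer(
            circuit,
            weight_shapes={
                "weights_a": (layer, 2 * n_qbits),
                "weights_b": (layer, 2 * n_qbits),
            },
        )

    def forward(self, a, b):
        # concatenating (rather than stacking) tolerates different lengths
        return self.qlayer(torch.cat([a, b], dim=-1))[:, 0]

But this feels like a workaround rather than the intended way, so I'd still like to know whether TorchLayer supports separate input arguments directly.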

Hi @LdBeth ,

Can you please post the full error traceback and the output of qml.about()?

Ah, I missed the line

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

and the error for

model(torch.randn(8, 5).to(device), torch.randn(8, 3).to(device))

would be

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[3], line 1
----> 1 model(torch.randn(8, 5).to(device), torch.randn(8, 3).to(device))

File ~/miniconda3/envs/jupyter/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
   1551     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1552 else:
-> 1553     return self._call_impl(*args, **kwargs)

File ~/miniconda3/envs/jupyter/lib/python3.12/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
   1557 # If we don't have any hooks, we want to skip the rest of the logic in
   1558 # this function, and just call forward.
   1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1560         or _global_backward_pre_hooks or _global_backward_hooks
   1561         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562     return forward_call(*args, **kwargs)
   1564 try:
   1565     result = None

Cell In[2], line 29, in QSim.forward(self, a, b)
     28 def forward(self, a, b):
---> 29     return self.qlayer(torch.stack([a, b], dim=1))[:, 0]

RuntimeError: stack expects each tensor to be equal size, but got [8, 5] at entry 0 and [8, 3] at entry 1
and the output of qml.about() is:

Name: PennyLane
Version: 0.37.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /home/ldb/miniconda3/envs/jupyter/lib/python3.12/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane_Lightning, PennyLane_Lightning_GPU

Platform info:           Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.38
Python version:          3.12.2
Numpy version:           1.26.4
Scipy version:           1.14.1
Installed devices:
- lightning.qubit (PennyLane_Lightning-0.37.0)
- lightning.gpu (PennyLane_Lightning_GPU-0.37.0)
- default.clifford (PennyLane-0.37.0)
- default.gaussian (PennyLane-0.37.0)
- default.mixed (PennyLane-0.37.0)
- default.qubit (PennyLane-0.37.0)
- default.qubit.autograd (PennyLane-0.37.0)
- default.qubit.jax (PennyLane-0.37.0)
- default.qubit.legacy (PennyLane-0.37.0)
- default.qubit.tf (PennyLane-0.37.0)
- default.qubit.torch (PennyLane-0.37.0)
- default.qutrit (PennyLane-0.37.0)
- default.qutrit.mixed (PennyLane-0.37.0)
- default.tensor (PennyLane-0.37.0)
- null.qubit (PennyLane-0.37.0)

To restate my problem: qml.qnn.TorchLayer() seems to require that the QNode's signature contains a single inputs argument that is one PyTorch tensor. For my purposes I want to train a circuit on at least two input tensors of different sizes (each smaller than the number of qubits). Tensors of the same size can be stacked, and the setup above works; for tensors of different sizes, the shorter one would have to be zero-padded.
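For example, the zero-padding I have in mind would look something like this in the forward pass (just a sketch, assuming a is always the longer tensor):

import torch
import torch.nn.functional as F

def forward(self, a, b):
    # pad the shorter tensor with zeros on its last axis so both tensors
    # have the same length and can be stacked as before
    pad = a.shape[-1] - b.shape[-1]
    b_padded = F.pad(b, (0, pad))  # append `pad` zeros on the right
    return self.qlayer(torch.stack([a, b_padded], dim=1))[:, 0]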

Hi @LdBeth , can you please update to the latest version of PennyLane and let us know if you’re still experiencing the issue with this version?

You can use python -m pip install pennylane --upgrade
to upgrade to PennyLane v0.39, which is the latest version at the moment.

Sorry for the late reply. I'm using conda, and conda-forge only has v0.37 as the latest version, so I don't know how to upgrade in my case.

Also, I found that TorchLayer() gives an unexpected result when the input is three-dimensional:

import numpy as np

model = QSim().to(device)

a = torch.tensor(np.reshape(np.repeat([[1, 0]], 8, axis=0), [8, 2])).to(device)
b = torch.tensor(np.reshape(np.repeat([[0, 0.7]], 8, axis=0), [8, 2])).to(device)
model(a, b)
# tensor([0.1073, 0.0235, 0.0760, 0.0363, 0.0897, 0.0495, 0.0705, 0.0362],
#       device='cuda:0', dtype=torch.float64, grad_fn=<SelectBackward0>)

I expect that since the inputs are identical along the “batch dimension”, the outputs should all be identical, but they are not.

Basically, the goal is to train a SWAP-test classifier that encodes two inputs using QAOAEmbedding. After reading some articles I think I may be able to train two different sets of parameters on two separate circuits, but I still need to know how to compare the outputs of two quantum circuits using a third circuit, and how to optimize that with PyTorch.
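For concreteness, the SWAP-test pattern I have in mind looks roughly like this (a sketch only; the feature length, register sizes, and wire assignments are placeholders):

import pennylane as qml

n_feat = 3                  # placeholder feature length per register
wires_a = [0, 1, 2]         # register for the first embedding
wires_b = [3, 4, 5]         # register for the second embedding
anc = 6                     # ancilla wire for the SWAP test
dev = qml.device("default.qubit", 7)

@qml.qnode(dev, interface="torch")
def swap_test(inputs, weights_a, weights_b):
    # encode the two halves of `inputs` into separate registers
    qml.QAOAEmbedding(inputs[..., :n_feat], weights=weights_a, wires=wires_a)
    qml.QAOAEmbedding(inputs[..., n_feat:], weights=weights_b, wires=wires_b)
    # SWAP test: Hadamard on the ancilla, controlled SWAPs, Hadamard again
    qml.Hadamard(wires=anc)
    for wa, wb in zip(wires_a, wires_b):
        qml.CSWAP(wires=[anc, wa, wb])
    qml.Hadamard(wires=anc)
    # <Z> on the ancilla equals the squared overlap |<psi_a|psi_b>|^2
    return qml.expval(qml.PauliZ(anc))

The ancilla expectation value could then be fed into an ordinary PyTorch loss, if I understand the construction correctly.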

Hi @LdBeth ,

Are you able to test your code on Google Colab, for example? You can also create a new virtual environment with venv and try to run the latest PennyLane version there. Let me know if this works for you; the instructions below should help you set up your environment with venv.

PennyLane installation instructions 2024.pdf (82.1 KB)

I tried on Colab with 0.39.0, and it still produces a result that is unexpected to me.

!pip install pennylane pennylane-qiskit
import pennylane as qml
import numpy as np
import torch
import torch.nn as nn

n_qbits = 8
m = 4
layer = 2
qdev = qml.device("default.qubit", n_qbits)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

@qml.qnode(qdev, interface="torch")
def circuit(inputs, weights_a, weights_b):
    qml.QAOAEmbedding(inputs[0], weights=weights_a, wires=range(n_qbits))
    qml.adjoint(qml.QAOAEmbedding(inputs[1], weights=weights_b, wires=range(n_qbits)))
    return qml.probs(wires=range(m))

class QSim(nn.Module):
    def __init__(self):
        super().__init__()
        self.qlayer = qml.qnn.TorchLayer(
            circuit,
            weight_shapes={
                "weights_a": (layer, 2 * n_qbits),
                "weights_b": (layer, 2 * n_qbits),
            },
        )

    def forward(self, a, b):
        return self.qlayer(torch.stack([a, b], dim=1))[:, 0]


model = QSim().to(device)
a = torch.tensor(np.reshape(np.repeat([[1, 0]], 8, axis=0), [8, 2])).to(device)
b = torch.tensor(np.reshape(np.repeat([[0, 0.7]], 8, axis=0), [8, 2])).to(device)
print(a)
print(b)
print(model(a, b))

I expect the outputs to all be identical, since the purpose is to batch over a 2D input.

tensor([[1, 0],
        [1, 0],
        [1, 0],
        [1, 0],
        [1, 0],
        [1, 0],
        [1, 0],
        [1, 0]])
tensor([[0.0000, 0.7000],
        [0.0000, 0.7000],
        [0.0000, 0.7000],
        [0.0000, 0.7000],
        [0.0000, 0.7000],
        [0.0000, 0.7000],
        [0.0000, 0.7000],
        [0.0000, 0.7000]], dtype=torch.float64)
tensor([0.1545, 0.2610, 0.0142, 0.0082, 0.0576, 0.1005, 0.0554, 0.1029],
       dtype=torch.float64, grad_fn=<SelectBackward0>)

Hi @LdBeth ,

It looks like your code isn’t doing what you think it’s doing.

Note that what you pass to the model are the features, not the weights. The weights get automatically calculated by Torch.
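For instance, you can check that the weights are registered and trained by Torch itself, not passed in by you (a quick sketch; the parameter names should follow your weight_shapes keys):

for name, param in model.qlayer.named_parameters():
    # should list weights_a and weights_b with requires_grad=True
    print(name, tuple(param.shape), param.requires_grad)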

If you’re interested in learning about using PyTorch with PennyLane you can use this demo.

It might help if you simplify your model or use print statements to help debug.
Let me know if you have any further questions.
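As a concrete example of something to check: if TorchLayer hands your QNode the whole batched tensor, then after your torch.stack the inputs have shape (batch, 2, n_features), and inputs[0] selects the first sample of the batch rather than the first of the two stacked feature vectors. A sketch of indexing that keeps the batch dimension intact (assuming that input layout):

@qml.qnode(qdev, interface="torch")
def circuit(inputs, weights_a, weights_b):
    # inputs[..., 0, :] selects the first stacked feature vector for every
    # sample in the batch, whereas inputs[0] selects only the first sample
    qml.QAOAEmbedding(inputs[..., 0, :], weights=weights_a, wires=range(n_qbits))
    qml.adjoint(qml.QAOAEmbedding(inputs[..., 1, :], weights=weights_b, wires=range(n_qbits)))
    return qml.probs(wires=range(m))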