I am trying to use MottonenStatePreparation for batched circuits in a TorchLayer, and have been running into issues. For some of my input state vectors everything works, but for others it does not. While I understand that MottonenStatePreparation has not been fully tested for differentiability, it is not clear to me that this is where the issue lies, since the error arises in pennylane\qnn\torch.py before I even try to update parameters. Here is the error:
```
  File "User\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "User\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "User\site-packages\pennylane\qnn\torch.py", line 406, in forward
    results = torch.reshape(results, (*batch_dims, *results.shape[1:]))
RuntimeError: shape '[8]' is invalid for input of size 1
```
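For context, my reading of that last frame (an interpretation from the traceback, not verified against the TorchLayer source): the layer expects the QNode to return one result per row of the batch and then reshapes the stacked results back to the batch dimensions. If the QNode instead returns a single unbatched value, reshaping it to the batch shape fails with exactly this message:

```python
import torch

# TorchLayer.forward reshapes the QNode results back to the batch shape.
# If the results tensor holds a single scalar instead of one value per
# sample, reshaping a size-1 tensor to the batch shape (8,) fails:
results = torch.tensor(0.5)
try:
    torch.reshape(results, (8,))  # mimics torch.reshape(results, (*batch_dims, ...))
except RuntimeError as err:
    print(err)  # shape '[8]' is invalid for input of size 1
```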
The code that produces this error:
```python
import torch
import torch.nn as nn
import pennylane as qml
import numpy as np


def QuantumLayer():
    n_qubits = 3
    dev = qml.device("default.qubit", wires=n_qubits)

    def _circuit(inputs, weights):
        qml.MottonenStatePreparation(state_vector=inputs, wires=[0, 1, 2])
        qml.RY(phi=weights, wires=[0])
        return qml.expval(qml.PauliZ(wires=0))

    qlayer = qml.QNode(_circuit, dev, interface="torch")
    weight_shapes = {"weights": (1)}
    return qml.qnn.TorchLayer(qlayer, weight_shapes)


# Define a simple PyTorch model class
class SimpleQuantumModel(nn.Module):
    def __init__(self):
        super(SimpleQuantumModel, self).__init__()
        self.quantum_layer = QuantumLayer()

    def forward(self, x):
        return self.quantum_layer(x)


# Example usage
model = SimpleQuantumModel()
numpy_data = np.load("mottoen_test.npz", allow_pickle=True)
features = torch.tensor(numpy_data['data'], dtype=torch.float32, requires_grad=True)[160:168]
# FAILS:
#   0:168
#   161:168
#   160:170
# SUCCEEDS:
#   0:167
print(features.shape)
print(features)
output = model(features)
print(output)
```
The output before the error:
```
torch.Size([8, 8])
tensor([[0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
        [0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
        [0.0000, 0.3273, 0.9449, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
        [0.0000, 0.5672, 0.8236, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
        [0.5823, 0.4304, 0.6897, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
        [0.0219, 0.7780, 0.6279, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
        [0.7041, 0.4822, 0.5212, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
        [0.7641, 0.0000, 0.6451, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]],
       grad_fn=<SliceBackward0>)
```
The data from mottoen_test.npz in the minimal code has shape (1453, 8). When I try to batch all of it at once, I get the same error:

```
RuntimeError: shape '[1453]' is invalid for input of size 1
```
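For what it's worth, feeding the samples through one at a time and stacking the outputs sidesteps the batched reshape path entirely (much slower, but it narrows the problem to batching). A minimal sketch of that loop, using a stand-in `model` function as a placeholder for the real quantum layer:

```python
import torch

def model(x):
    # Hypothetical stand-in for SimpleQuantumModel: returns one value
    # per input row, like a single expectation value per sample
    return x[:, 0].cos()

features = torch.rand(8, 8)
# Evaluate one sample at a time and stack the results, avoiding the
# internal batched reshape inside TorchLayer.forward
outputs = torch.stack([model(f.unsqueeze(0)).squeeze(0) for f in features])
print(outputs.shape)  # torch.Size([8])
```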
I cannot seem to isolate the problem to particular input data (the rows are all normalized vectors): the slice 0:168 fails, but 165:168 doesn't, for example. Below is the example tensor from above that failed:
```python
state_vectors = [[0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.0000, 0.3273, 0.9449, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.0000, 0.5672, 0.8236, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.5823, 0.4304, 0.6897, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.0219, 0.7780, 0.6279, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.7041, 0.4822, 0.5212, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.7641, 0.0000, 0.6451, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]
features = torch.tensor(state_vectors, requires_grad=True)
```
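To back up the "all normalized" claim, here is a quick sanity check of the row norms of that failing tensor (computed from the printed 4-decimal values, so only accurate to about 1e-3):

```python
import torch

state_vectors = [[0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.0000, 0.3273, 0.9449, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.0000, 0.5672, 0.8236, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.5823, 0.4304, 0.6897, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.0219, 0.7780, 0.6279, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.7041, 0.4822, 0.5212, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
                 [0.7641, 0.0000, 0.6451, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]
# Each row should be a unit vector; check the L2 norm per row
norms = torch.linalg.norm(torch.tensor(state_vectors), dim=1)
print(torch.allclose(norms, torch.ones(8), atol=1e-3))  # True
```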
Thank you for the help.
System Information:
```
Name: PennyLane
Version: 0.33.1
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /usr/local/lib/python3.10/dist-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning
Platform info: Linux-5.15.120+-x86_64-with-glibc2.35
Python version: 3.10.12
Numpy version: 1.23.5
Scipy version: 1.11.3
```