New return system prevents using weight parameters as batched inputs

Under the old return system, the ArbitraryUnitary weights accept batched parameters; under the new return system, they do not.

Keeping the same syntax as under the old return system, the state vector complains that it has no batch dimension, while adding the batch dimension makes ArbitraryUnitary complain that its weights are batched.

Expected behavior: a batched circuit execution. In this case, the algebra is a batched matrix-vector multiplication, einsum('bij,bj->bi'), as under the old return system.
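As a minimal sketch of that algebra in PyTorch (shapes chosen to match the toy experiment below; the names U and psi are just for illustration):

import torch

batch_size, dim = 32, 2**4
U = torch.rand(batch_size, dim, dim)      # one matrix per batch element
psi = torch.rand(batch_size, dim)         # one state vector per batch element
out = torch.einsum("bij,bj->bi", U, psi)  # batched matrix-vector product
print(out.shape)                          # torch.Size([32, 16])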

# Reproduction script
import numpy as np
import torch
import pennylane as qml

# Create a toy experiment
nbwires = 4
batch_size = 32
dev = qml.device("default.qubit", wires=nbwires)
wires = np.arange(nbwires)
# Batched, normalized state vectors plus ArbitraryUnitary weights (4**n - 1 weights each)
state = torch.nn.functional.normalize(torch.rand((batch_size, 2**nbwires)), dim=1)
weights = torch.rand((batch_size, 4**nbwires - 1))
entry = torch.cat((state, weights), 1)
print(entry.shape)
print(state.shape)

# @qml.batch_input(argnum=1)
# First error: batched weights rejected by ArbitraryUnitary
@qml.qnode(dev, diff_method="backprop", interface="torch")
def circuit_newreturn_err1(inputs):
    qml.QubitStateVector((inputs[:, 0:2**nbwires]), wires=wires)
    unitary = qml.ArbitraryUnitary(weights=inputs[:, 2**nbwires:4 ** nbwires - 1+2**nbwires], wires=wires)
    qml.apply(unitary)
    return qml.probs(wires)

# Second error: unbatched input rejected by QubitStateVector
@qml.qnode(dev, diff_method="backprop", interface="torch")
def circuit_newreturn_err2(inputs):
    qml.QubitStateVector((inputs[0:2**nbwires]), wires=wires)
    unitary = qml.ArbitraryUnitary(weights=inputs[2**nbwires:4 ** nbwires - 1+2**nbwires], wires=wires)
    qml.apply(unitary)
    return qml.probs(wires)

# old return system works
@qml.qnode(dev, diff_method="backprop", interface="torch")
def circuit_oldreturn(inputs):
    qml.QubitStateVector((inputs[0:2**nbwires]), wires=wires)
    unitary = qml.ArbitraryUnitary(weights=inputs[2**nbwires :4 ** nbwires - 1+2**nbwires], wires=wires)
    qml.apply(unitary)
    return qml.probs(wires)

newlayererr1 = qml.qnn.TorchLayer(circuit_newreturn_err1, weight_shapes={})
newlayererr2 = qml.qnn.TorchLayer(circuit_newreturn_err2, weight_shapes={})
oldlayer = qml.qnn.TorchLayer(circuit_oldreturn, weight_shapes={})

qml.disable_return()
outold = oldlayer(entry)
print(outold)
qml.enable_return()
# comment out the err1 lines below to reproduce err2
outnewerr1 = newlayererr1(entry)
print(outnewerr1)

outnewerr2 = newlayererr2(entry)
print(outnewerr2)

Both error messages:

# error 1
Traceback (most recent call last):
  File "/home/al/PycharmProjects/pennylanetest/example.py", line 52, in <module>
    outnewerr1 = newlayererr1(entry)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/qnn/torch.py", line 408, in forward
    results = self._evaluate_qnode(inputs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/qnn/torch.py", line 429, in _evaluate_qnode
    res = self.qnode(**kwargs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/qnode.py", line 974, in __call__
    self.construct(args, kwargs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/qnode.py", line 872, in construct
    self._tape = make_qscript(self.func, shots)(*args, **kwargs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/tape/qscript.py", line 1531, in wrapper
    result = fn(*args, **kwargs)
  File "/home/al/PycharmProjects/pennylanetest/example.py", line 21, in circuit_newreturn_err1
    unitary = qml.ArbitraryUnitary(weights=inputs[:, 2**nbwires:4 ** nbwires - 1+2**nbwires], wires=wires)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/templates/subroutines/arbitrary_unitary.py", line 100, in __init__
    raise ValueError(
ValueError: Weights tensor must be of shape (255,); got (32, 255).

# error 2

Traceback (most recent call last):
  File "/home/al/PycharmProjects/pennylanetest/example.py", line 55, in <module>
    outnewerr2 = newlayererr2(entry)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/qnn/torch.py", line 408, in forward
    results = self._evaluate_qnode(inputs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/qnn/torch.py", line 429, in _evaluate_qnode
    res = self.qnode(**kwargs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/qnode.py", line 974, in __call__
    self.construct(args, kwargs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/qnode.py", line 872, in construct
    self._tape = make_qscript(self.func, shots)(*args, **kwargs)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/tape/qscript.py", line 1531, in wrapper
    result = fn(*args, **kwargs)
  File "/home/al/PycharmProjects/pennylanetest/example.py", line 28, in circuit_newreturn_err2
    qml.QubitStateVector((inputs[0:2**nbwires]), wires=wires)
  File "/home/al/PycharmProjects/pennylanetest/venv/lib/python3.10/site-packages/pennylane/ops/qubit/state_preparation.py", line 175, in __init__
    raise ValueError("State vector must have shape (2**wires,) or (batch_size, 2**wires).")
ValueError: State vector must have shape (2**wires,) or (batch_size, 2**wires).

Process finished with exit code 1

qml.about():

Platform info: Linux-6.2.0-33-generic-x86_64-with-glibc2.35
Python version: 3.10.12
Numpy version: 1.23.5
Scipy version: 1.10.0
Installed devices:

  • lightning.kokkos (PennyLane-Lightning-Kokkos-0.32.0)
  • default.gaussian (PennyLane-0.32.0)
  • default.mixed (PennyLane-0.32.0)
  • default.qubit (PennyLane-0.32.0)
  • default.qubit.autograd (PennyLane-0.32.0)
  • default.qubit.jax (PennyLane-0.32.0)
  • default.qubit.tf (PennyLane-0.32.0)
  • default.qubit.torch (PennyLane-0.32.0)
  • default.qutrit (PennyLane-0.32.0)
  • null.qubit (PennyLane-0.32.0)
  • lightning.qubit (PennyLane-Lightning-0.32.0)

Hi @coot,

I’m sorry to see that your workflow no longer works. The “problem” is that with the new return type system, TorchLayer does not take care of the batching anymore; instead, we rely on the operations themselves being batching-compatible. This is why, under the old return type system, you did not notice that ArbitraryUnitary does not yet support batching itself. This can easily be fixed, but the fix will likely only be merged after the release of 0.33, so the first official version with support would be 0.34; until then, you’d have to install from the GitHub repository.
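For instance, an operation with native broadcasting support, such as qml.RX, accepts a leading batch dimension directly. A minimal sketch (the QNode here is made up for illustration):

import torch
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch")
def batched_rx(theta):
    qml.RX(theta, wires=0)  # RX supports parameter broadcasting
    return qml.probs(wires=0)

print(batched_rx(torch.rand(5)).shape)  # one probability row per batch element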
As far as I can tell, you have the following options:

  • Wait for the pull request to be merged and install from the master branch of the repository
  • Copy the modifications from the pull request once it’s up, or install from the PR’s branch.
  • Call the decomposition of ArbitraryUnitary yourself and use the new device API for that: Instead of
dev = qml.device("default.qubit", wires=nbwires)
...
qml.ArbitraryUnitary(weights, wires)

do

dev = qml.devices.experimental.DefaultQubit2() # no need to pass the number of wires :)
...
qml.ArbitraryUnitary.compute_decomposition(qml.math.transpose(weights), wires)

(Note the transpose of the weights; it’s again needed because ArbitraryUnitary does not handle batching yet.) Essentially, with this workaround you’re mimicking what our stable solution will do: the new device API will take over in 0.33, and the fix for ArbitraryUnitary will probably arrive in 0.34. A fuller sketch of this workaround follows the list below.

  • Use a different operation that supports batching.
  • Iterate manually over the batching dimension (probably not so neat…)
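
For concreteness, here is a minimal sketch of the decomposition workaround from the third option, reusing nbwires, wires, and entry from your script (the QNode name is made up, and this assumes the 0.32 experimental device API):

dev = qml.devices.experimental.DefaultQubit2()

@qml.qnode(dev, diff_method="backprop", interface="torch")
def circuit_workaround(inputs):
    qml.QubitStateVector(inputs[:, 0:2**nbwires], wires=wires)
    # Queue the decomposition directly; the weights are transposed because
    # ArbitraryUnitary does not handle a leading batch dimension yet.
    qml.ArbitraryUnitary.compute_decomposition(
        qml.math.transpose(inputs[:, 2**nbwires:]), wires=wires
    )
    return qml.probs(wires)

out = circuit_workaround(entry)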

I hope this helps, let me know if you have follow-up questions or require more details!

Happy coding! :slight_smile:

update: The PR already has been merged :slight_smile:

Thank you very much, and sorry for the late reply; it was a busy weekend. Since it was merged before the release of 0.33, does this mean the fix made it into 0.33?

Is there a way to tell from the documentation which operations currently support batching?


Hi @coot,

It didn’t make it into version 0.33, but it’s already in master on GitHub.

I don’t think we have information about batching for every operation in the documentation. But if you run into issues with any particular operation, please let us know here and we’ll investigate whether it’s a known issue. :smiley:

Hi @coot,

Some updates found by my colleague Christina!

Here in the docs we list the operations that support broadcasting. However, this lives in a fairly hidden part of the docs, so it may not be 100% accurate.

To support broadcasting, ndim_params has to be overridden, so running the following check gives a fairly reliable proxy for whether broadcasting is supported by a particular operator:

type(op).ndim_params != qml.operation.Operator.ndim_params
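
For example, a quick illustrative check (the choice of qml.RX here is just an example):

import pennylane as qml

op = qml.RX(0.5, wires=0)
# True means ndim_params is overridden, i.e. RX likely supports broadcasting
print(type(op).ndim_params != qml.operation.Operator.ndim_params)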

Please let me know if you have any questions.

I hope this helps!