Hey @Yan_Li! It doesn't look like you're using it, but we have a dedicated module in PennyLane for handling how PennyLane interfaces with PyTorch. You can check it out here: Turning quantum nodes into Torch Layers — PennyLane documentation
Currently, true parameter broadcasting within `qnn.TorchLayer` isn't supported. That is, you can still "broadcast", but what happens under the hood is serial execution, not parallel execution — so it looks like broadcasting is happening when it really isn't.
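To make the distinction concrete, here's a plain-Python sketch (not PennyLane's actual internals) of what that serial fallback looks like: each sample in the batch triggers its own circuit execution, and the results are stitched back together afterwards.

```python
import numpy as np

def circuit(single_input):
    # Stand-in for one (non-broadcast) QNode evaluation.
    return float(np.sum(np.cos(single_input)))

def serial_broadcast(batched_inputs):
    # What happens under the hood today: one execution per batch
    # entry, then the outputs are stacked to mimic broadcasting.
    return np.array([circuit(x) for x in batched_inputs])

batch = np.ones((4, 3))  # batch of 4 inputs, 3 features each
out = serial_broadcast(batch)
print(out.shape)  # (4,)
```

The output has the batch dimension you'd expect from true broadcasting, but the work was done one sample at a time.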
There's a PR for this on our GitHub that will add support for true broadcasting, if you're interested in following its progress!
Regarding your specific error: `adjoint` doesn't support parameter broadcasting, so the error you're getting is a little strange. In any case, we should have a more verbose and informative error message.
In the meantime, what you can do is:

- Add `qml.transforms.broadcast_expand` to the QNode:
```python
@qml.transforms.broadcast_expand
@qml.qnode(dev, interface="torch", diff_method=diff_method)
def qnode(inputs, weights):
    qml.AmplitudeEmbedding(
        features=inputs, wires=range(n_qubits), pad_with=0.0, normalize=True
    )
    weights = weights.reshape(-1, 2)
    qml.MPS(
        wires=range(n_qubits),
        n_block_wires=2,
        block=block,
        n_params_block=2,
        template_weights=weights,
    )
    return qml.expval(qml.PauliZ(wires=n_qubits - 1))
```
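Conceptually, `broadcast_expand` rewrites one broadcasted execution into many single-parameter executions and stacks the results. A rough stdlib sketch of that idea (not PennyLane's implementation — `expand_broadcast` and `single_exec` are hypothetical names for illustration):

```python
import numpy as np

def expand_broadcast(params, single_exec):
    # Split a broadcasted parameter array (leading batch dimension)
    # into per-sample executions, then stack the results -- roughly
    # what qml.transforms.broadcast_expand arranges for a QNode.
    return np.stack([single_exec(p) for p in params])

# Hypothetical single-sample "circuit": an expectation-value stand-in.
single_exec = lambda p: np.tanh(p).mean()

params = np.linspace(0.0, 1.0, 6).reshape(3, 2)  # batch of 3 parameter sets
print(expand_broadcast(params, single_exec).shape)  # (3,)
```

So the transform doesn't make execution parallel; it just makes the broadcasted interface work on devices and gradient methods that only understand one parameter set at a time.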
- Switch to a different differentiation method (e.g. `diff_method="backprop"`)
- Switch to a different device that doesn't claim to support parameter broadcasting
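For the backprop option, the change is just in the device and QNode declaration (a configuration sketch, with `n_qubits` as in your code — backprop is only available on simulators that support it, such as `default.qubit`):

```python
import pennylane as qml

# default.qubit supports backprop: gradients flow through the
# simulation itself rather than via adjoint differentiation.
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch", diff_method="backprop")
def qnode(inputs, weights):
    ...
```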