Error "Broadcasting with MottonenStatePreparation is not supported." when using AmplitudeEmbedding after other operators

I am trying to implement the data re-uploading technique using AmplitudeEmbedding:

import torch
from torch.nn import Module
import pennylane as qml

# dev, num_qubits, and inputs_to_state are defined elsewhere in my code (not shown);
# inputs_to_state maps the flattened inputs to a normalized state vector.

# Define quantum circuit
@qml.qnode(dev, interface='torch')
def quantum_layer(inputs, weights):
    # Encoding
    state = inputs_to_state(inputs)
    for i in range(weights.shape[0]):
        qml.AmplitudeEmbedding(state, wires=range(num_qubits))
        qml.StronglyEntanglingLayers(weights=weights[i].unsqueeze(0), wires=range(num_qubits))
    return qml.expval(qml.PauliZ(0))

class QuantumModel(Module):
    def __init__(self, reps=1):
        super().__init__()
        self.qlayer = qml.qnn.TorchLayer(quantum_layer,
                                         weight_shapes={'weights': (reps, num_qubits, 3)},
                                         init_method={
                                             "weights": torch.nn.init.normal_,
                                         })

    def forward(self, x):
        x = torch.flatten(x, start_dim=-3)
        x = self.qlayer(x)
        x = (x + 1) / 2
        return x

This error occurs when I try to run it with reps greater than 1.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[197], line 1
----> 1 output = model(input_data)
      2 output

File \venv\lib\site-packages\torch\nn\modules\module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
   1551     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1552 else:
-> 1553     return self._call_impl(*args, **kwargs)

File \venv\lib\site-packages\torch\nn\modules\module.py:1562, in Module._call_impl(self, *args, **kwargs)
   1557 # If we don't have any hooks, we want to skip the rest of the logic in
   1558 # this function, and just call forward.
   1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1560         or _global_backward_pre_hooks or _global_backward_hooks
   1561         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562     return forward_call(*args, **kwargs)
   1564 try:
   1565     result = None

Cell In[194], line 21
     19 def forward(self, x):
     20     x = torch.flatten(x, start_dim=-3)
---> 21     x = self.qlayer(x)
...
    358     )
    360 a = qml.math.abs(state_vector)
    361 omega = qml.math.angle(state_vector)

ValueError: Broadcasting with MottonenStatePreparation is not supported. Please use the qml.transforms.broadcast_expand transform to use broadcasting with MottonenStatePreparation.

I also tried using qml.transforms.broadcast_expand, but I'm not sure if it's the correct way to do it, and it's very slow.
Thank you for the help.

Hi @Bank_Patamawisut ,

Welcome back to the Forum!

Could you please share a minimal reproducible example so that we can look for the root of the problem? A minimal reproducible example (or minimal working example) is the simplest version of the code that reproduces the problem. It should be self-contained, including all necessary imports, data, functions, etc., so that we can copy-paste the code and reproduce the problem. However, it shouldn't contain anything unnecessary, such as gates or functions that can be removed without affecting the issue.

If you're not sure what this means, please make sure to check out this video.

Also please share the output of qml.about(). Thanks!

Hello. Can I ask if you managed to fix your issue? I have the same problem, in the same situation: when applying more data re-uploading with AngleEmbedding().

The qml.about() is:
Name: PennyLane
Version: 0.38.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/paolo/.local/lib/python3.12/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, toml, typing-extensions
Required-by: PennyLane-qiskit, PennyLane_Lightning, PennyLane_Lightning_GPU

Platform info:    Linux-6.8.0-47-generic-x86_64-with-glibc2.39
Python version:   3.12.3
Numpy version:    1.26.4
Scipy version:    1.11.4
Installed devices:
- default.clifford (PennyLane-0.38.0)
- default.gaussian (PennyLane-0.38.0)
- default.mixed (PennyLane-0.38.0)
- default.qubit (PennyLane-0.38.0)
- default.qubit.autograd (PennyLane-0.38.0)
- default.qubit.jax (PennyLane-0.38.0)
- default.qubit.legacy (PennyLane-0.38.0)
- default.qubit.tf (PennyLane-0.38.0)
- default.qubit.torch (PennyLane-0.38.0)
- default.qutrit (PennyLane-0.38.0)
- default.qutrit.mixed (PennyLane-0.38.0)
- default.tensor (PennyLane-0.38.0)
- null.qubit (PennyLane-0.38.0)
- lightning.gpu (PennyLane_Lightning_GPU-0.38.0)
- qiskit.aer (PennyLane-qiskit-0.38.0)
- qiskit.basicaer (PennyLane-qiskit-0.38.0)
- qiskit.basicsim (PennyLane-qiskit-0.38.0)
- qiskit.remote (PennyLane-qiskit-0.38.0)
- lightning.qubit (PennyLane_Lightning-0.38.0)

My guess is that this type of embedding in PennyLane only works for initializing circuits.

In my case, I implemented the amplitude encoding myself, and it worked.
I think you should do the same because you will encounter many more problems, and reimplementing it yourself will offer more customization.

Hi @p-o-lo, welcome to the Forum!

Would you be able to share a minimal reproducible example? Basically, code that we can copy-paste to try to replicate the issue.

@Bank_Patamawisut , I’m sorry to hear that you had to reimplement this yourself. Is there anything specific that you changed with respect to PennyLane’s implementation? Or did you start from scratch? This can help us learn how to improve this feature.

Thank you for your reply.
I implemented the amplitude embedding using the most straightforward approach, applying controlled phase gates for all possible states, although this is probably not the most efficient method. What I think PennyLane could improve is enabling the conversion of qml.AmplitudeEmbedding into a set of gates, or allowing it to be applied mid-circuit (as there is currently a broadcasting issue). This would make it easier to apply in many applications.
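To illustrate the general idea, here is a minimal two-qubit sketch for real amplitudes. It is not the actual implementation from this thread: it uses controlled RY rotations to set the magnitudes (phase gates alone cannot change them), and the helper name encode_two_qubits is made up. Because it consists of ordinary gates, it can be placed anywhere in a circuit:

import numpy as np
import pennylane as qml

def encode_two_qubits(x):
    # Encode 4 real, normalized amplitudes [x0, x1, x2, x3] on 2 qubits
    # using only (controlled) RY rotations.
    n0 = np.sqrt(x[0] ** 2 + x[1] ** 2)  # weight of the qubit-0 |0> branch
    n1 = np.sqrt(x[2] ** 2 + x[3] ** 2)  # weight of the qubit-0 |1> branch
    qml.RY(2 * np.arctan2(n1, n0), wires=0)
    # Rotate qubit 1 within each branch of qubit 0
    qml.ctrl(qml.RY, control=0, control_values=0)(2 * np.arctan2(x[1], x[0]), wires=1)
    qml.ctrl(qml.RY, control=0, control_values=1)(2 * np.arctan2(x[3], x[2]), wires=1)

dev = qml.device('default.qubit', wires=2)

@qml.qnode(dev)
def reupload(x, theta):
    encode_two_qubits(x)    # first upload
    qml.RY(theta, wires=0)  # a stand-in trainable layer
    encode_two_qubits(x)    # re-upload mid-circuit: plain gates, no state preparation
    return qml.state()

print(reupload(np.array([0.5, 0.5, 0.5, 0.5]), 0.3))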

Thanks for the feedback @Bank_Patamawisut !

We’ll note it down for future improvements. The issue might be complex since it arises from MottonenStatePreparation, but we’ll see what we can do. If you have any code of what worked for you and what didn’t, we could take it into account for a possible future implementation.

Regarding your original question on qml.transforms.broadcast_expand, here’s a code example on how it can be used. This might help you too @p-o-lo !

import pennylane as qml
from pennylane import numpy as pnp

# Create your device
dev = qml.device('default.qubit', wires=2)

# Create your QNode
@qml.qnode(dev)
def circuit(f=None):
    qml.AmplitudeEmbedding(features=f, wires=range(2))
    return qml.expval(qml.Z(0))

# Use a transform to allow broadcasting
expanded_circuit = qml.transforms.broadcast_expand(circuit)

# Set an initial value for your parameters
f = pnp.array([1/2, 1/2, 1/2, 1/2])

# Create data in another dimension (to test broadcasting)
x = qml.math.stack((f,f))

# Print your data, circuit, and output
print('x \n',x)
print('Expanded circuit \n',qml.draw(expanded_circuit)(x))
print('Expanded circuit output \n',expanded_circuit(x))

# Create a cost function which returns a scalar
def cost(params):
    return qml.math.sum(expanded_circuit(params))

# Create an optimizer
opt = qml.AdamOptimizer()

# One optimization iteration
print('New parameters after one iteration \n',opt.step(cost,x))
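As a side note, broadcast_expand works by expanding the broadcasted execution into a separate tape for each element of the batch, so you lose the performance benefit of native parameter broadcasting. This is likely why you found it slow.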

I think you are right. The amplitude embedding works only on a register initialized to the all-zero state. I implemented it from scratch using the vector and matrix representation.
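For completeness, one way to do the matrix version for real amplitudes is to build a unitary whose first column is the target state (a Householder reflection works) and apply it with qml.QubitUnitary, which is an ordinary gate and can be placed anywhere in the circuit. A minimal sketch (the helper name state_to_unitary is made up; this is an illustration rather than the code used in this thread):

import numpy as np
import pennylane as qml

def state_to_unitary(psi):
    # Householder reflection U with U|0...0> = |psi>, for real normalized psi.
    psi = np.asarray(psi, dtype=float)
    e0 = np.zeros_like(psi)
    e0[0] = 1.0
    if np.allclose(psi, e0):
        return np.eye(len(psi))
    w = psi - e0
    w = w / np.linalg.norm(w)
    return np.eye(len(psi)) - 2.0 * np.outer(w, w)

dev = qml.device('default.qubit', wires=2)

@qml.qnode(dev)
def circuit(x, theta):
    U = state_to_unitary(x)
    qml.QubitUnitary(U, wires=[0, 1])  # amplitude encoding as an ordinary gate
    qml.RY(theta, wires=0)
    qml.QubitUnitary(U, wires=[0, 1])  # can be re-applied mid-circuit
    return qml.expval(qml.PauliZ(0))

print(circuit(np.array([0.5, 0.5, 0.5, 0.5]), 0.3))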

Actually, I am not sure whether data re-uploading with amplitude encoding is meaningful. Let me try to explain. Suppose we encode a dataset X in the amplitudes of a quantum state |psi>, and then apply some transformation to |psi> with a parametrized gate U(theta). If I want to re-upload the data with amplitude encoding a second time in the circuit, I have to undo U(theta) by applying its adjoint. But by doing so, I essentially return to the previous state, so the circuit gains nothing.
How did you implement the re-uploading with amplitude embedding, and did you see any improvement in your training? Thank you for the help.

I'm not really sure, as in my case I was not actually using a data re-uploading technique. I was combining amplitude embedding with QSVT, but I simplified my question to re-uploading.

From my understanding of data re-uploading, it's not necessary to apply the adjoint of U, is it? You can just reapply the amplitude embedding gates again without needing to revert the previous transformation.

From my knowledge, data re-uploading means uploading the same input data multiple times in the circuit. Let's start with two qubits in the state |00>. You can apply the amplitude embedding gate to upload a dataset X = {x0, x1, x2, x3}, so your transformation is U|00> = |psi> = x0|00> + x1|01> + x2|10> + x3|11>. Now the circuit holds the state |psi> encoding your input data X, and you can apply any kind of parametrized circuit to |psi>, say U1(theta). By doing so, you get a new state U1(theta)|psi> = |psi1(theta)>. After this, if you want to re-upload your data X, you cannot simply reapply the first gate U, because your state is now |psi1>. In other words, if you apply U to |psi1>, you won't get back the state |psi> = x0|00> + x1|01> + x2|10> + x3|11> used to encode the data. So, if you really want to re-upload the data, you need to undo U1 first. I don't know if it makes sense to re-upload the data with amplitude embedding.

I see your point, but maybe we just think about amplitude embedding differently. I see it as an operator U(x) that changes ∣0⟩ into the state we want, ∣ψ⟩. But if I apply it to a state ∣ϕ⟩, I wouldn't expect it to turn ∣ϕ⟩ into ∣ψ⟩. In re-uploading, the encoding can be any operator that represents the data, so I think the amplitude embedding matrix U(x) is fine.
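To make the two viewpoints concrete, here is a quick numerical sketch (NumPy only; U(x) is built as a Householder reflection, and the helper state_to_unitary is illustrative). It confirms both observations: reapplying U(x) mid-circuit does not recreate ∣ψ(x)⟩, but it is still a well-defined, data-dependent operation, which is all re-uploading needs:

import numpy as np

def state_to_unitary(psi):
    # Householder reflection U with U|0...0> = |psi>,
    # for real normalized psi (assumes psi != |0...0>).
    psi = np.asarray(psi, dtype=float)
    e0 = np.zeros_like(psi)
    e0[0] = 1.0
    w = psi - e0
    w = w / np.linalg.norm(w)
    return np.eye(len(psi)) - 2.0 * np.outer(w, w)

x = np.array([0.5, 0.5, 0.5, 0.5])  # normalized data to encode
U = state_to_unitary(x)

e0 = np.array([1.0, 0.0, 0.0, 0.0])
psi = U @ e0                         # first upload on |00>
print(np.allclose(psi, x))           # True: exactly the encoded state

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # stand-in for a trainable layer U1(theta)
phi = U @ (Q @ psi)                  # re-upload applied to U1|psi>

print(np.allclose(phi, x))                    # False: the re-upload does not recreate |psi> ...
print(np.isclose(np.linalg.norm(phi), 1.0))   # ... but it is still a valid, x-dependent state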
