Error "Broadcasting with MottonenStatePreparation is not supported." when using AmplitudeEmbedding after other operators

I'm trying to implement the data re-uploading technique using AmplitudeEmbedding:

import pennylane as qml
import torch

# dev, num_qubits and inputs_to_state are defined earlier in my notebook

# Define quantum circuit
@qml.qnode(dev, interface='torch')
def quantum_layer(inputs, weights):
    # Re-upload the encoded state before each entangling block
    state = inputs_to_state(inputs)
    for i in range(weights.shape[0]):
        qml.AmplitudeEmbedding(state, wires=range(num_qubits))
        qml.StronglyEntanglingLayers(weights=weights[i].unsqueeze(0), wires=range(num_qubits))
    return qml.expval(qml.PauliZ(0))

class QuantumModel(torch.nn.Module):
    def __init__(self, reps=1):
        super().__init__()
        self.qlayer = qml.qnn.TorchLayer(quantum_layer,
                                         weight_shapes={'weights': (reps, num_qubits, 3)},
                                         init_method={'weights': torch.nn.init.normal_})

    def forward(self, x):
        # Flatten the trailing (channel, height, width) dimensions into a feature vector
        x = torch.flatten(x, start_dim=-3)
        x = self.qlayer(x)
        # Rescale the expectation value from [-1, 1] to [0, 1]
        x = (x + 1) / 2
        return x

The error below occurs when I run it with reps greater than 1.
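For context, I call the model like this (the shapes are just an example, assuming num_qubits = 4 so that the 16 flattened features match the 2**4 amplitudes):

model = QuantumModel(reps=2)         # reps > 1 is what triggers the error
input_data = torch.rand(8, 1, 4, 4)  # example batch of 8 samples, flattened to 16 features each
output = model(input_data)           # raises the ValueError below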

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[197], line 1
----> 1 output = model(input_data)
      2 output

File \venv\lib\site-packages\torch\nn\modules\module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
   1551     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1552 else:
-> 1553     return self._call_impl(*args, **kwargs)

File \venv\lib\site-packages\torch\nn\modules\module.py:1562, in Module._call_impl(self, *args, **kwargs)
   1557 # If we don't have any hooks, we want to skip the rest of the logic in
   1558 # this function, and just call forward.
   1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1560         or _global_backward_pre_hooks or _global_backward_hooks
   1561         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562     return forward_call(*args, **kwargs)
   1564 try:
   1565     result = None

Cell In[194], line 21
     19 def forward(self, x):
     20     x = torch.flatten(x, start_dim=-3)
---> 21     x = self.qlayer(x)
...
    358     )
    360 a = qml.math.abs(state_vector)
    361 omega = qml.math.angle(state_vector)

ValueError: Broadcasting with MottonenStatePreparation is not supported. Please use the qml.transforms.broadcast_expand transform to use broadcasting with MottonenStatePreparation.

I also tried using qml.transforms.broadcast_expand, but I'm not sure if it's the correct way to do it, and it's very slow. Roughly, what I did was stack the transform on top of the QNode, like this:
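# My attempt: broadcast_expand splits the batched tape into one tape per sample,
# which I assume is why it gets so slow; the body is the same QNode as above.
@qml.transforms.broadcast_expand
@qml.qnode(dev, interface='torch')
def quantum_layer(inputs, weights):
    state = inputs_to_state(inputs)
    for i in range(weights.shape[0]):
        qml.AmplitudeEmbedding(state, wires=range(num_qubits))
        qml.StronglyEntanglingLayers(weights=weights[i].unsqueeze(0), wires=range(num_qubits))
    return qml.expval(qml.PauliZ(0))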
Thank you for the help.

Hi @Bank_Patamawisut ,

Welcome back to the Forum!

Could you please share a minimal reproducible example so that we can look for the root of the problem? A minimal reproducible example (or minimal working example) is the simplest version of the code that still reproduces the problem. It should be self-contained, including all necessary imports, data, functions, etc., so that we can copy-paste the code and run it. However, it shouldn't contain anything unnecessary, such as gates and functions that can be removed without changing the problem. The sketch below shows the general shape such an example usually takes.
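For instance (this is only an illustrative template with placeholder data, not your actual code):

# 1. All imports the code needs
import pennylane as qml
import torch

# 2. Minimal fixed setup: hard-coded sizes, no external data files
num_qubits = 2
dev = qml.device("default.qubit", wires=num_qubits)

# 3. The smallest circuit that still shows the behaviour you are asking about
@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AmplitudeEmbedding(inputs, wires=range(num_qubits), normalize=True)
    qml.StronglyEntanglingLayers(weights=weights, wires=range(num_qubits))
    return qml.expval(qml.PauliZ(0))

# 4. Hard-coded data and the single call that triggers the issue
weights = torch.rand(1, num_qubits, 3)
inputs = torch.rand(2**num_qubits)
print(circuit(inputs, weights))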

If you're not sure what this means, please make sure to check out this video.

Also please share the output of qml.about(). You can print it with:
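import pennylane as qml
qml.about()  # prints the PennyLane version, installed plugins, and platform details

Thanks!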