Unable to use "qiskit.aer"

Since default.mixed consumes too much memory, I want to introduce noise using Qiskit instead. However, after changing default.qubit to qiskit.aer, the circuit fails to run and raises the error below. Both default.qubit and default.mixed run properly. The input data is a Tensor output from an nn.Linear layer, with x.shape = [-1, 32].

# code here
import pennylane as qml
import torch.nn.functional as F


def encode(inputs, n_qubits, embed_type="amplitude"):
    # n_qubits is a (start, end) wire range
    start, end = n_qubits
    # select the embedding
    if embed_type == "amplitude":
        qml.templates.AmplitudeEmbedding(inputs, wires=range(start, end))
    elif embed_type == "angle":
        qml.templates.AngleEmbedding(inputs, wires=range(start, end))


def qnn(n_qubits, layer):
    # dev = qml.device("default.mixed", wires=n_qubits + 1)
    dev = qml.device("qiskit.aer", wires=n_qubits + 1)
    # dev = qml.device("default.qubit", wires=n_qubits + 1)

    def _qcnn11(weights, inputs):
        inputs = F.normalize(inputs, p=2, dim=1)
        encode(inputs, (0, 6), embed_type="amplitude")  # wires 0, 1, ..., 5

        ansatz(weights, layer)  # ansatz is defined elsewhere in the full code

        results = []
        for i in range(n_qubits + 1):
            results.append(qml.expval(qml.PauliX(i)))
        return results

    qlayer = qml.QNode(_qcnn11, dev, interface="torch")
    weight_shapes = {"weights": 50 * 1}

    result = qml.qnn.TorchLayer(qlayer, weight_shapes)
    return result

Error message below:

# error message here
/pennylane/templates/state_preparations/mottonen.py", line 354, in compute_decomposition
    raise ValueError(
ValueError: Broadcasting with MottonenStatePreparation is not supported. Please use the qml.transforms.broadcast_expand transform to use broadcasting with MottonenStatePreparation.

The output of qml.about():

Python version:          3.10.14
Numpy version:           1.26.0
Scipy version:           1.14.0
Installed devices:
- lightning.gpu (PennyLane_Lightning_GPU-0.37.0)
- lightning.qubit (PennyLane_Lightning-0.38.0)
- qiskit.aer (PennyLane-qiskit-0.40.1)
- qiskit.basicaer (PennyLane-qiskit-0.40.1)
- qiskit.basicsim (PennyLane-qiskit-0.40.1)
- qiskit.remote (PennyLane-qiskit-0.40.1)
- default.clifford (PennyLane-0.38.0)
- default.gaussian (PennyLane-0.38.0)
- default.mixed (PennyLane-0.38.0)
- default.qubit (PennyLane-0.38.0)
- default.qubit.autograd (PennyLane-0.38.0)
- default.qubit.jax (PennyLane-0.38.0)
- default.qubit.legacy (PennyLane-0.38.0)
- default.qubit.tf (PennyLane-0.38.0)
- default.qubit.torch (PennyLane-0.38.0)
- default.qutrit (PennyLane-0.38.0)
- default.qutrit.mixed (PennyLane-0.38.0)
- default.tensor (PennyLane-0.38.0)
- null.qubit (PennyLane-0.38.0)

Hi @SHAN ,

There may be a few things going on here.

  1. On one hand, it looks like you have a mix of versions across your installed devices. I’m not sure how that happened, but the safest way to proceed is to create a new virtual environment and run pip install pennylane pennylane-qiskit there.
    Note that we will release a new version of PennyLane on Monday or Tuesday, so you may want to wait until Wednesday to update so that you have the newest version installed.

  2. Once you have this, you can re-test your code. If the problem persists, try default.qubit with diff_method="parameter-shift" (there is a rough sketch of this after point 3). If that test also fails, it means the code is not “hardware-compatible”, and you may need to change the way you’re handling your inputs or the way you’re using the embeddings (see point 3). Let us know if you need help setting this up.

  3. Your issue seems related to the one discussed in this other thread. As mentioned in post #6 there, using qml.transforms.broadcast_expand might help you; see the sketch below.
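
To make points 2 and 3 concrete, here is a rough, untested sketch. It reuses the _qcnn11 function, dev, and n_qubits from your qnn() code, and the variable names (dev_test, qnode_aer, etc.) are just placeholders, so treat it as a starting point rather than a drop-in fix:

# Untested sketch, assuming _qcnn11, dev, and n_qubits from your qnn() function.

# Option A (point 2): sanity-check hardware compatibility on default.qubit
# using the parameter-shift rule instead of backprop.
dev_test = qml.device("default.qubit", wires=n_qubits + 1)
qnode_test = qml.QNode(_qcnn11, dev_test, interface="torch", diff_method="parameter-shift")

# Option B (point 3): expand the batched (broadcast) input into separate
# circuit executions so MottonenStatePreparation never sees a batch dimension.
qnode_aer = qml.QNode(_qcnn11, dev, interface="torch")
qnode_expanded = qml.transforms.broadcast_expand(qnode_aer)

qlayer = qml.qnn.TorchLayer(qnode_expanded, {"weights": 50 * 1})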

Why don’t you give it a try and let us know if this works for you?

I hope this helps!

Hi @CatalinaAlbornoz

I’ve done a test, and the result is not very promising. I think I should wait for an update and give it another try.

Thanks a lot!

Let us know how it goes @SHAN !