Cannot train model when getting the embedded state from Qiskit

I am training a hybrid MNIST classification model, and in some parts I would like to obtain a state from Qiskit and embed it into a PennyLane circuit.

def data_prepration(d):
    for i, j in enumerate(d):
        r = j
        temp = get_state_fromqiskit(r * np.pi)
        # temp = get_state_pennylane(r * np.pi)

        temp2 = torch.tensor(temp)
        qml.AmplitudeEmbedding(temp2, wires=i)

Where

def get_state_pennylane(x):
    dev = qml.device("default.qubit", wires=1)

    @qml.qnode(dev)
    def circuit(x):
        qml.RY(x, wires=0)
        return qml.state()

    statevector = circuit(x)

    return statevector

def get_state_fromqiskit(x):
    # requires: from qiskit import QuantumCircuit, Aer, execute
    qc = QuantumCircuit(1)
    x = float(x)
    qc.ry(x, 0)

    backend = Aer.get_backend('statevector_simulator')
    job = execute(qc, backend)
    result = job.result()

    statevector = result.get_statevector()
    statevector = np.array(statevector).tolist()
    return statevector

The code above for obtaining the state is just a simple example. When I use get_state_pennylane, training works normally; when I use get_state_fromqiskit, it does not. Could you answer my question? Thanks!
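For reference, both helpers should return the same numbers: applying RY(x) to |0> gives the amplitudes [cos(x/2), sin(x/2)]. A minimal NumPy sketch of the expected output (my own check, no Qiskit or PennyLane needed):

```python
import numpy as np

def expected_ry_state(x):
    """Statevector of RY(x)|0>, using RY(x) = [[cos(x/2), -sin(x/2)],
                                               [sin(x/2),  cos(x/2)]]."""
    return np.array([np.cos(x / 2.0), np.sin(x / 2.0)])

print(expected_ry_state(np.pi / 2))  # ≈ [0.7071, 0.7071]
print(expected_ry_state(0.0))        # [1., 0.]
```

So numerically the two helpers agree; whatever goes wrong must be in how the result interacts with training, not in the state values themselves.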
qml.about():
Platform info: macOS-13.3-arm64-arm-64bit
Python version: 3.10.13
Numpy version: 1.23.5
Scipy version: 1.11.3
Installed devices:

  • qiskit.aer (PennyLane-qiskit-0.34.0)
  • qiskit.basicaer (PennyLane-qiskit-0.34.0)
  • qiskit.ibmq (PennyLane-qiskit-0.34.0)
  • qiskit.ibmq.circuit_runner (PennyLane-qiskit-0.34.0)
  • qiskit.ibmq.sampler (PennyLane-qiskit-0.34.0)
  • qiskit.remote (PennyLane-qiskit-0.34.0)
  • lightning.qubit (PennyLane-Lightning-0.34.0)
  • default.gaussian (PennyLane-0.34.0)
  • default.mixed (PennyLane-0.34.0)
  • default.qubit (PennyLane-0.34.0)
  • default.qubit.autograd (PennyLane-0.34.0)
  • default.qubit.jax (PennyLane-0.34.0)
  • default.qubit.legacy (PennyLane-0.34.0)
  • default.qubit.tf (PennyLane-0.34.0)
  • default.qubit.torch (PennyLane-0.34.0)
  • default.qutrit (PennyLane-0.34.0)
  • null.qubit (PennyLane-0.34.0)

Hey @Tianaaa, welcome to the forum! :sun_with_face:

If I understand your question correctly, I think you’re after qml.from_qiskit: qml.from_qiskit — PennyLane 0.34.0 documentation. It allows you to take Qiskit circuits and turn them into usable elements in PennyLane :slight_smile:. We also have some more PennyLane-Qiskit features and improvements coming with the next release (next week)!

Let me know if this helps :slight_smile:

Hi @isaacdevlugt, thanks for your reply! :smile:
I have tried qml.from_qiskit. Here is my code:

def get_state_fromqiskit(x):
    qc = QuantumCircuit(1)
    x = float(x)
    qc.ry(x, [0])
    my_circuit = qml.from_qiskit(qc)
    dev = qml.device('default.qubit', wires=1)
    @qml.qnode(dev)
    def circuit():
        my_circuit(wires=[0])
        return qml.state()
    return circuit()

But the model still can’t be trained. The demo I modified is based on hybrid-quantum-classifier-in-pennylane-for-MNIST-dataset-based-on-inverse-MERA/pytorch_pennylane_hybrid_binary_classifier.ipynb at main · amirm343/hybrid-quantum-classifier-in-pennylane-for-MNIST-dataset-based-on-inverse-MERA · GitHub. I really have no idea where the problem is.

By the way, the use of RY in the code I presented above is just a simple example. My actual requirement is to measure several entangled qubits, obtain the collapsed state of the remaining one, and embed it in the circuit.
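To illustrate what I mean by the collapsed state, here is a rough NumPy sketch of my own (assuming PennyLane’s convention that wire 0 is the most significant index of the statevector): measuring wire 0 of a two-qubit state and observing outcome m leaves wire 1 in the matching half of the amplitude vector, renormalized.

```python
import numpy as np

def collapse_first_qubit(psi, outcome):
    """Given a 2-qubit statevector psi (wire 0 as the most significant
    bit), return the normalized state of wire 1 after measuring wire 0
    and observing `outcome` (0 or 1)."""
    psi = np.asarray(psi, dtype=complex)
    block = psi[2 * outcome : 2 * outcome + 2]  # amplitudes where q0 == outcome
    norm = np.linalg.norm(block)
    if norm == 0:
        raise ValueError("outcome has zero probability")
    return block / norm

# Example: a non-maximally entangled state a|00> + b|11>
a, b = 0.6, 0.8
psi = np.array([a, 0.0, 0.0, b])
print(collapse_first_qubit(psi, 1))  # wire 1 collapses to |1>
```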

Thanks! Let’s try to stick with the simple example that you wrote here and work our way towards your application :slight_smile:

Are you trying to modify x? Like that’s the variable you want to train and that’s it?

Thanks for your reply! :smiley:
The complete QNode (qlayer) in the initial demo concatenates two parts:

@qml.qnode(dev)
def inverce_MERA_layer(inputs, mera_parameters):

    data_prepration(inputs)

    inverce_MERA(mera_parameters)

    return qml.expval(qml.PauliZ(0))

inverce_MERA is a VQC which contains trainable variables, and data_prepration is just a data-embedding circuit (the initial demo used AngleEmbedding):

# state preparation template
def data_prepration(d):
    for i, j in enumerate(d):
        r = j / 510
        qml.RY(r * np.pi, wires=i)

    qml.Barrier(wires=(len(d) - 1, 0))

I just want to modify this: not directly using qml.RY to embed the data, but using Qiskit or PennyLane-Qiskit to obtain the embedded state of the RY() data and setting it as the initial state of the circuit with qml.AmplitudeEmbedding or qml.QubitStateVector.
But despite all my efforts, I couldn’t do it successfully. No matter what I do, as long as I don’t use PennyLane (get_state_pennylane in my first post) to obtain this state, the loss won’t decrease :disappointed_relieved:

Hey! I’m confused :sweat_smile:. I think I just need to see your code, try and run it, replicate the behaviour, etc. If you could attach something minimal here that still reproduces what you’re seeing, that would be lovely :pray:! Sorry this is taking a while :sweat:

Sorry for replying so late, and sorry again that I couldn’t provide a minimal example, but you can see this.
What I want to do is simply to avoid using PennyLane’s RY embedding directly, and instead obtain the state of the equivalent RY embedding using Qiskit, then use PennyLane’s AmplitudeEmbedding to set it as the initial state.
I think these two methods are the same, but the latter’s loss does not decrease.

Hey @Tianaaa,

It’s still not clear to me what you’re trying to accomplish :sweat_smile:. Maybe some of our new features will help?

It sounds like you’re trying to take a state from Qiskit, embed it into a PennyLane circuit as the initial state, and then train something.

This function should work for the first part (qiskit state → PennyLane initial state):

def get_state_fromqiskit(x):
    qc = QuantumCircuit(1)
    x = float(x)
    qc.ry(x, [0])
    my_circuit = qml.from_qiskit(qc)
    dev = qml.device('default.qubit', wires=1)
    @qml.qnode(dev)
    def circuit():
        my_circuit(wires=[0])
        return qml.state()
    return circuit()

But the latter’s loss does not decrease.

What are you trying to train? Can we work with a toy example?
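In the meantime, one thing worth checking (an assumption on my part, not a confirmed diagnosis): whether your trainable parameter is still attached to the autograd graph. Calling `float(x)` on a tensor, as get_state_fromqiskit does before handing the angle to Qiskit, produces a plain Python number, so the state built from it is a constant that gradients cannot flow through. A minimal PyTorch sketch of the difference:

```python
import numpy as np
import torch

theta = torch.tensor(0.3, requires_grad=True)

# Differentiable path: amplitudes built from torch ops keep the graph.
state = torch.stack([torch.cos(theta / 2), torch.sin(theta / 2)])
loss = state[1] ** 2          # probability of |1>, i.e. sin^2(theta/2)
loss.backward()
print(theta.grad)             # 0.5*sin(theta): gradient flows back to theta

# Detached path: float() yields a plain number, so any state computed
# from it (e.g. inside a Qiskit simulator) is a constant w.r.t. theta.
x = float(theta)
detached = torch.tensor([np.cos(x / 2), np.sin(x / 2)])
print(detached.requires_grad)  # False: no gradient can reach theta
```

If your loss sees only the detached tensor, the optimizer has nothing to update, which would match a loss that never decreases.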