Quantum transfer learning code (Mari et al., 2019) - IBMQDevice endless execution

I am curious to understand why the IBMQDevice keeps executing the code endlessly.

I am referring to the quantum transfer learning code (ants vs. bees classification) from Mari et al., 2019 (paper: https://arxiv.org/pdf/1912.08278.pdf, tutorial code: https://pennylane.ai/qml/app/tutorial_quantum_transfer_learning.html).

Even though I specify num_epochs = 1 and shots=1 for the 'ibmqx2' device, the execution seems endless on the quantum hardware.

In addition, I am unable to track the training progress on my system (this only worked with the simulator).

Am I missing some code segment to get the results back from the quantum hardware?

At present I have only replaced the 'default.qubit' device with the actual IBM machine, i.e.

From: p_device = qml.device("default.qubit", wires=n_qubits)

To: p_device = IBMQDevice(wires=n_qubits, backend="ibmqx2", shots=1)
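Spelled out in full, my change looks roughly like this (the account-loading comment just reflects my local setup; an equivalent form would be qml.device("qiskit.ibmq", ...)):

import pennylane as qml
from pennylane_qiskit import IBMQDevice  # PennyLane-Qiskit plugin

n_qubits = 4  # as in the tutorial

# Simulator device from the tutorial:
# p_device = qml.device("default.qubit", wires=n_qubits)

# My replacement targeting the real machine (my IBMQ account/token is already
# loaded, e.g. via qiskit's IBMQ.load_account()):
p_device = IBMQDevice(wires=n_qubits, backend="ibmqx2", shots=1)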

I am a novice in quantum computing, so please pardon my query. I assumed 1 shot = 1 run on the quantum hardware.

On checking my IBMQ account, it displays the number of shots = 1 correctly, yet the job seems to execute many runs, i.e. I could see 415 results (with status: COMPLETED) displayed for that particular job.

Does that have any connection with the stability of the actual quantum device? Or are the learning rate and its decay responsible for this?

Further details:

The following piece of code seems to execute endlessly:

model_hybrid = train_model(
model_hybrid, loss_function, optimizer_hybrid, exp_lr_scheduler, num_epochs=num_epochs
)

Could you please help me understand the same?

Thank you.

Hi @angelinaG, thanks for your post!
Using a real device can take a long time, especially if the device is busy (long queues of IBM users), so what you are reporting is probably normal. However, I would like to give you some tips which could be useful:

  1. In our paper we first trained the model with a simulator and then only executed it (with fixed parameters) on a real device. This is much easier than training. This is the code that we used: https://github.com/XanaduAI/quantum-transfer-learning/tree/master/quantum_processors
  2. You could first try to use the IBM cloud simulator, just to check that everything runs smoothly with your settings of the IBM plugin. If I remember correctly, this can be done by simply replacing the keyword ibmqx2 with qasm_simulator (see the sketch after this list).
  3. I think it is normal that you see many jobs even if shots=1. Even in that case, the number of jobs is at least equal to the number of expectation values that you need to compute. If you classify many input images or train many parameters, you need to evaluate many expectation values, so this looks normal (a rough estimate is included in the sketch below).
  4. You linked the transfer learning tutorial; however, if you are interested in reproducing the results of the paper, you may find the actual quantum transfer learning repository more useful: https://github.com/XanaduAI/quantum-transfer-learning
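To make points 2 and 3 a bit more concrete, here is a rough sketch (the backend string and the job-count numbers are only assumptions on my part; your provider may list the simulator under a slightly different name):

import pennylane as qml

n_qubits = 4

# Point 2: same PennyLane-Qiskit plugin, but targeting IBM's cloud simulator
# instead of real hardware (it may appear as "ibmq_qasm_simulator" on your account).
dev = qml.device("qiskit.ibmq", wires=n_qubits, backend="ibmq_qasm_simulator", shots=1024)

# Point 3: back-of-the-envelope estimate of how many circuit evaluations one
# training epoch generates (the numbers below are rough assumptions, not exact values).
train_images = 244            # approximate size of the ants/bees training set
expvals_per_image = n_qubits  # one expectation value per qubit in the quantum layer
print(train_images * expvals_per_image)  # roughly a thousand circuit evaluations per epoch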

Thank you so much Dr. Andrea Mari for this explanation.
The link https://github.com/XanaduAI/quantum-transfer-learning/tree/master/quantum_processors is what I was looking for.
I shall try that out.
Also, I missed mentioning that I was able to reproduce your results using the simulator on PennyLane.
Since I was trying to train the model directly on the quantum hardware, it looked like an endless execution. I understand that I can instead use the saved weights obtained by training on the simulator, which will save time.
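As a rough sketch of that plan (the file name is just an example on my side):

import torch

# After training on the simulator, save the hybrid model's parameters:
torch.save(model_hybrid.state_dict(), "model_hybrid_simulator.pt")

# Later, rebuild the same hybrid model with the IBMQ device and load the saved
# weights, so no training loop has to run on the hardware:
model_hybrid.load_state_dict(torch.load("model_hybrid_simulator.pt"))
model_hybrid.eval()  # inference only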

I also wish to acknowledge Dr. Maria Schuld for her timely inputs and direction to reach out to you! I shall keep you posted with the results of the experiment.

Thank you once again!

I was able to successfully execute the code on the IBMQ machine.
