Every optimization step revisits the qiskit backend

I am trying to rotate a qubit using an IBMQ device, but the calculation is very slow: about 3 minutes per step. I checked my IBM Q Experience results, and it seems that every time the optimizer performs a step, the QNode must revisit the device.

I am curious how to speed up the calculation and reduce the waiting time for the ibm device.

import pennylane as qml
from pennylane import numpy as np
from qiskit import IBMQ
from qiskit.providers.ibmq import least_busy

provider = IBMQ.load_account()

print("\n(IBMQ Backends)")
for backend in provider.backends():
    print(backend)

try:
    least_busy_device = least_busy(provider.backends(simulator=False))
except Exception:
    print("All devices are currently unavailable.")

lbd = str(least_busy_device)
print("Running on current least busy device: ", lbd)

dev1 = qml.device("qiskit.ibmq", wires=1, backend=lbd)

@qml.qnode(dev1)
def circuit(params):
    # rotate the qubit
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

def cost(var):
    return circuit(var)

init_params = np.array([0.011,0.012])

opt = qml.GradientDescentOptimizer(stepsize=0.4)
steps = 100
params = init_params
for i in range(steps):
    params = opt.step(cost, params)
    if (i + 1) % 5 == 0:
        print("Cost after step {:5d}: {:.7f}".format(i + 1, cost(params)))

print("Optimized rotation angles: {}".format(params))

I deleted my API_TOKEN from the code.
The results are shown in IBM Q Experience:

Time spent on one step.

In the end the qubit rotation was not completed because it took too long and did not seem to return updated values.
Is my way of using the device wrong?
Please help.

Hi @Kyleyip,

As you’ve noticed, using IBM’s hardware over the cloud means you will be sitting in the queue for every circuit evaluation. This is a constraint of how IBM’s hardware access is set up. If you have access to Rigetti’s hardware, you can reserve dedicated time on the QPU (no queue).

One common workflow is to first train on a simulator, then once the model is trained, use the hardware to evaluate the final real-world performance. It’s usually just a one-line switch in your code to go between HW and simulator.


Hi @nathan ,
Thank you for your suggestions; I will try that workflow.
I am wondering how to properly compare the performance of QML and classical machine learning, for example when classifying the MNIST dataset. I care more about how to make the comparison than about the comparison results.

Hi @Kyleyip

Are you asking about what steps to go through to compare, or about possible figures of merit you might use to compare the two approaches?

The steps are pretty simple: treat your ML algorithm as a black box, and just count the resources or performance of either the QML or classical ML models.

In terms of figures of merit, you could look at a number of things:

  1. training performance (cost function value on training data)
  2. generalization performance (cost function value on test data)
  3. number of steps needed for convergence
  4. number of resources needed in the model to achieve the same level of performance
  5. “wall time” to train model
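As a sketch of the black-box approach, here is a hypothetical benchmark helper that records a few of these figures of merit for any model exposing `fit`/`predict` methods; the toy nearest-centroid classifier and synthetic data are placeholders for your QML or classical model:

```python
import time
import numpy as np

class NearestCentroid:
    """Toy stand-in model: classify by distance to class centroids."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # distance from each sample to each centroid, pick the nearest
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def benchmark(model, X_train, y_train, X_test, y_test):
    """Record figures of merit for a black-box model."""
    start = time.perf_counter()
    model.fit(X_train, y_train)
    wall_time = time.perf_counter() - start  # "wall time" to train
    return {
        "train_accuracy": float(np.mean(model.predict(X_train) == y_train)),
        "test_accuracy": float(np.mean(model.predict(X_test) == y_test)),
        "wall_time_s": wall_time,
    }

# toy data: two well-separated blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(1.0, 0.1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
result = benchmark(NearestCentroid(), X[::2], y[::2], X[1::2], y[1::2])
print(result)
```

Because the model is treated as a black box, the same `benchmark` call works unchanged whether the model is a classical classifier or a QML circuit wrapped in a `fit`/`predict` interface.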