IBMQJobManager to calculate gradients?

I’m currently trying to get a QML model trained on IBMQ in a reasonable time.

I’ve just discovered that IBMQ has an IBMQJobManager, and that I could submit anywhere between 75 and 900 circuits in a single job (depending on the hardware).

I believe this would be straightforward to implement in the forward pass, but I’m not sure how to utilize it when calculating gradients (where most of the speed-up would come from!). Can anyone point me to where in the PennyLane library I should look to implement something like this? (e.g., where are executions/jobs called when calculating gradients?)

Perhaps PennyLane already submits multiple circuits per job, but I couldn’t find this documented anywhere.


Hey @Jerry2001Qu, that is an interesting idea! In the development branch of PennyLane, we have just added a batch_execute() pipeline, which gives the potential to run multiple circuits in parallel/asynchronously and is used during gradient calculations. However, this approach needs a supporting method added to the corresponding device. It is not yet available for the PennyLane-Qiskit plugin devices, but is something we’ll look to add going forward.
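To make the idea concrete, here is a hedged, stdlib-only sketch of what a device-side `batch_execute()` could do: split the incoming circuits into hardware-sized jobs and collect the results in order. The `run_job` callable and the `max_experiments=900` limit are illustrative stand-ins, not PennyLane or Qiskit API; a real plugin implementation would submit each chunk through the backend’s job manager.

```python
# Hypothetical sketch: chunking circuits into hardware-sized jobs, the core
# idea behind a device-side batch_execute(). `run_job` is a stub standing in
# for a real backend submission.

def chunk(circuits, max_experiments):
    """Split a list of circuits into job-sized batches."""
    return [
        circuits[i : i + max_experiments]
        for i in range(0, len(circuits), max_experiments)
    ]

def batch_execute(circuits, run_job, max_experiments=900):
    """Submit all circuits in as few jobs as possible and return
    one result per circuit, preserving the input order."""
    results = []
    for job in chunk(circuits, max_experiments):
        # one hardware job per chunk of up to max_experiments circuits
        results.extend(run_job(job))
    return results

# Toy usage with a stub "backend" that just transforms circuit labels:
circuits = [f"circuit_{i}" for i in range(2000)]
out = batch_execute(circuits, run_job=lambda batch: [c.upper() for c in batch])
assert len(out) == 2000                # every circuit gets a result
assert len(chunk(circuits, 900)) == 3  # 900 + 900 + 200 -> three jobs
```

The ordering guarantee matters: gradient code downstream assumes result `i` corresponds to circuit `i`, so the chunking must not reorder anything.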

If you’re interested in contributing, this could be a nice addition!



I’ve gotten batch_execute() working with IBMQDevice (although it’s spaghetti code currently!). I can now execute 900 circuits per job in the backward pass.

Do you know if there’s a way to run multiple circuits in parallel during the forward pass? When I call the q_net

```python
@qml.qnode(dev, interface='torch')
def q_net(q_in, q_weights_flat):
    ...
```

I’ve been sending in one element (q_in) at a time. Is there support for sending in batches, which would route executions to batch_execute instead of execute?

That’s great @Jerry2001Qu! Feel free to turn this into a PR in the PL-Qiskit repo and we’d be happy to take a look.

> Do you know if there’s a way to run multiple circuits in parallel during the forward pass? When I call the q_net

Currently there is no clear route for batching during the forward pass. This is something we want to think carefully about from a design perspective. We have a few ideas in mind, but they are still awaiting implementation; for now we decided to go with batch_execute() for gradients, since that is typically where the greatest impact lies.
