Gradient descent creates job too large for hardware

Hello! I’m testing a hybrid quantum-classical classifier that uses a parameterized circuit on IBMQ hardware. I was under the impression that my code was only sending small circuits to IBM, but when I use the PennyLane Adam optimizer with my cost function (set up to send one circuit at a time), it sends one large job of 3600 circuits. Is there a way I can split this job up without having to modify the PennyLane source code?

My error message:

Traceback (most recent call last):
File "/home/plbrown5/", line 450, in
params, _, _, w, _ = opt.step(cost, params, Xbatch, ybatch, w, state_labels)
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/pennylane/optimize/", line 129, in step
g, _ = self.compute_grad(objective_fn, args, kwargs, grad_fn=grad_fn)
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/pennylane/optimize/", line 158, in compute_grad
grad = g(*args, **kwargs)
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/pennylane/", line 113, in call
grad_value, ans = grad_fn(*args, **kwargs)
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/autograd/", line 20, in nary_f
return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/pennylane/", line 139, in _grad_with_forward
grad_value = vjp(vspace(ans).ones())
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/autograd/", line 14, in vjp
def vjp(g): return backward_pass(g, end_node)
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/autograd/", line 21, in backward_pass
ingrads = node.vjp(outgrad[0])
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/autograd/", line 67, in
return lambda g: (vjp(g),)
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/pennylane/interfaces/batch/", line 196, in grad_fn
vjps = processing_fn(execute_fn(vjp_tapes)[0])
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/pennylane/interfaces/batch/", line 173, in wrapper
res = fn(execution_tapes.values(), **kwargs)
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/pennylane/interfaces/batch/", line 125, in fn
return original_fn(tapes, **kwargs)
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/", line 79, in inner
return func(*args, **kwds)
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/pennylane_qiskit/", line 78, in batch_execute
res = super().batch_execute(circuits)
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/pennylane_qiskit/", line 429, in batch_execute
result = self._current_job.result()
File "/home/plbrown5/.conda/envs/qml/lib/python3.9/site-packages/qiskit/providers/ibmq/job/", line 290, in result
raise IBMQJobFailureError(
qiskit.providers.ibmq.job.exceptions.IBMQJobFailureError: 'Unable to retrieve result for job 6260fedfd0d73f7cc6baef08. Job has failed: The number of experiments in the Qobj (3600) is higher than the number of experiments supported by the device (100). Error code: 1102.'

Hi @Payden_Brown! Welcome :slight_smile: It is hard to tell without seeing your code, but this could be due to the number of parameters in your circuit.

For example, when using the parameter-shift rule to compute quantum gradients, 2N circuits are generated, where N is the number of parameters in your circuit.
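To see where the 2N scaling comes from: the parameter-shift rule evaluates the circuit twice per trainable parameter, once shifted by +π/2 and once by −π/2. Here is a minimal classical sketch of that rule (the function `expval` is just a stand-in for a hardware expectation value, and the evaluation counter is illustrative):

```python
import numpy as np

evals = 0  # counts "circuit executions"

def expval(thetas):
    """Stand-in for a quantum expectation value: product of cos(theta_i)."""
    global evals
    evals += 1
    return np.prod(np.cos(thetas))

def parameter_shift_grad(f, thetas, s=np.pi / 2):
    """Parameter-shift gradient: two evaluations per parameter (2N total)."""
    grad = np.zeros_like(thetas)
    for i in range(len(thetas)):
        shift = np.zeros_like(thetas)
        shift[i] = s
        grad[i] = (f(thetas + shift) - f(thetas - shift)) / 2
    return grad

thetas = np.array([0.1, 0.5, 0.9])
g = parameter_shift_grad(expval, thetas)
print(evals)  # 2 * len(thetas) = 6 evaluations
```

For sinusoidal cost functions like this one the shift rule is exact, not a finite-difference approximation, which is why it is the default gradient method on hardware.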

Would you be able to share a minimal version of your code that produces this error?

Thank you for your response! Sorry my code isn’t showing up very nicely. But yeah, that makes sense: my dataset samples have 27 features, and the way my code is set up there should be 3600 parameters to optimize, which matches the number of circuits in the error message.


def qcircuit(params, x, y, w):
    for p in range(len(params)):
        x = np.multiply(x, w[p])
        i = 0
        while i < len(x):
            qml.Rot(*x[i:i+3], wires=0)
            i += 3

        qml.Rot(*params[p], wires=0)

    return qml.expval(qml.Hermitian(y, wires=[0]))

def cost(params, x, y, w, state_labels=None):
    loss = 0.0
    dm_labels = [density_matrix(s) for s in state_labels]
    for i in range(len(x)):
        f = qcircuit(params, x[i], dm_labels[y[i]], w)
        loss = loss + (1 - f) ** 2

    return loss / len(x)

def iterate_minibatches(inputs, targets, batch_size):
    for start_idx in range(0, inputs.shape[0] - batch_size + 1, batch_size):
        idxs = slice(start_idx, start_idx + batch_size)
        yield inputs[idxs], targets[idxs]

num_layers = 12
learning_rate = 0.02
epochs = 30
batch_size = 50

opt = AdamOptimizer(learning_rate, beta1=0.9, beta2=0.999)

label_0 = [[1], [0]]
label_1 = [[0], [1]]
state_labels = np.array([label_0, label_1], requires_grad=False)

w = np.array(np.random.uniform(size=(num_layers, len(X[0])), requires_grad=True))
params = np.array(np.random.uniform(size=(num_layers, 3), requires_grad=True))

for it in range(epochs):
    for Xbatch, ybatch in iterate_minibatches(X_train, y_train, batch_size=batch_size):
        params, _, _, w, _ = opt.step(cost, params, Xbatch, ybatch, w, state_labels)

Hi @Payden_Brown, unfortunately the only two solutions that I can see to your problem are:

1 - Reduce the number of parameters
2 - Use a simulator instead of actual hardware

Option 1 will require that you significantly reduce the number of features, or that you change your ansatz (or both).

Option 2 might be a good idea if you don’t specifically need to run on hardware. One of our fastest simulators is lightning.qubit, and with it you can use differentiation methods that are faster than parameter-shift, such as backpropagation or the adjoint method. You can select one in the definition of your QNode like this: @qml.qnode(dev, diff_method='adjoint') or @qml.qnode(dev, diff_method='backprop').

The fastest combination should be lightning.qubit and adjoint, but I recommend that you go through the demo on backpropagation and the demo on the adjoint method to get a better idea of how they work.

I hope this was helpful. Please let me know if you have any further questions.