PennyLane-Qiskit Number of Jobs?

Hello!

I’ve been using PennyLane to develop hybrid TensorFlow models. Everything works great when I run it locally, but I run into issues when I try to set my device to the ‘ibmq_qasm_simulator’.

On the IBM Quantum Dashboard, I can see jobs coming in and being completed, and my software runs perfectly. However, I was wondering if there is a way I could approximate the number of jobs / amount of time my code will take to run on their simulator.

Something like an ETA for completing model.fit would be nice.

Thanks, and feel free to ask me any questions!

import pennylane as qml

n_qubits = 2
dev = qml.device("lightning.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

is my QNode, run locally with lightning.qubit, wires=2.

n_layers = 1
weight_shapes = {"weights": (n_layers, n_qubits)}
# qnode is already a QNode (it was defined with the @qml.qnode decorator),
# so it can be passed to KerasLayer directly
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits, dtype='float64')

import tensorflow as tf

clayer_1 = tf.keras.layers.Dense(50, input_shape=(43,), activation='relu')
clayer_3 = tf.keras.layers.Dense(2, activation='relu')
clayer_4 = tf.keras.layers.Dense(1)
model = tf.keras.models.Sequential([clayer_1, clayer_3, qlayer, clayer_4])
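
For context, this is roughly how I’m training it (the optimizer, loss, and epochs below are illustrative placeholders, and X_train/y_train stand in for my data):

model.compile(optimizer='adam', loss='mse')             # placeholder optimizer/loss
model.fit(X_train, y_train, epochs=5, batch_size=128)   # batch_size matches my local runs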

It’s not an ideal network for my use case, but I’m trying to familiarize myself with the tools before I start going deeper into building better / more advanced networks.

It takes about 20 min/epoch locally, with batch_size = 128. Is there any way I can use this to figure out how many jobs it will take to run on the ibmq_qasm_simulator backend?

I’m still in high school, so apologies if I have misunderstood any of the techniques.

After trying to run the code overnight, I ran out of available jobs on the ibmq simulator after 40k shots. Is there any way I can set the number of shots/jobs to prevent it from going over my limit?

Hi @Raghav_Ramki, welcome to the Forum!

I’m sorry that you ran out of shots. You may be able to contact someone at IBM through the Qiskit Slack to see if they can give you some extra shots.

It seems that the problem you’re trying to run is very big. You can in fact set the number of shots when you define your device, using something like the following code:

b = "ibmq_qasm_simulator"

dev = qml.device(
    "qiskit.ibmq",
    wires=2,
    backend=b,
    shots = 1
)

Notice that this will use one shot every time you run your circuit. Since you will run your circuit many times, this will still add up to many shots. The exact number will depend on your dataset, how you encode the information in the circuit, etc.

Batches are not working well with our PennyLane-Qiskit plugin at the moment; this is on our radar to fix. If you want to learn more about this, you can take a look at this forum post about TorchLayer where someone reported the problem.

So if your dataset has about 200 datapoints, then you will need about 200 circuit executions per epoch, and about 200 shots per epoch if you have set shots=1 and you’re using one datapoint at a time.
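
If it helps, here is a rough back-of-envelope estimate (a minimal sketch; seconds_per_job is a hypothetical number you would measure from a few test jobs, and this ignores any extra circuit executions used for gradients):

# Rough per-epoch estimate, assuming one circuit execution per datapoint
# (no batching) and shots=1 on the device
n_datapoints = 200        # from the example above
shots_per_execution = 1   # the shots you set on the device
seconds_per_job = 5       # hypothetical; measure from a few test jobs

executions_per_epoch = n_datapoints
shots_per_epoch = executions_per_epoch * shots_per_execution
eta_seconds_per_epoch = executions_per_epoch * seconds_per_job
print(executions_per_epoch, shots_per_epoch, eta_seconds_per_epoch)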

Please let me know if this helps or if you have any additional questions.

Awesome! This is working great for me.

Hello!

I ran into a few more problems while trying this solution.
I am using 1024 shots per execution, and I am only training over 36 samples.

When run on a local simulator, I go from sample 1/36 to sample 36/36 in ~30 minutes.
I assumed that when run on an actual piece of quantum hardware, it would take only 36 jobs, per your previous answer.

Instead, my model is going well above that. Is this due to my large batch size? Or because it is part of a larger neural network that I assumed was being trained locally?

Hi @Raghav_Ramki ,

Yes, unfortunately batching circuits on IBM hardware doesn’t work well. If you have classical processing (e.g. a classical neural network), this is done locally, unless you’re using Qiskit Runtime.

My suggestion would be to reduce the size of your dataset and not use batches if you want to run on IBM hardware. Please let me know if this works for you!
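
For example, something like this (a minimal sketch; x_train and y_train are placeholders for whatever arrays you’re training on, and epochs=1 is just illustrative):

# Train on a small subset, one datapoint at a time (no batching),
# so the number of jobs/shots stays predictable
x_small = x_train[:36]
y_small = y_train[:36]
model.fit(x_small, y_small, batch_size=1, epochs=1)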