Parallelization issues

Hi,
I am trying to test an algorithm of mine using random sets of parameters. I now want to run this procedure with various noise models, and hence looked into using the PennyLane-Qiskit plugin. The procedure is embarrassingly parallel: I essentially perform the entire procedure multiple times with different initial parameter values. I have already run many such simulations in parallel using the built-in PennyLane devices such as default.qubit and default.torch. However, when switching to the Qiskit plugin I get errors. I am wondering if there are any workarounds, since running these simulations sequentially is significantly slower than running them in parallel.

I have attached a very small example that reproduces the issue.

import pennylane as qml
from pennylane import numpy as np

from joblib import Parallel, delayed

parameters = np.array([[1.4, 3.6], [5.6, 1.6]])

def circuit(parameters, i):
    param = parameters[i]
    qml.RX(param[0], wires=1)
    qml.RY(param[1], wires=0)
    return qml.sample(op=None, wires=[0, 1])

dev = qml.device('qiskit.aer', wires=2, shots=10000)

samplecircuit = qml.QNode(circuit, dev)

# Evaluate the QNode for each parameter set in parallel
results = Parallel(n_jobs=2)(delayed(samplecircuit)(parameters, i) for i in range(2))

Changing the device to default.qubit runs the circuit as expected without any issues. The error message that I receive is the following:

joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py", line 407, in _process_worker
call_item = call_queue.get(block=True, timeout=timeout)
File "/opt/anaconda3/lib/python3.9/multiprocessing/queues.py", line 122, in get
return _ForkingPickler.loads(res)
File "/opt/anaconda3/lib/python3.9/site-packages/qiskit/__init__.py", line 89, in __getattr__
if not self.aer:
File "/opt/anaconda3/lib/python3.9/site-packages/qiskit/__init__.py", line 89, in __getattr__
if not self.aer:
File "/opt/anaconda3/lib/python3.9/site-packages/qiskit/__init__.py", line 89, in __getattr__
if not self.aer:
[Previous line repeated 988 more times]
RecursionError: maximum recursion depth exceeded
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/Users/viroshaanuthayamoorthy/Documents/MastersSimulations/Variational-Circuits-and-Neural-Networks/AdressingSlowTraining.py", line 23, in <module>
results = Parallel(n_jobs=2)(delayed(samplecircuit)(parameters,i) for i in range(2))
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/parallel.py", line 1056, in __call__
self.retrieve()
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/parallel.py", line 935, in retrieve
self._output.extend(job.get(timeout=self.timeout))
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 542, in wrap_future_result
return future.result(timeout=timeout)
File "/opt/anaconda3/lib/python3.9/concurrent/futures/_base.py", line 446, in result
return self.__get_result()
File "/opt/anaconda3/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
joblib.externals.loky.process_executor.BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.

Alternatively, are there noise models that simulate errors on quantum hardware which I could use instead of the Qiskit plugin, with a backend similar to the default simulators? I cannot see any immediate workaround for this issue, as I reckon that there can only be a single instance of the "dev" when using Qiskit as the backend.

Hi @Viro,

You could try using the 'default.mixed' device. In our demo on noisy circuits you can learn more about this.

I also recommend our demo on error mitigation, where you can see how to use a fake IBM backend and the noise model from that backend.
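For example, a minimal sketch along those lines could look like this (the FakeLima backend and the exact import paths are just assumptions that depend on your installed Qiskit version):

import pennylane as qml

# The fake backends ship with Qiskit; in recent versions they live under
# qiskit.providers.fake_provider (older versions use qiskit.test.mock).
from qiskit.providers.fake_provider import FakeLima
from qiskit.providers.aer.noise import NoiseModel

# Build a noise model from the calibration snapshot stored in the fake backend
noise_model = NoiseModel.from_backend(FakeLima())

# The PennyLane-Qiskit Aer device accepts the noise model as a keyword argument
dev = qml.device('qiskit.aer', wires=2, shots=10000, noise_model=noise_model)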

I would recommend first running the demos as they are, to make sure that they run well on your machine. Then you can try modifying your code or the demos.

Please let us know if this helps solve your problem!

Yep, I have tried all the demos and they do work. However, when I try to parallelize them using joblib I get errors.

Alternatively, is it possible to create a noise model in the default.mixed simulator that is similar to the device backends from Qiskit?

Hi @Viro,

I now understand your problem. We have a blog post on how to parallelize qnode execution.

In this example we use Dask and the PennyLane-Qiskit plugin. However this is based on the fact that all QNodes are simultaneously evaluated with the same parameters. So it’s not exactly what you need.

Another option may be to use JAX. The discussion in this forum thread includes some code by mlxd that you may want to try.
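One more thing you could try, as a rough sketch rather than a tested solution, is creating the device and QNode inside the function that each joblib worker runs. That way only plain NumPy parameter arrays get pickled, which may avoid the un-serialization error in your traceback:

import pennylane as qml
from pennylane import numpy as np
from joblib import Parallel, delayed

parameters = np.array([[1.4, 3.6], [5.6, 1.6]])

def run_one(param):
    # Build the device and QNode inside the worker so that no Qiskit
    # objects have to be pickled and shipped between processes
    dev = qml.device('qiskit.aer', wires=2, shots=10000)

    @qml.qnode(dev)
    def circuit(p):
        qml.RX(p[0], wires=1)
        qml.RY(p[1], wires=0)
        return qml.sample(op=None, wires=[0, 1])

    return circuit(param)

# Each task only receives a plain array of parameters
results = Parallel(n_jobs=2)(delayed(run_one)(parameters[i]) for i in range(2))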

Please let me know if this helps!

Hmm, this is unfortunately not exactly what I am looking for. Getting back to the problem of noise: say I wanted to use the default.mixed simulator to simulate realistic noise. Is it possible to implement noise similar to, say, ibmq_melbourne using only the Kraus operators and calibration data found in IBMQ, or is the Qiskit way of implementing noise fundamentally different from how default.mixed would simulate it? If not, how could I go about implementing some of the noise operations that are present in Qiskit devices using default.mixed?

Thanks in advance

Hi @Viro,

You can build your own custom noise channel using the operation QubitChannel by specifying its Kraus operators.

Here’s an example of how you can use this:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device('default.mixed', wires=1)

@qml.qnode(dev)
def bitphaseflip_circuit(p):
    # Kraus operators of a bit-phase flip channel: apply Y with probability p
    K0 = np.sqrt(1 - p) * np.eye(2)
    K1 = np.sqrt(p) * np.array([[0, -1j], [1j, 0]])
    qml.QubitChannel([K0, K1], wires=0)
    return qml.expval(qml.PauliZ(0))

In your case you simply need to replace K0 and K1 with the Kraus operators you get from Qiskit.
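For example, here is a rough sketch of how you might pull such Kraus operators out of Qiskit's noise module and feed them into QubitChannel (the thermal_relaxation_error construction, the calibration numbers, and the import paths are just assumptions for illustration):

import pennylane as qml

# Import paths assume Qiskit Aer is available as qiskit.providers.aer
# (newer versions expose it as qiskit_aer instead)
from qiskit.providers.aer.noise import thermal_relaxation_error
from qiskit.quantum_info import Kraus

# Hypothetical calibration values: T1, T2 and gate time in nanoseconds
t1, t2, gate_time = 50e3, 70e3, 35.0

# Build a single-qubit thermal-relaxation error and extract its Kraus matrices
error = thermal_relaxation_error(t1, t2, gate_time)
kraus_matrices = Kraus(error.to_quantumchannel()).data

dev = qml.device('default.mixed', wires=1)

@qml.qnode(dev)
def noisy_circuit(theta):
    qml.RX(theta, wires=0)
    # Apply the Qiskit-derived channel after the gate it is meant to model
    qml.QubitChannel(kraus_matrices, wires=0)
    return qml.expval(qml.PauliZ(0))

print(noisy_circuit(0.3))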

Please let me know if this helps!