Hi, I am working on a hybrid QNN model (using qml.qnn.TorchLayer). Everything works properly with the default devices:
- default.gaussian (PennyLane-0.28.0)
- default.mixed (PennyLane-0.28.0)
- default.qubit (PennyLane-0.28.0)
- default.qubit.autograd (PennyLane-0.28.0)
- default.qubit.jax (PennyLane-0.28.0)
- default.qubit.tf (PennyLane-0.28.0)
- default.qubit.torch (PennyLane-0.28.0)
- default.qutrit (PennyLane-0.28.0)
However, it runs into an error when I change the device to any of:
- qiskit.aer (PennyLane-qiskit-0.28.0)
- qiskit.basicaer (PennyLane-qiskit-0.28.0)
- qiskit.ibmq (PennyLane-qiskit-0.28.0)
- qiskit.ibmq.circuit_runner (PennyLane-qiskit-0.28.0)
- qiskit.ibmq.sampler (PennyLane-qiskit-0.28.0)
- qulacs.simulator (PennyLane-Qulacs)
- Xanadu cloud
I don't use multiprocessing or TensorFlow explicitly, but the errors keep pointing at them. I would appreciate any suggestions. Thanks!
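(For context on the multiprocessing frames below: the traceback points at hsa.py line 340, where worker processes are started. On Windows, multiprocessing only supports the "spawn" start method, which pickles every object handed to a child process, and devices/QNodes from hardware plugins are often not picklable. A minimal stdlib-only sketch of the pattern that usually sidesteps this, building the hard-to-pickle objects inside the worker instead of passing them in; all names here are placeholders, not PennyLane API:)

```python
import multiprocessing as mp

def worker(n_wires):
    # Build the hard-to-pickle objects *inside* the child process instead of
    # passing them from the parent. In the real script this would be
    # qml.device("qiskit.aer", wires=n_wires) and the QNode built on top of
    # it; a plain dict stands in for the device here.
    dev = {"name": "qiskit.aer", "wires": n_wires}  # stand-in for a device
    return dev["wires"]

if __name__ == "__main__":
    # "spawn" is the only start method on Windows: each child re-imports the
    # main module and receives its arguments via pickle, so only the plain
    # integer n_wires crosses the process boundary.
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        results = pool.map(worker, [7, 7])
    print(results)  # [7, 7]
```

The key point is that only simple, picklable arguments (here an `int`) cross the process boundary; everything plugin-specific is constructed after the child has started.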
Snapshot of my QNN:
import pennylane as qml

qubit1 = 7
qubit2 = 7

# dev1 = qml.device("default.qubit.torch", wires=qubit1)
# dev1 = qml.device("qiskit.ibmq", wires=qubit1, backend="ibmq_qasm_simulator", provider=provider)
dev1 = qml.device("qiskit.aer", wires=qubit1)

def circuit1(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(qubit1))
    qml.BasicEntanglerLayers(weights, wires=range(qubit1))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(qubit1)]

qnode1 = qml.QNode(circuit1, dev1, interface="torch")

# dev2 = qml.device("default.qubit.torch", wires=qubit2)
# dev2 = qml.device("qiskit.ibmq", wires=qubit2, backend="ibmq_qasm_simulator", provider=provider)
dev2 = qml.device("qiskit.aer", wires=qubit2)

def circuit2(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(qubit2))
    qml.BasicEntanglerLayers(weights, wires=range(qubit2))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(qubit2)]

qnode2 = qml.QNode(circuit2, dev2, interface="torch")
Errors:
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\py3compat.py", line 356, in compat_exec
    exec(code, globals, locals)
File "c:\users\ango1\desktop\quantum reinforcement learning\rl_ev_experiment04_pennylane\code\hsa.py", line 340, in <module>
    [w.start() for w in workers]
File "c:\users\ango1\desktop\quantum reinforcement learning\rl_ev_experiment04_pennylane\code\hsa.py", line 340, in <listcomp>
    [w.start() for w in workers]
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
PicklingError: Can't pickle <function param_shift at 0x000001FFE4130550>: it's not the same object as pennylane.gradients.parameter_shift.param_shift

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
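(In case it helps diagnose: the "it's not the same object as ..." message is what pickle raises when a module-level function has been rebound, e.g. wrapped or monkey-patched, after a reference to the original was taken. Pickle serializes functions by qualified name and checks that the name still resolves to the same object. A minimal stdlib-only repro; `param_shift` here is just a stand-in name, not the actual PennyLane function:)

```python
import pickle

def param_shift():
    """Stand-in for a module-level function such as pennylane's param_shift."""
    return "original"

original_ref = param_shift  # keep a reference to the original object

def _wrapped():
    return "wrapped"

# Simulate what a decorator or plugin patch does: rebind the module-level name
# so it no longer points at the object we kept a reference to.
param_shift = _wrapped

try:
    pickle.dumps(original_ref)  # pickle looks up "param_shift" by name...
except pickle.PicklingError as exc:
    # ...the lookup no longer resolves to original_ref, so pickling fails,
    # mirroring the "it's not the same object as ..." error above.
    print(type(exc).__name__)
```

The second traceback (`EOFError: Ran out of input`) is then just the child process failing because the parent never finished sending it anything after the pickling step blew up.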