I’ve been using PennyLane for QNN simulations. The library itself is brilliant and very helpful.
However, it takes too long to simulate on my hardware, so I am looking for a hardware upgrade.
I am simulating about 8 (sometimes 16) qubits with multiple layers and several hundred
parameters, which takes me about a couple of days per desired QNN result. I’ve tried lightning.qubit, but somehow it took a couple of times longer than default.qubit (maybe because of overheads).
I checked a few answers on this forum regarding lightning.qubit, and the Amazon AWS page, but it only lists qubit number as a complexity factor, with no results from consumer GPUs (like the RTX series), and it also says GPUs only help for circuits over approximately 20 qubits.
So if I am looking for faster simulation, should I upgrade my CPU and try multithreading instead of adding a graphics card?
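(For reference, here is a quick back-of-envelope on statevector size, assuming double-precision complex amplitudes, which seems consistent with the ~20-qubit figure:)

```python
# Back-of-envelope: a statevector over n qubits holds 2**n complex amplitudes,
# each 16 bytes in double precision (complex128).
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (8, 16, 20, 28):
    print(f"{n} qubits: {statevector_bytes(n) / 2**20:.4f} MiB")
```

At 8-16 qubits the state fits in kilobytes to a megabyte, so per-circuit cost is presumably dominated by overheads rather than raw linear algebra, which would explain why GPUs only pay off past roughly 20 qubits.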

@mlxd would know best here, but could you share more details on the code you’re trying to run? Attaching it to this thread would help, but if you can’t do so due to privacy concerns, what is the system you’re trying to simulate? Are you using adjoint differentiation?

First, regarding adjoint differentiation:
I’ve never used the method mentioned here - Page not found — PennyLane. Although the method seems useful, my circuit contains measurement-and-feed-forward schemes, as in a QCNN, which cannot be expressed as a unitary operation.

And to provide more details: my current circuits are slightly modified versions of the QCNN from Quantum Convolutional Neural Networks — PennyLane documentation. My code performs training while computing accuracy and loss each iteration, and computes the Fisher information of the circuits. Currently they are 8-qubit circuits, but they could potentially be expanded to 16 qubits. The circuits have 50-200 parameters depending on circuit depth (number of layers).

Since I am trying to do a case study of each variation of the circuit, getting statistically meaningful data sometimes takes a couple of weeks of compute. Although I have to admit my inexperience with PennyLane is part of the reason, I want a quick speedup via a hardware upgrade. Any recommendation would be appreciated.
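In the meantime, one software-side speedup I am considering: since each circuit variation is an independent run, the case studies could be distributed across CPU cores with the standard library. A rough sketch, where the toy `run_variation` is purely a stand-in for one full training run:

```python
from concurrent.futures import ProcessPoolExecutor

def run_variation(seed):
    # Stand-in for one full QNN training run; in practice this would build
    # the QNode for this circuit variation, train it, and return the final
    # loss/accuracy. Here it just does deterministic busywork.
    total = 0.0
    for i in range(1000):
        total += ((seed * 2654435761 + i) % 97) / 97.0
    return seed, total

if __name__ == "__main__":
    seeds = range(8)  # eight independent circuit variations
    # Processes (not threads) sidestep the GIL for CPU-bound simulation.
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(run_variation, seeds))
    print(sorted(results))
```

This would not make any single circuit faster, but for many independent variations the wall-clock time scales down with the number of cores.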

It would be useful for us if you could share a minimal version of your code. It looks to me like you have a huge bottleneck somewhere.

In the following code, for instance, you can see how to set the differentiation method to “adjoint”, and also that lightning.qubit is faster than default.qubit.

import pennylane as qml
from timeit import default_timer as timer

# Choose number of qubits (wires) and circuit layers
wires = 16
layers = 3

# Set number of runs for timing averaging
num_runs = 5

# Instantiate the default.qubit and lightning.qubit devices
dev_default = qml.device('default.qubit', wires=wires)
dev_lightning = qml.device('lightning.qubit', wires=wires)

# Create QNode of default.qubit and circuit
@qml.qnode(dev_default, diff_method="adjoint")
def circuit_default(parameters):
    qml.StronglyEntanglingLayers(weights=parameters, wires=range(wires))
    return [qml.expval(qml.PauliZ(i)) for i in range(wires)]

# Create QNode of lightning.qubit and circuit
@qml.qnode(dev_lightning, diff_method="adjoint")
def circuit_lightning(parameters):
    qml.StronglyEntanglingLayers(weights=parameters, wires=range(wires))
    return [qml.expval(qml.PauliZ(i)) for i in range(wires)]

# Set trainable parameters for calculating the circuit Jacobian
shape = qml.StronglyEntanglingLayers.shape(n_layers=layers, n_wires=wires)
weights = qml.numpy.random.random(size=shape)

# Run, calculate the quantum circuit Jacobian and average the timing results
timing = []
for t in range(num_runs):
    start = timer()
    jac = qml.jacobian(circuit_default)(weights)
    end = timer()
    timing.append(end - start)
print('default: ', qml.numpy.mean(timing))

# Same timing loop, now for lightning.qubit
timing = []
for t in range(num_runs):
    start = timer()
    jac = qml.jacobian(circuit_lightning)(weights)
    end = timer()
    timing.append(end - start)
print('lightning: ', qml.numpy.mean(timing))

The original code is taken from this blog post and modified to compare lightning.qubit and default.qubit.