I'm currently evaluating the performance of quantum neural networks in simulation.
However, the computational time easily reaches a few hours, which makes it difficult to explore the hyperparameters (e.g. number of layers, number of qubits, gate type).
The conditions are:
- Laptop PC
- 2 qubits
- 12 variational parameters (two StronglyEntanglingLayers)
- 2-dimensional input features x 16384 data points (corresponds to 1 epoch)
- loss function: MSE
- diff_method = 'adjoint'
- optimizer: Adam (qml.AdamOptimizer)
The computational time is approximately 100 sec/epoch, so for ~100 epochs the total time becomes 2-3 hours.
I have tried several approaches.
qulacs.simulator is fast, but its diff_method has to be parameter-shift, so it is not fast for gradient-descent-based learning.
So far, lightning.qubit seems to be the fastest device.
The fastest diff_method is adjoint; backprop is not fast, at least when the number of parameters is small.
What can I do to speed this up further?
Is "use a better machine, such as a GPU cluster" the only solution?