For a larger model, the only simulator that runs properly is default.qubit. Other devices such as lightning.qubit, SV1 (PennyLane-Braket), and QSIM (PennyLane-Cirq) were set up according to the documentation, but training updates are typically delayed; they sometimes reach the correct Epoch 1 training values, but validation does not succeed. There is also a long delay when interrupting a run during training, which does not happen with default.qubit.
Just want to make sure I’m understanding: when you replace "default.qubit" with "lightning.qubit", SV1, or QSIM, the transfer learning demo (1) runs slower and (2) behaves more stochastically (randomly). Could you provide the output of qml.about() from the environment where you’re seeing this behaviour?
Awesome, thanks! I was able to verify that the transfer learning demo is slower when using lightning.qubit. I’ll reach out to our performance team to see what’s going on here and will get back to you!
The reason this demo in particular runs much slower with lightning.qubit is, oddly enough, the smaller qubit count. There is an overhead associated with setting up and running simulations with lightning. For problems with <10 qubits, it’s probably better to just use default.qubit!
It’s a similar story for the other devices. For SV1, I believe the recommendation is to use it only for circuits with >25 qubits.
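To summarize the rule of thumb from this thread, here is a minimal sketch of a helper that picks a simulator by qubit count. The thresholds (10 and 25 qubits) are taken from the comments above, not from any official guideline, and the helper name `pick_device_name` is hypothetical:

```python
def pick_device_name(n_wires: int) -> str:
    """Suggest a PennyLane simulator device name for a given qubit count.

    Thresholds are a rough rule of thumb from this discussion, not an
    official recommendation.
    """
    if n_wires < 10:
        # default.qubit avoids lightning's per-run setup overhead,
        # which dominates on small circuits
        return "default.qubit"
    elif n_wires <= 25:
        # lightning.qubit's compiled backend starts to pay off here
        return "lightning.qubit"
    else:
        # For >25 qubits, SV1 is accessed through the PennyLane-Braket
        # plugin's remote device (selected via its device_arn argument)
        return "braket.aws.qubit"
```

With PennyLane installed, usage would look something like `dev = qml.device(pick_device_name(4), wires=4)` (plus a `device_arn` keyword in the SV1 case).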