Can someone please help me understand the difference between the ‘lightning.gpu’ and ‘default.qubit.torch’ devices?
I understand that ‘lightning.gpu’ performs quantum simulations on the GPU. But can one execute a hybrid quantum NN on the GPU using this device? I know that this is possible with the ‘default.qubit.torch’ device.
PyTorch is an interface that we support in PennyLane. Interfaces are the classical libraries that you can easily use together with PennyLane. PennyLane makes each of these libraries quantum-aware, allowing quantum circuits to be treated just like any other operation. You can learn more about interfaces in the PennyLane documentation!
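For example, here's a minimal sketch of a hybrid model using the PyTorch interface. The circuit structure, templates, and layer sizes below are just illustrative placeholders, not a prescribed architecture:

```python
import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=2)

# A QNode with the PyTorch interface: its output behaves like a torch
# tensor, so it plugs into autograd like any other torch operation.
@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(2))
    qml.BasicEntanglerLayers(weights, wires=range(2))
    return [qml.expval(qml.PauliZ(i)) for i in range(2)]

# Wrap the QNode as a torch layer and drop it into a hybrid model.
weight_shapes = {"weights": (3, 2)}  # 3 entangler layers on 2 wires
model = torch.nn.Sequential(
    torch.nn.Linear(4, 2),
    qml.qnn.TorchLayer(circuit, weight_shapes),
    torch.nn.Linear(2, 2),
)

out = model(torch.rand(5, 4))  # batch of 5 random inputs, purely illustrative
out.sum().backward()           # gradients flow through the quantum layer
```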
So as a summary: if you want to run on a Linux machine with a GPU that supports CUDA >= 11.5 and is of generation SM 7.0 (Volta) or greater, and you plan to use over 20 qubits, then lightning.gpu is the way to go. Otherwise it's better to stick to CPUs, using devices such as lightning.qubit, which you can also interface with classical machine learning libraries such as PyTorch; see the sketch below.
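Here's a hedged sketch of that device choice. It assumes the Lightning-GPU plugin is installed, and it uses `torch.cuda.is_available()` only as a convenient stand-in for checking that a supported NVIDIA GPU is present (it does not verify the CUDA version or compute capability requirements above):

```python
import pennylane as qml
import torch

n_wires = 22  # lightning.gpu typically only pays off above ~20 qubits

# Pick the GPU state-vector simulator when a CUDA GPU is present,
# and fall back to the fast CPU simulator otherwise.
if torch.cuda.is_available():
    dev = qml.device("lightning.gpu", wires=n_wires)
else:
    dev = qml.device("lightning.qubit", wires=n_wires)

# The same QNode code works with either device; just pass `dev` to
# qml.qnode together with interface="torch" as in the earlier example.
```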
Please let me know if this helps answer your question!