Difference between the 'lightning.gpu' and 'default.qubit.torch' devices

Hello,

Can someone please help me understand the difference between the ‘lightning.gpu’ and ‘default.qubit.torch’ devices?

I understand that ‘lightning.gpu’ performs quantum simulations on a GPU. But can one execute a hybrid quantum neural network on a GPU using it? I know that one can execute a hybrid model on a GPU using the ‘default.qubit.torch’ device.

Thanks

Hi @imakash!

PyTorch is an interface that we support in PennyLane. Interfaces are the classical machine learning libraries that you can use together with PennyLane: PennyLane makes each of these libraries quantum-aware, allowing quantum circuits to be treated just like any other operation. You can learn more about interfaces in the PennyLane documentation!

However, note that interfaces are not the same as devices. Devices are the hardware backends and simulators on which your circuits run. You can learn more about devices in the PennyLane documentation, or in this YouTube video.
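To make the distinction concrete, here is a minimal sketch (the circuit and parameter names are just illustrative): the device argument picks *where* the circuit runs, while the interface argument picks *which* classical library handles the parameters and gradients.

```python
import pennylane as qml
import torch

# Device: where the circuit runs (here, a CPU state-vector simulator).
dev = qml.device("lightning.qubit", wires=2)

# Interface: which classical library drives the parameters and gradients.
@qml.qnode(dev, interface="torch")
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

weights = torch.tensor([0.1, 0.2], requires_grad=True)
circuit(weights).backward()  # gradients flow back into the torch tensor
print(weights.grad)
```

Swapping `"lightning.qubit"` for `"default.qubit"` or `"lightning.gpu"` changes where the simulation runs; the interface keyword stays the same.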

If you want to learn more about lightning.gpu specifically, you can go to this PennyLane blog post or the Lightning-GPU documentation. Remember that this device works together with the NVIDIA cuQuantum SDK, and it will only run on a CUDA-capable GPU of generation SM 7.0 (Volta) or greater.
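If those requirements are met, switching to the GPU simulator is just a one-line change when creating the device. A rough sketch, assuming the Lightning-GPU plugin and cuQuantum are installed (the circuit itself is only an example):

```python
import pennylane as qml

# Assumes the Lightning-GPU plugin is installed and a CUDA-capable GPU
# (SM 7.0 / Volta or newer) is available.
dev = qml.device("lightning.gpu", wires=25)

@qml.qnode(dev)
def ghz_state():
    # A 25-qubit GHZ state, large enough that GPU simulation can pay off.
    qml.Hadamard(wires=0)
    for i in range(24):
        qml.CNOT(wires=[i, i + 1])
    return qml.expval(qml.PauliZ(24))

print(ghz_state())
```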

In summary: if you have a Linux machine with CUDA >= 11.5 and a GPU of generation SM 7.0 (Volta) or greater, and you plan to use more than 20 qubits, then lightning.gpu is the way to go. Otherwise it's better to stick to CPU simulators such as lightning.qubit, which you can likewise combine with classical machine learning libraries such as PyTorch.
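To your original question about hybrid models: yes, a QNode on any of these devices can be dropped into a PyTorch model. A minimal sketch using `qml.qnn.TorchLayer` (the layer sizes and templates here are placeholders, not a recommended architecture):

```python
import pennylane as qml
import torch

n_qubits = 2
dev = qml.device("lightning.qubit", wires=n_qubits)  # or "lightning.gpu"

@qml.qnode(dev, interface="torch")
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

# Shape of the trainable weights expected by BasicEntanglerLayers.
weight_shapes = {"weights": (3, n_qubits)}
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

# Hybrid model: classical layer -> quantum layer -> classical layer.
model = torch.nn.Sequential(
    torch.nn.Linear(4, n_qubits),
    qlayer,
    torch.nn.Linear(n_qubits, 2),
)

x = torch.rand(8, 4)   # a batch of 8 classical inputs
print(model(x).shape)  # torch.Size([8, 2])
```

With a CPU device like lightning.qubit, the quantum part of the forward pass runs on the CPU even if the classical layers run on a GPU; with lightning.gpu, the state-vector simulation itself runs on the GPU.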

Please let me know if this helps answer your question!
