Is GeForce RTX 3090 available for running lightning.gpu?

Hi!
In my lab, we are going to buy a new GPU card.
One reason is that the NVIDIA TITAN X we are currently using is no longer supported for lightning.gpu execution. As another questioner has mentioned, we also received the error message “WARNING: INSUFFICIENT SUPPORT DETECTED FOR GPU DEVICE WITH lightning.gpu”.

As far as we know,

  • Compute capability 7.0+
  • Volta or Ampere architecture

are listed as the requirements for the cuQuantum SDK.

We are considering installing a GeForce RTX 3090 as a GPU that meets the above requirements. I believe this is the right choice, but I would like to hear your opinion on whether the GeForce RTX 3090 meets the requirements for running the lightning.gpu simulator.

Thanks for reading this far.
(See CUDA-Enabled GeForce and TITAN Products for more details on GPU performance comparisons)

Hi @TM_MEME

Thanks for the question. You are correct that the Pascal-era cards are not supported with lightning.gpu, so anything with compute capability SM 7.0 or above will work just fine.
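
If it is useful, here is a minimal sketch for checking what a card reports; it assumes a recent NVIDIA driver where `nvidia-smi` exposes the `compute_cap` query field (older drivers do not, in which case NVIDIA's CUDA GPUs table is the reference):

```python
# Minimal check that each visible GPU reports compute capability 7.0 or higher.
# Assumes a recent NVIDIA driver where nvidia-smi supports the "compute_cap" field.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,compute_cap", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    name, cap = (field.strip() for field in line.split(","))
    status = "OK for lightning.gpu" if float(cap) >= 7.0 else "below SM 7.0"
    print(f"{name}: compute capability {cap} ({status})")
```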

It is worth noting, though, that the consumer (GeForce) cards cannot deliver the same 64-bit floating-point performance as the data-center/HPC cards (V100, A100, and the upcoming H100) or the workstation-grade Quadro cards. In this instance the RTX 3090 should work fine, but double-precision performance will not be optimal.

It is possible to opt for 32-bit (single-precision) values through PennyLane, which should give good performance, though we still require some updates at the library level to fully support this.
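
As a rough sketch of what that looks like on the PennyLane side (assuming your installed lightning.gpu plugin version accepts the `c_dtype` keyword when constructing the device), single precision can be requested like this:

```python
# Sketch: requesting single-precision (complex64) state-vector storage on lightning.gpu.
# Assumes the installed lightning.gpu plugin accepts the c_dtype keyword argument.
import numpy as np
import pennylane as qml

dev = qml.device("lightning.gpu", wires=20, c_dtype=np.complex64)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

print(circuit(0.5))
```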

However, if you are looking for a card to develop on, and be able to run on a HPC system, the suggested card should run just fine. Let me know if you have any follow-up questions.

Aside: we also have a device that can target older CUDA-capable hardware, though its features are currently more limited compared to lightning.gpu: https://github.com/PennyLaneAI/pennylane-lightning-kokkos
Feel free to let me know if you wish to know more.
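
For reference, a minimal sketch of switching to that device (assuming pennylane-lightning-kokkos is installed and built with a backend suitable for your hardware) is just a change of device name:

```python
# Sketch: using the Kokkos-based device in place of lightning.gpu.
# Assumes pennylane-lightning-kokkos is installed with a suitable backend.
import pennylane as qml

dev = qml.device("lightning.kokkos", wires=10)

@qml.qnode(dev)
def circuit():
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

print(circuit())
```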

Thank you, mlxd!
I’m relieved to hear that.
We are considering using the H100.

If you have any published information on the requirements to get the best performance out of lightning.gpu, please let us know.

Hi @TM_MEME!

You should see the best performance for circuits with multiple expectation values, more than 20 qubits, and several layers of depth.
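
As an illustration of that regime (not a benchmark, and the exact qubit and layer counts are just placeholder values), a circuit along these lines is the kind of workload where the GPU device tends to pay off:

```python
# Illustrative sketch of the regime where lightning.gpu tends to shine:
# 20+ qubits, several entangling layers, and many expectation values per execution.
import numpy as np
import pennylane as qml

n_wires = 22   # placeholder; anything beyond ~20 qubits
n_layers = 4   # placeholder; a few layers of depth

dev = qml.device("lightning.gpu", wires=n_wires)

@qml.qnode(dev)
def circuit(weights):
    qml.StronglyEntanglingLayers(weights, wires=range(n_wires))
    # Multiple expectation values amortize the cost of one state-vector evolution.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_wires)]

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_wires)
weights = np.random.random(shape)
print(circuit(weights))
```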

In this blog post you can see a comparison between CPU and GPU for different numbers of wires, layers, and threads. The graph itself was produced with an older version of PennyLane, but you can run the code below it to generate your own results for the latest version.

Were you referring to these kinds of suggestions to improve performance or something else?

Hi @CatalinaAlbornoz !
Thank you for replying to me.

Yes, this blog post is very informative. Thank you very much for the useful information!

I’m glad this was helpful @TM_MEME!