WARNING: INSUFFICIENT SUPPORT DETECTED FOR GPU DEVICE WITH `lightning.gpu`

Hello. I'm trying to use the lightning.gpu device since my task requires 18 wires. The code runs fine on my laptop, but when I deploy the same environment on my desktop, I receive the following warnings:

/home/liukdiihmieu/miniconda3/envs/lab/lib/python3.9/site-packages/pennylane_lightning_gpu/lightning_gpu.py:77: UserWarning: CUDA device is an unsupported version: (5, 2)
warn(str(e), UserWarning)
/home/liukdiihmieu/miniconda3/envs/lab/lib/python3.9/site-packages/pennylane_lightning_gpu/lightning_gpu.py:428: RuntimeWarning:
!!!#####################################################################################
!!!
!!! WARNING: INSUFFICIENT SUPPORT DETECTED FOR GPU DEVICE WITH lightning.gpu
!!! DEFAULTING TO CPU DEVICE lightning.qubit
!!!
!!!#####################################################################################

Is that because my desktop GPU, a GTX 960, is too outdated? (I'm using an RTX 3050 on the laptop.) I already installed the cuQuantum SDK via conda, with CUDA version 11.6. Thanks in advance.
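For reference, here is a minimal sketch of the kind of setup I'm running (the circuit below is illustrative, not my actual code):

import pennylane as qml
import numpy as np

# 18-wire circuit on the GPU-backed state-vector simulator
dev = qml.device("lightning.gpu", wires=18)

@qml.qnode(dev)
def circuit(weights):
    for w in range(18):
        qml.RY(weights[w], wires=w)
    for w in range(17):
        qml.CNOT(wires=[w, w + 1])
    return qml.expval(qml.PauliZ(0))

print(circuit(np.random.random(18)))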

Other information:

python: 3.9.13
pennylane: 0.25.1
pennylane-lightning: 0.25.1
pennylane-lightning-gpu: 0.25.0
pytorch: 1.12.1
cupy: 10.4
cuquantum-python: 22.03
numpy: 1.23.2
OS: ubuntu 22.04 LTS

Hi @lynx, thank you for your question. Could you please post the output of qml.about()?

Sure.

Name: PennyLane
Version: 0.25.1
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/liukdiihmieu/miniconda3/envs/lab/lib/python3.9/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, retworkx, scipy, semantic-version, toml
Required-by: PennyLane-Lightning, PennyLane-Lightning-GPU

Platform info: Linux-5.15.0-47-generic-x86_64-with-glibc2.35
Python version: 3.9.13
Numpy version: 1.23.1
Scipy version: 1.9.1
Installed devices:

  • default.gaussian (PennyLane-0.25.1)
  • default.mixed (PennyLane-0.25.1)
  • default.qubit (PennyLane-0.25.1)
  • default.qubit.autograd (PennyLane-0.25.1)
  • default.qubit.jax (PennyLane-0.25.1)
  • default.qubit.tf (PennyLane-0.25.1)
  • default.qubit.torch (PennyLane-0.25.1)
  • default.qutrit (PennyLane-0.25.1)
  • lightning.qubit (PennyLane-Lightning-0.25.1)
  • lightning.gpu (PennyLane-Lightning-GPU-0.25.0)

Hi @lynx, it seems that the GTX 960 has a Maxwell 2.0 architecture, but only the NVIDIA Volta and Ampere architectures are supported by lightning.gpu.

Your RTX 3050, on the other hand, has an Ampere architecture, which is why it works well.
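If you would like to confirm what a given card reports, here is a quick sketch using PyTorch (which is already in your environment); device index 0 is an assumption, so adjust it if you have multiple GPUs:

import torch

print(torch.cuda.get_device_name(0))        # e.g. "NVIDIA GeForce GTX 960"
print(torch.cuda.get_device_capability(0))  # e.g. (5, 2), matching the warning you saw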

Please let me know if you have any further questions!

Thanks for your reply! I tried once more on a server with an NVIDIA Titan RTX and it ran successfully with lightning.gpu, so it seems the Turing architecture is also compatible? Could you please let me know the minimum hardware requirements?

Hi @lynx, this is a happy surprise for me, since cuStateVec (a component of cuQuantum) is only supposed to support the Volta and Ampere architectures according to its own documentation. I'm not sure why it works with Turing too.


Hi @lynx, cuQuantum itself is built for CUDA compute capability SM 7.0 and above. In this instance, the GTX 960's compute capability is SM 5.2, as reported by the warning message:

UserWarning: CUDA device is an unsupported version: (5, 2)

The RTX 3050 is SM 8.6 and the Titan RTX is SM 7.5, which means these should work fine, but the GTX 960 will not.

The mapping from NVIDIA GPUs to compute capabilities can be found here. As mentioned by @CatalinaAlbornoz, this is a limit imposed by cuStateVec, so we cannot support the older generation of cards.
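If you would rather have your script choose the backend explicitly instead of relying on the automatic fallback, here is a small sketch (it uses PyTorch to query the compute capability; the (7, 0) threshold is the cuStateVec SM 7.0 floor mentioned above):

import pennylane as qml
import torch

# Use lightning.gpu only if a CUDA device with compute capability >= SM 7.0 is present;
# otherwise fall back to the CPU simulator explicitly.
use_gpu = torch.cuda.is_available() and torch.cuda.get_device_capability(0) >= (7, 0)
dev = qml.device("lightning.gpu" if use_gpu else "lightning.qubit", wires=18)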

If you need to use an older card, we have a new device under development that can be compiled by the user for older GPU architectures here. Since it is under active development, we do not guarantee it will work in all of the same cases as lightning.gpu. Assuming you are running on Linux and have an active CUDA installation, you can create a working install from within the repository with:

BACKEND="CUDA" python -m pip install -e .
python -m pip install git+https://github.com/PennyLaneAI/pennylane.git@master
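Once installed, you can sanity-check it with a small circuit. Note that the device name below is my assumption of how the development package registers itself; please check the repository's README for the exact name:

import pennylane as qml

# Assumed device name for the development package; verify against its README.
dev = qml.device("lightning.kokkos", wires=2)

@qml.qnode(dev)
def circuit():
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

print(circuit())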

That said, I suggest using lightning.gpu where possible.
