Issues Installing and Running PennyLane Lightning GPU

Hello!

I’ve recently been using PennyLane’s default.qubit device to simulate variational quantum circuits with fewer than 20 qubits. To speed up my simulations, I’m considering using a GPU, and the PennyLane Lightning-GPU plugin has piqued my interest. However, I ran into some issues after installing it and during my initial test run.

Here’s the test snippet I used:

import pennylane as qml

dev = qml.device("lightning.gpu", wires=1)

The following warnings appeared:

UserWarning: libcustatevec.so.1: cannot open shared object file: No such file or directory
  warn(str(e), UserWarning)
UserWarning: 
                "Pre-compiled binaries for lightning.gpu are not available. Falling back to "
                "using the Python-based default.qubit implementation. To manually compile from "
                "source, follow the instructions at "
                "https://pennylane-lightning.readthedocs.io/en/latest/installation.html.",
            
  warn(

Based on these warnings, it appears that the GPU version isn’t operating, and the system has fallen back to the default.qubit implementation. I’ve verified with nvidia-smi that there are no active processes running on my GPUs.
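In case it is useful, here is a small check I can run (just a sketch) to see whether the custatevec library is visible to the dynamic loader at all; I would expect it to fail with the same "cannot open shared object file" message:

import ctypes

# Try to load the cuQuantum custatevec library directly; an OSError means
# the dynamic loader cannot find libcustatevec.so.1 on the library path.
try:
    ctypes.CDLL("libcustatevec.so.1")
    print("libcustatevec.so.1 loaded successfully")
except OSError as err:
    print("Could not load libcustatevec.so.1:", err)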

For clarity, these are the installation steps I undertook:

conda create --name python39pennylane-gpu python=3.9
conda activate python39pennylane-gpu
python -m pip install pennylane
python -m pip install cuquantum-python
python -m pip install pennylane-lightning[gpu]

Further, here are some relevant details from qml.about():

Name: PennyLane
Version: 0.33.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: ../anaconda2/envs/python39pennylane-gpu/lib/python3.9/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-Lightning-GPU

Platform info:           Linux-4.15.0-200-generic-x86_64-with-glibc2.27
Python version:          3.9.18
Numpy version:           1.26.1
Scipy version:           1.11.3
Installed devices:
- default.gaussian (PennyLane-0.33.0)
- default.mixed (PennyLane-0.33.0)
- default.qubit (PennyLane-0.33.0)
- default.qubit.autograd (PennyLane-0.33.0)
- default.qubit.jax (PennyLane-0.33.0)
- default.qubit.legacy (PennyLane-0.33.0)
- default.qubit.tf (PennyLane-0.33.0)
- default.qubit.torch (PennyLane-0.33.0)
- default.qutrit (PennyLane-0.33.0)
- null.qubit (PennyLane-0.33.0)
- lightning.qubit (PennyLane-Lightning-0.33.0)
- lightning.gpu (PennyLane-Lightning-GPU-0.33.0)

The status of my GPUs, as displayed by nvidia-smi, is:

Tue Oct 31 20:52:55 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.86       Driver Version: 470.86       CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:08:00.0 Off |                  N/A |
|  0%   32C    P8    10W / 260W |      3MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  Off  | 00000000:86:00.0 Off |                  N/A |
|  0%   35C    P8    21W / 260W |      3MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...  Off  | 00000000:8A:00.0 Off |                  N/A |
|  0%   35C    P8     1W / 260W |      3MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

I would greatly appreciate any guidance or insights to help address this issue. Thank you!


Hi @LittleFive

Thanks for your interest in using lightning.gpu. There are a few things worth noting for use here:

  • Firstly, you can try upgrading the installed lightning.gpu package with python -m pip install pennylane pennylane-lightning pennylane-lightning-gpu --upgrade to ensure the version is the most recent (v0.33.1 as of writing). Once this is installed and up to date, you can try pip install custatevec-cu11 to bring in the CUDA 11 variant of the NVIDIA cuQuantum custatevec library.
  • lightning.gpu requires a CUDA compute capability of SM7.0 or newer (Volta generation or newer) to work. This is a hard requirement from the NVIDIA cuQuantum SDK (specifically the custatevec library), as the library expects data-center/HPC-grade hardware. I cannot tell which GPUs you have installed, but it is possible they are not supported by the library. You can inspect the compute capabilities for a variety of NVIDIA devices here
  • Assuming a supported GPU, the lightning.gpu simulator requires CUDA 11.5 or newer runtime libraries to operate. I see your platform’s version is 11.4. If you can upgrade the installed drivers and SDK to at least 11.5 (11.8 preferred), the libraries will work as expected. If you cannot upgrade the SDK, you can attempt to install nvidia-cusparse-cu11, nvidia-cublas-cu11, and nvidia-cuda-runtime-cu11 into your Python virtualenv, which may allow the device to run, assuming the installed hardware driver is of a supported version.
  • If your hardware isn’t supported by NVIDIA cuQuantum, but you can install a CUDA 12 toolchain, you can try to build our other HPC device, lightning.kokkos, with instructions here. This requires some manual steps to build from source, but may get you GPU support, depending on your hardware type. We require CUDA 12 here because the compiler toolchain for CUDA 11 does not support some of the newer features we added.
  • If none of the above are valid options, you can swap default.qubit for our C++-backed CPU simulators: lightning.qubit and the CPU variant of lightning.kokkos (a minimal sketch of the swap follows this list). When you install PennyLane as pip install pennylane you get lightning.qubit too, so this should work directly. For the OpenMP variant of lightning.kokkos you can run pip install pennylane-lightning[kokkos] and the package will be installed for you.
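To make that last point concrete, here is a minimal sketch of the swap (the circuit itself is only a placeholder; lightning.qubit ships with PennyLane itself):

import pennylane as qml
from pennylane import numpy as np

# lightning.qubit is the C++-backed CPU simulator installed alongside PennyLane.
# If you build the OpenMP variant of lightning.kokkos, only the device name changes.
dev = qml.device("lightning.qubit", wires=4)

@qml.qnode(dev)
def circuit(params):
    # Placeholder variational layer
    for i in range(4):
        qml.RY(params[i], wires=i)
    for i in range(3):
        qml.CNOT(wires=[i, i + 1])
    return qml.expval(qml.PauliZ(0))

params = np.array([0.1, 0.2, 0.3, 0.4], requires_grad=True)
print(circuit(params))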

Hopefully these steps help you get running with one of the devices above. If not, feel free to let us know and we can try to help out.

Thank you for your reply! I followed your suggestions step by step, and here are the results:

  • I successfully updated lightning.gpu. However, when I tried dev = qml.device("lightning.gpu", wires=1) again, the same warning as before appeared. After installing custatevec-cu11, the warning changed to the following error:
RuntimeError: [/project/pennylane_lightning/core/src/simulators/lightning_gpu/utils/DataBuffer.hpp][Line:55][Method:DataBuffer]: Error in PennyLane Lightning: no error
  • My GPUs are NVIDIA GeForce RTX 2080 Ti, which meet the CUDA compute capability of SM7.0 or newer.
  • I updated my CUDA Toolkit to 11.8, but I did not reinstall the driver. nvcc -V now shows:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

However, nvidia-smi still displays the CUDA Version as 11.4, the same as when I first asked the question. I’m unsure whether this discrepancy is significant or whether it requires a driver reinstallation. Additionally, I installed nvidia-cusparse-cu11, nvidia-cublas-cu11, and nvidia-cuda-runtime-cu11 in my Python virtual environment, but the RuntimeError persists.

Any further advice or insights would be greatly appreciated!

Thank you!

Hi @LittleFive

Unfortunately it will be difficult to reason about the issue on your system, though it is likely the problem is due to a too-old driver version. You can examine the CUDA compatibility documentation here and see whether your installed versions are compatible; if not, you can try to follow their guidelines on handling forward compatibility on your system.

Though, if the option is available, I recommend upgrading the CUDA drivers to the latest available version for the 11.x series (in this case the 495 or 520 driver major versions). Let us know if this works for you.

This is not necessarily a fix, but I noticed the following behavior on my system where I am working with Jupyter Notebooks.

I noticed that the cell containing the device and QNode definition always raised the warning mentioned earlier in the thread. Interestingly, running the same cell a second time seems to make the warning go away and even allows usage of the GPU as expected. I have no idea how this happens, and if anyone has any kind of explanation, I’d love to learn about it too.

Some details about my system:

I installed the libraries by:

pip install pennylane pennylane-lightning pennylane-lightning-gpu custatevec-cu11 --upgrade
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

nvidia-smi:

| NVIDIA-SMI 535.112                Driver Version: 537.42       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3060 ...    On  | 00000000:01:00.0  On |                  N/A |
| N/A   56C    P8               9W /  95W |    130MiB /  6144MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

nvcc -V

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:02:13_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0

Hi @vishwa,

Thank you very much for reporting this behaviour!

It’s normal for warnings to disappear after they’ve already been shown to you: by default, Python emits a given warning only once per code location. What’s interesting is that you can indeed use the GPU despite the warning.
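If you want to double-check on your side, here is a small diagnostic sketch that forces every warning to be shown and records anything emitted while the device is built, so a repeated fallback is not silently hidden:

import warnings
import pennylane as qml

# Record every warning raised while constructing the device, even ones
# that were already shown earlier in the session.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    dev = qml.device("lightning.gpu", wires=2)

if caught:
    for w in caught:
        print(f"{w.category.__name__}: {w.message}")
else:
    print("No warnings: lightning.gpu appears to have loaded its binaries.")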

Please let us know if you see any other strange behaviour.

Thanks again!

Hey @CatalinaAlbornoz .

It seems that I was mistaken earlier when I said ...running the same cell a second time seems to make the warning go away and even allows usage of the GPU as expected. You were right: it was just the warning that disappeared, and I recently realized that the GPU utilization I was seeing on my system was not related to the PennyLane script.

Since then, I have been trying again to install the GPU plugin. I have ensured that custatevec-cu11 is installed and that my GPU is compatible. But still, the installation using pip has been unsuccessful for me.

I am now trying to install it from source, but the instructions given on the documentation page are not fully clear. I cloned the GitHub repo and was trying to run the commands mentioned in the documentation, which include specifying a path to the cuQuantum SDK. I am unsure of how to find this path. To install cuQuantum, I used the commands specified here.

Can you please guide me through the next installation steps?
Thank you.

Hi @vishwa ,

I’m sorry to hear that you’re having trouble here. I think the issue might be your CUDA version. Can you please follow these steps and let me know the output of qml.about()?

  1. Create a new virtual environment with Python=3.10
  2. Run python -m pip install pennylane custatevec-cu11 pennylane-lightning-gpu
  3. Run a simple code (such as the code below) to test that everything works
import pennylane as qml

dev = qml.device("lightning.gpu", wires=2)

@qml.qnode(dev)
def circuit():
  qml.Hadamard(wires=0)
  qml.CNOT(wires=[0,1])
  return qml.expval(qml.PauliZ(0))

circuit()
  4. If the code above returns an error, please post the full error traceback and the output of qml.about()

Note: the code above will probably run faster using lightning.qubit on a CPU than using lightning.gpu on a GPU. For computations under 20 qubits I recommend using lightning.qubit. However, for the purposes of this test we use lightning.gpu.

If you can’t downgrade your CUDA version to 11.5-11.8 you might need to use lightning.kokkos instead (which is also a GPU-capable simulator, built on a different backend).
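If it comes to that, the swap itself is a one-line change (this sketch assumes a lightning.kokkos build is installed on your system):

import pennylane as qml

# Same Bell-state test as above, only the device name changes.
dev = qml.device("lightning.kokkos", wires=2)

@qml.qnode(dev)
def circuit():
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

print(circuit())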

Please let me know how it goes with the above steps.

Hello @CatalinaAlbornoz,

Sorry for the late response. I followed your instructions and it resulted in an error. This is the output that I get:

/home/vishwa/QC/604/quantum-image-classifier/.venv3-10/lib/python3.10/site-packages/pennylane_lightning/lightning_gpu/lightning_gpu.py:74: UserWarning: libcusparse.so.11: cannot open shared object file: No such file or directory
  warn(str(e), UserWarning)
/home/vishwa/QC/604/quantum-image-classifier/.venv3-10/lib/python3.10/site-packages/pennylane_lightning/lightning_gpu/lightning_gpu.py:995: UserWarning: 
                "Pre-compiled binaries for lightning.gpu are not available. Falling back to "
                "using the Python-based default.qubit implementation. To manually compile from "
                "source, follow the instructions at "
                "https://pennylane-lightning.readthedocs.io/en/latest/installation.html.",
            
  warn(

qml.about():

Name: PennyLane
Version: 0.34.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /home/vishwa/QC/604/quantum-image-classifier/.venv3-10/lib/python3.10/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-Lightning-GPU

Platform info:           Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python version:          3.10.12
Numpy version:           1.26.3
Scipy version:           1.11.4
Installed devices:
- lightning.qubit (PennyLane-Lightning-0.34.0)
- default.gaussian (PennyLane-0.34.0)
- default.mixed (PennyLane-0.34.0)
- default.qubit (PennyLane-0.34.0)
- default.qubit.autograd (PennyLane-0.34.0)
- default.qubit.jax (PennyLane-0.34.0)
- default.qubit.legacy (PennyLane-0.34.0)
- default.qubit.tf (PennyLane-0.34.0)
- default.qubit.torch (PennyLane-0.34.0)
- default.qutrit (PennyLane-0.34.0)
- null.qubit (PennyLane-0.34.0)
- lightning.gpu (PennyLane-Lightning-GPU-0.34.0)

How shall I proceed?

Hi @vishwa ,

I’m sorry to hear that you’re still facing issues.
What is the output of nvidia-smi? Is your CUDA version between 11.5-11.8?

If your CUDA version is not within this range (e.g. if it’s still version 12) then the CUDA version is what’s causing issues.

If it is indeed between 11.5 and 11.8, then maybe the best option is installing it through Docker as explained here. Let me know if this works for you.

Yeah, I can see that the CUDA version is still v12.
Here is the full output:

Thu Jan 18 16:25:42 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.112                Driver Version: 537.42       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3060 ...    On  | 00000000:01:00.0 Off |                  N/A |
| N/A   62C    P8              12W /  91W |      0MiB /  6144MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

I would like to avoid downgrading the CUDA version globally to 11.8. Is there a way I can install it in the same virtual environment?

Thanks,

Hi @vishwa

Recently the NVIDIA CUDA libraries and runtimes were added as PyPI packages, so you may be able to run the following even with the CUDA 12 SDK and recent drivers:

pip install nvidia-cusparse-cu11 nvidia-cublas-cu11 nvidia-cuda-runtime-cu11 custatevec_cu11

Can you try this and let us know if it works?
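If the wheels are picked up correctly, a quick sanity check such as the sketch below (just the earlier Bell-state test again) should run without the fallback warning:

import pennylane as qml

# With the CUDA 11 runtime wheels installed in the virtualenv, constructing
# lightning.gpu should no longer fall back to default.qubit.
dev = qml.device("lightning.gpu", wires=2)

@qml.qnode(dev)
def bell():
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

print(bell())  # a Bell state gives <Z_0> = 0.0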

That worked!
Thank you @mlxd and @CatalinaAlbornoz for the amazing support!!!
