Issues running LightningGPU

Hello! I’m trying to use LightningGPU to speed up my code, and I’m running into installation issues. Here is the line I’m using:

dev = qml.device("lightning.gpu", wires=qubits)

And here is the error message I’m receiving:

/home/imrannasrullah/anaconda3/lib/python3.11/site-packages/pennylane_lightning/lightning_gpu/ UserWarning: cannot open shared object file: No such file or directory
  warn(str(e), UserWarning)
/home/imrannasrullah/anaconda3/lib/python3.11/site-packages/pennylane_lightning/lightning_gpu/ UserWarning: 
                "Pre-compiled binaries for lightning.gpu are not available. Falling back to "
                "using the Python-based default.qubit implementation. To manually compile from "
                "source, follow the instructions at "

For context here are some relevant details from: qml.about():

Name: PennyLane
Version: 0.33.1
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
License: Apache License 2.0
Location: /home/imrannasrullah/anaconda3/lib/python3.11/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-Lightning-GPU

Platform info:           Linux-
Python version:          3.11.5
Numpy version:           1.24.3
Scipy version:           1.11.4
Installed devices:
- default.gaussian (PennyLane-0.33.1)
- default.mixed (PennyLane-0.33.1)
- default.qubit (PennyLane-0.33.1)
- default.qubit.autograd (PennyLane-0.33.1)
- default.qubit.jax (PennyLane-0.33.1)
- default.qubit.legacy (PennyLane-0.33.1)
- (PennyLane-0.33.1)
- default.qubit.torch (PennyLane-0.33.1)
- default.qutrit (PennyLane-0.33.1)
- null.qubit (PennyLane-0.33.1)
- lightning.qubit (PennyLane-Lightning-0.33.1)
- lightning.gpu (PennyLane-Lightning-GPU-0.33.1)

And here is the status of my GPU, as displayed by nvidia-smi:

Tue Jan 16 20:52:36 2024
| NVIDIA-SMI 525.60.12    Driver Version: 527.41       CUDA Version: 12.0     |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|   0  NVIDIA GeForce ...  On   | 00000000:F3:00.0 Off |                  N/A |
| N/A   44C    P0    N/A /  N/A |      0MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |

| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|  No running processes found                                                 |

On a side note, I’m not sure why qml.about() shows the Python version as 3.11.5. When I run !python --version in the Jupyter notebook, it returns 3.9.18, which is what I expect, because I launched the notebook from a virtual environment with Python 3.9 (I know that to use LightningGPU, you have to be on a version between 3.7 and 3.10).
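One way to see which interpreter the notebook kernel itself is using (a diagnostic sketch, not something from the thread): sys.executable and sys.version report the kernel’s own interpreter, whereas the ! shell escape spawns a subshell whose python may resolve to a different environment on your PATH.

```python
# Check which interpreter this notebook kernel is actually running on.
# This is more reliable than `!python --version`, because the `!` escape
# runs in a subshell whose `python` may come from a different environment.
import sys

print(sys.executable)  # full path to the kernel's interpreter
print(sys.version)     # version string of that interpreter
```

If sys.version here says 3.11.x, the kernel is not running inside the 3.9 environment, and qml.about() is the one to trust.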

If anybody has run into this issue, please let me know!

I’ve also used these commands to set up the virtual environment and the relevant PennyLane libraries:

conda create --name LGPU python=3.9
source activate LGPU
pip install pennylane
python -m pip install cuquantum-python
pip install pennylane-lightning[gpu]
pip install pennylane custatevec-cu11 pennylane-lightning-gpu
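After running commands like the above, a quick sanity check (my own sketch, not from the install docs) is to ask the notebook’s interpreter whether it can even locate the packages. If find_spec returns None, the pip installs landed in a different environment than the one the kernel is using.

```python
# Post-install check: can *this* interpreter find the installed packages?
# A result of "NOT FOUND" means pip installed into a different environment.
import importlib.util

for name in ["pennylane", "pennylane_lightning"]:
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin if spec else "NOT FOUND in this environment")
```

The printed origin paths should sit under the same environment prefix as the interpreter itself; a mismatch would explain warnings about missing shared-object files.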

Hello @ImranNasrullah

There are a lot of factors that could be involved here, and sadly it is a bit hard to pinpoint. Maybe reading through this thread could be helpful?



Hello! I’ll look through it. Do you know whether it’s an issue that my qml.about() is showing Python 3.11.5? As I said, when I run !python --version it says version 3.9.18. I’m not sure why there’s a discrepancy between the two, or which one to “trust.”



I’m not sure why there’s that mismatch between Python versions. It could be something with your Python installation or your virtual environment, but lightning.gpu should work with either version.
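One way to narrow down the virtual-environment possibility (a sketch on my side; “LGPU” is the env name from the conda create command earlier in the thread) is to check which environment prefix the kernel is running under:

```python
# See which environment root this kernel is running in. If the expected
# env name does not appear in the prefix, the Jupyter kernel was launched
# from a different (e.g. base conda) installation.
import sys

print(sys.prefix)  # root directory of the active environment
print("Running inside the LGPU env:", "LGPU" in sys.prefix)
```

If it prints False, registering the env as its own Jupyter kernel (e.g. via ipykernel) usually resolves this kind of mismatch.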

I’ll try to solve the problem on my side and let you know if I have any success :slight_smile: .