Hello!
I’ve recently been using PennyLane’s default.qubit device to simulate variational quantum circuits with fewer than 20 qubits. To speed up my simulations, I’m considering using a GPU, and PennyLane Lightning GPU caught my interest. However, I ran into issues after installing it, during my first test run.
Here’s the test command I used:
dev = qml.device("lightning.gpu", wires=1)
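To double-check which backend was actually instantiated, I used a small heuristic sketch (the module-layout assumption is my own guess, not an official PennyLane API):

```python
# Heuristic sketch (the module layout is an assumption, not an official API):
# if the device class returned by qml.device() lives under pennylane.devices,
# then the compiled lightning.gpu backend was not used and we fell back.
def is_fallback(device_module: str) -> bool:
    return device_module.startswith("pennylane.devices")

# On my machine I run:
#   import pennylane as qml
#   dev = qml.device("lightning.gpu", wires=1)
#   print(is_fallback(type(dev).__module__))
# and I expect True there, matching the warnings below.
print(is_fallback("pennylane.devices.default_qubit"))        # fallback case
print(is_fallback("pennylane_lightning_gpu.lightning_gpu"))  # GPU case (module name assumed)
```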
The following warnings appeared:
UserWarning: libcustatevec.so.1: cannot open shared object file: No such file or directory
warn(str(e), UserWarning)
UserWarning:
"Pre-compiled binaries for lightning.gpu are not available. Falling back to "
"using the Python-based default.qubit implementation. To manually compile from "
"source, follow the instructions at "
"https://pennylane-lightning.readthedocs.io/en/latest/installation.html.",
warn(
Based on these warnings, it appears the GPU backend isn’t being used and the simulation has fallen back to the default.qubit implementation. I’ve verified with nvidia-smi that there are no active processes running on my GPU.
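Since the first warning complains specifically about libcustatevec.so.1, I also checked whether the dynamic loader can resolve that library at all. A minimal sketch (the library name comes from the warning; the rest is my own diagnostic):

```python
import ctypes

def can_load(libname: str) -> bool:
    """Return True if the dynamic loader can resolve and open the shared library."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# For me this prints False, consistent with the "cannot open shared object
# file" warning above.
print(can_load("libcustatevec.so.1"))
```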
For clarity, these are the installation steps I undertook:
conda create --name python39pennylane-gpu python=3.9
conda activate python39pennylane-gpu
python -m pip install pennylane
python -m pip install cuquantum-python
python -m pip install pennylane-lightning[gpu]
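To see whether any cuStateVec library files actually landed on disk after these steps, I searched the environment with the sketch below (the search roots, especially the CUDA path, are guesses on my part):

```python
import glob
import os
import site
import sys

def find_libs(pattern: str, roots: list) -> list:
    """Recursively search each existing root directory for files matching pattern."""
    hits = []
    for root in roots:
        if os.path.isdir(root):
            hits.extend(glob.glob(os.path.join(root, "**", pattern), recursive=True))
    return hits

# Search the active environment's site-packages plus a common CUDA install
# location (the /usr/local/cuda path is an assumption).
roots = site.getsitepackages() + ["/usr/local/cuda/lib64"]
print(find_libs("libcustatevec*", roots) or "no libcustatevec found")
```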
Further, here are some relevant details from qml.about():
Name: PennyLane
Version: 0.33.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: ../anaconda2/envs/python39pennylane-gpu/lib/python3.9/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-Lightning-GPU
Platform info: Linux-4.15.0-200-generic-x86_64-with-glibc2.27
Python version: 3.9.18
Numpy version: 1.26.1
Scipy version: 1.11.3
Installed devices:
- default.gaussian (PennyLane-0.33.0)
- default.mixed (PennyLane-0.33.0)
- default.qubit (PennyLane-0.33.0)
- default.qubit.autograd (PennyLane-0.33.0)
- default.qubit.jax (PennyLane-0.33.0)
- default.qubit.legacy (PennyLane-0.33.0)
- default.qubit.tf (PennyLane-0.33.0)
- default.qubit.torch (PennyLane-0.33.0)
- default.qutrit (PennyLane-0.33.0)
- null.qubit (PennyLane-0.33.0)
- lightning.qubit (PennyLane-Lightning-0.33.0)
- lightning.gpu (PennyLane-Lightning-GPU-0.33.0)
The status of my GPU, as displayed by nvidia-smi, is:
Tue Oct 31 20:52:55 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.86 Driver Version: 470.86 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:08:00.0 Off | N/A |
| 0% 32C P8 10W / 260W | 3MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce ... Off | 00000000:86:00.0 Off | N/A |
| 0% 35C P8 21W / 260W | 3MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA GeForce ... Off | 00000000:8A:00.0 Off | N/A |
| 0% 35C P8 1W / 260W | 3MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
I would greatly appreciate any guidance or insights to help address this issue. Thank you!