Lightning.gpu doesn't seem to use GPUs

Hello,

I am trying to use the “lightning.gpu” device to simulate and optimize a 16-qubit variational classifier. When I run my model training and watch nvidia-smi, the process doesn’t seem to use any GPU memory. This is the output of qml.about():

Name: PennyLane
Version: 0.32.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /opt/venv/lib/python3.10/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-Lightning-GPU

Platform info:           Linux-5.15.0-88-generic-x86_64-with-glibc2.35
Python version:          3.10.12
Numpy version:           1.23.5
Scipy version:           1.11.2
Installed devices:
- default.gaussian (PennyLane-0.32.0)
- default.mixed (PennyLane-0.32.0)
- default.qubit (PennyLane-0.32.0)
- default.qubit.autograd (PennyLane-0.32.0)
- default.qubit.jax (PennyLane-0.32.0)
- default.qubit.tf (PennyLane-0.32.0)
- default.qubit.torch (PennyLane-0.32.0)
- default.qutrit (PennyLane-0.32.0)
- null.qubit (PennyLane-0.32.0)
- lightning.gpu (PennyLane-Lightning-GPU-0.32.0)
- lightning.qubit (PennyLane-Lightning-0.32.0)

And the output for nvidia-smi:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.06              Driver Version: 545.23.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA TITAN V                 On  | 00000000:09:00.0 Off |                  N/A |
| 42%   60C    P2              42W / 250W |    353MiB / 12288MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      1403      G   /usr/lib/xorg/Xorg                            4MiB |
|    0   N/A  N/A   2932528      C   /opt/venv/bin/python3                       344MiB |
+---------------------------------------------------------------------------------------+

My quantum circuit is defined this way:

import pennylane as qml
from pennylane import numpy as np

Nq = 16   # number of qubits
n_l = 1   # number of entangling layers

params = np.random.random((n_l, Nq, 3), requires_grad=True)
dev = qml.device("lightning.gpu", wires=Nq)

@qml.qnode(dev)
def Qcircuit(x, params):
    for l in range(n_l):
        # angle-encode the input features
        for i, q in enumerate(x):
            qml.RX(q, wires=i)
        # one strongly entangling layer per iteration
        qml.StronglyEntanglingLayers(params[l].reshape((1, Nq, -1)), wires=range(Nq))

    return [qml.expval(qml.PauliZ(i)) for i in range(Nq)]

Loss function:

def loss_fn(X, y, params):
    # average the 16 PauliZ expectation values to get one prediction per sample
    expvals = np.array([Qcircuit(x, params) for x in X])
    preds = np.sum(expvals, axis=1) / expvals.shape[1]
    loss = np.sum((preds - y) ** 2.0)
    return loss

And training:

opt = qml.AdamOptimizer(stepsize=stepsize)
for i in range(max_it):
    (_, _, params), cost = opt.step_and_cost(loss_fn, X_train, y_train, params)

Is there a way of ensuring that the GPU is being used? Should I change my code to optimize GPU usage?

Thank you for your attention!

Hi @psansebastian,

Thank you for your question!

We released a new version of PennyLane recently, so it would be best if you could upgrade to the latest version.

You can try creating a new virtual environment with Python 3.10 and installing PennyLane, cuStateVec, and PennyLane-Lightning-GPU as follows:

python -m pip install pennylane custatevec-cu11 pennylane-lightning-gpu

If you want to use your existing environment, you just need to add --upgrade at the end of that command.
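For example, upgrading inside the existing environment would look like this (the same packages as above, with the flag appended):

python -m pip install pennylane custatevec-cu11 pennylane-lightning-gpu --upgrade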

I also see that your CUDA version is 12.3, so I would recommend moving to CUDA 11.8, or following the instructions here if you want to stay on version 12.

Note that lightning.gpu only supports NVIDIA GPUs with SM 7.0 (compute capability 7.0) or higher, so it’s possible that your GPU isn’t supported.
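Assuming your driver supports the compute_cap query field (available on recent drivers), you can check your card’s compute capability directly with:

nvidia-smi --query-gpu=name,compute_cap --format=csv

If the reported value is 7.0 or higher, the card meets this requirement.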

Please let me know if you have any additional questions! I hope this helps.


Hello, will there be broad PyTorch/TensorFlow GPU support using lightning.gpu/lightning.kokkos? Thank you. Benchmarking Quantum Algorithms & Parallel Architectures.pdf - Google Drive

Hey @kevinkawchak, it’s something that’s on our radar, but for now it’s a little lower on the priority list :slight_smile: