Hello,
I am trying to use the “lightning.gpu” device to simulate and optimize a 16-qubit variational classifier. When I start training my model and watch nvidia-smi, it seems the process is not using GPU memory. This is the output of qml.about():
Name: PennyLane
Version: 0.32.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /opt/venv/lib/python3.10/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-Lightning-GPU
Platform info: Linux-5.15.0-88-generic-x86_64-with-glibc2.35
Python version: 3.10.12
Numpy version: 1.23.5
Scipy version: 1.11.2
Installed devices:
- default.gaussian (PennyLane-0.32.0)
- default.mixed (PennyLane-0.32.0)
- default.qubit (PennyLane-0.32.0)
- default.qubit.autograd (PennyLane-0.32.0)
- default.qubit.jax (PennyLane-0.32.0)
- default.qubit.tf (PennyLane-0.32.0)
- default.qubit.torch (PennyLane-0.32.0)
- default.qutrit (PennyLane-0.32.0)
- null.qubit (PennyLane-0.32.0)
- lightning.gpu (PennyLane-Lightning-GPU-0.32.0)
- lightning.qubit (PennyLane-Lightning-0.32.0)
And the output of nvidia-smi:
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.06 Driver Version: 545.23.06 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA TITAN V On | 00000000:09:00.0 Off | N/A |
| 42% 60C P2 42W / 250W | 353MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1403 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 2932528 C /opt/venv/bin/python3 344MiB |
+---------------------------------------------------------------------------------------+
My quantum circuit is defined this way:
import pennylane as qml
from pennylane import numpy as np

Nq = 16   # number of qubits
n_l = 1   # number of variational layers
params = np.random.random((n_l, Nq, 3), requires_grad=True)

dev = qml.device("lightning.gpu", wires=Nq)

@qml.qnode(dev)
def Qcircuit(x, params):
    for l in range(n_l):
        # angle-encode the input features
        for i, q in enumerate(x):
            qml.RX(q, wires=i)
        # one strongly entangling layer with this layer's parameters
        qml.StronglyEntanglingLayers(params[l].reshape((1, Nq, -1)), wires=range(Nq))
    return [qml.expval(qml.PauliZ(i)) for i in range(Nq)]
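For context, this is roughly how I evaluate the circuit on a single sample before training (x_sample here is just a random placeholder input, not my real data):

x_sample = np.random.random(Nq)
print(Qcircuit(x_sample, params))   # list of 16 Pauli-Z expectation values
print(dev)                          # should show the LightningGPU device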
Loss function:
def loss_fn(X, y, params):
    # stack the 16 expectation values for every sample in the batch
    expvals = np.array([Qcircuit(x, params) for x in X])
    # prediction per sample: mean of the expectation values
    preds = np.sum(expvals, axis=1) / expvals.shape[1]
    # sum of squared errors against the labels
    loss = np.sum((preds - y) ** 2.0)
    return loss
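So the prediction for each sample is just the mean of the 16 expectation values. For reference, I call the loss on the full training set like this (initial_loss is just an illustrative name; X_train and y_train are my feature matrix and labels):

initial_loss = loss_fn(X_train, y_train, params)
print(initial_loss)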
And training:
opt = qml.AdamOptimizer(stepsize=stepsize)
for i in range(max_it):
    (_, _, params), cost = opt.step_and_cost(loss_fn, X_train, y_train, params)
Is there a way to verify that the GPU is actually being used? Should I change my code to make better use of the GPU?
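For example, would something along these lines be the right direction? This is only a guess from skimming the docs (I have not verified that adjoint differentiation with batched observables is what lightning.gpu expects for this kind of workload):

# guess, not verified: adjoint differentiation + batched observables on lightning.gpu
dev = qml.device("lightning.gpu", wires=Nq, batch_obs=True)

@qml.qnode(dev, diff_method="adjoint")
def Qcircuit(x, params):
    # same body as above
    ...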
Thank you for your attention!