MPI Enabled lightning@v0.32

I would like to try the multi-GPU version of the lightning.gpu backend.
I built the wheel file by following:
https://docs.pennylane.ai/projects/lightning-gpu/en/stable/installation.html#build-pennylane-lightning-gpu-with-multi-node-multi-gpu-support
However, the test fails because the “GPUMPI_C” classes cannot be imported.

```python
import pennylane as qml

from pennylane_lightning_gpu.lightning_gpu_qubit_ops import (
    LightningGPU_C128,
)
from pennylane_lightning_gpu.lightning_gpu_qubit_ops import (
    LightningGPUMPI_C128,
)
```

Running the above gives the following error, whereas LightningGPU_C128 imports successfully:

```
ImportError: cannot import name 'LightningGPUMPI_C128' from 'pennylane_lightning_gpu.lightning_gpu_qubit_ops' (/home/nk/venv/lib/python3.10/site-packages/pennylane_lightning_gpu/lightning_gpu_qubit_ops.cpython-310-x86_64-linux-gnu.so)
```

CMake seems to find MPI compilers and libraries, and “ENABLE MPI” is ON in ccmake.

Is there anything that I need to double-check so that MPI is enabled?
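For context, a guarded import can report at runtime whether the MPI bindings made it into the compiled extension (a minimal sketch, using the v0.32 class names from above; it simply reports False when the package or class is unavailable):

```python
# Probe whether the MPI-enabled bindings were compiled into the extension.
# This is only a diagnostic sketch; the class name follows the v0.32 layout.
try:
    from pennylane_lightning_gpu.lightning_gpu_qubit_ops import (
        LightningGPUMPI_C128,
    )
    mpi_enabled = True
except ImportError:
    # Either pennylane-lightning-gpu is missing, or it was built without MPI.
    mpi_enabled = False

print("MPI bindings available:", mpi_enabled)
```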

```python
import pennylane as qml
qml.about()
```

```
Name: PennyLane
Version: 0.32.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/nk/venv/lib/python3.10/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-Lightning-GPU

Platform info: Linux-5.10.0-25-cloud-amd64-x86_64-with-glibc2.31
Python version: 3.10.12
Numpy version: 1.23.5
Scipy version: 1.11.3
Installed devices:
- default.gaussian (PennyLane-0.32.0)
- default.mixed (PennyLane-0.32.0)
- default.qubit (PennyLane-0.32.0)
- default.qubit.autograd (PennyLane-0.32.0)
- default.qubit.jax (PennyLane-0.32.0)
- default.qubit.tf (PennyLane-0.32.0)
- default.qubit.torch (PennyLane-0.32.0)
- default.qutrit (PennyLane-0.32.0)
- null.qubit (PennyLane-0.32.0)
- lightning.qubit (PennyLane-Lightning-0.32.0)
- lightning.gpu (PennyLane-Lightning-GPU-0.32.0)
```

Hello @nk77 ,

Thank you for your interest in the distributed PennyLane-Lightning-GPU and for reporting this issue.

Could you please let me know whether the MPI library installed on your system is CUDA-aware? Currently, the distributed PennyLane-Lightning-GPU is built on top of CUDA-aware MPI.

If you have OpenMPI installed, you can run the following command:

```
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
```

Your MPI is CUDA-aware if you get the following output:

```
mca:mpi:base:param:mpi_built_with_cuda_support:value:true
```
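The same check can be scripted from Python (a sketch; it returns None when `ompi_info` is not on the PATH, e.g. for MPICH installs, so it is safe to run anywhere):

```python
import shutil
import subprocess

def openmpi_cuda_aware():
    """Return True/False for OpenMPI's CUDA support, or None if ompi_info
    is unavailable (e.g. a non-OpenMPI install such as MPICH)."""
    if shutil.which("ompi_info") is None:
        return None
    out = subprocess.run(
        ["ompi_info", "--parsable", "--all"],
        capture_output=True, text=True,
    ).stdout
    # Look for the mca:mpi:base:param:mpi_built_with_cuda_support:value line.
    for line in out.splitlines():
        if "mpi_built_with_cuda_support:value" in line:
            return line.strip().endswith("true")
    return None

print("CUDA-aware OpenMPI:", openmpi_cuda_aware())
```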

Best,

Shuli

Thank you very much for your rapid reply.
I am actually using MPICH, so I cannot run that check. Instead, I confirmed that my MPICH works with CUDA-aware MPI using [NCAR/mpi_cuda_hello](https://github.com/NCAR/mpi_cuda_hello), a test program for basic CUDA and CUDA-aware MPI functionality.

```
$ mpirun -np 4 ./hello
----- ----- -----
Using 4 MPI Ranks and GPUs
----- ----- -----
Message before GPU computation: xxxxxxxxxxxx
----- ----- -----
rank 0 on host v100-4gpus, CPU: 2, GPU: 0, UUID: GPU-0fa1e452-35cd-5fb9-92fe-4dcdc5271c08
rank 1 on host v100-4gpus, CPU: 6, GPU: 0, UUID: GPU-0fa1e452-35cd-5fb9-92fe-4dcdc5271c08
rank 2 on host v100-4gpus, CPU: 4, GPU: 0, UUID: GPU-0fa1e452-35cd-5fb9-92fe-4dcdc5271c08
rank 3 on host v100-4gpus, CPU: 5, GPU: 0, UUID: GPU-0fa1e452-35cd-5fb9-92fe-4dcdc5271c08
----- ----- -----
 Message after GPU computation: Hello World!
----- ----- -----
 TEST SUCCESSFUL 
----- ----- -----
```

It seems that all processes use the same GPU, but that might be OK in this context.

Hi @nk77 ,

Thanks for the information. Yes, it should be fine in this context. Could you let us know if the editable installation (see steps below) works for your setup? Could you also check your CUDA version and let us know?

  1. Create and activate a new virtual env.
  2. Install all required Python packages (mpi4py, cuquantum, and so on). Note that cuquantum has cuda-11 and cuda-12 options; please choose the one compatible with the CUDA version installed on your system.
  3. Navigate to the pennylane-lightning-gpu directory and build the pennylane-lightning-gpu module from its C++ source code with:
    python setup.py build_ext --define="PLLGPU_ENABLE_MPI=ON;CUQUANTUM_SDK=<path to sdk>" -v
    Don’t forget to replace <path to sdk> above with the path on your system.
  4. Then perform an editable-mode installation with:
    python -m pip install -e .
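After step 2, a short mpi4py script can confirm the MPI runtime itself is usable before building the extension (a sketch; it assumes mpi4py was installed in step 2 and simply reports when it is missing; a real multi-rank run would use `mpirun -np 2 python check_mpi.py`):

```python
# Post-install sanity check for the MPI runtime (diagnostic sketch only).
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    # Under mpirun each rank prints its own line; a plain `python` run
    # reports a single rank 0 of 1.
    print(f"rank {comm.Get_rank()} of {comm.Get_size()}")
    mpi_ok = True
except ImportError:
    mpi_ok = False
    print("mpi4py is not installed in this environment")
```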

Best,

Shuli

Thank you very much. It worked perfectly.
What was the main issue?

```
$ mpirun -np 2 python -m pytest mpitests/
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.2, pluggy-1.3.0
rootdir: /home/nk/SRC/pennylane-lightning-gpu
collected 1635 items

mpitests/test_adjoint_jacobian.py ============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.2, pluggy-1.3.0
rootdir: /home/nk/SRC/pennylane-lightning-gpu
collected 1635 items

mpitests/test_adjoint_jacobian.py ................
```

(Each of the two MPI ranks prints its own pytest session banner, which is why the header appears twice.)
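With the build working, a minimal distributed run can be sketched as follows (an illustration, not the official example: it assumes an MPI-enabled pennylane-lightning-gpu build and would be launched with e.g. `mpirun -np 2 python run.py`; here the failure path is also handled so the script degrades gracefully on machines without GPUs):

```python
# Minimal end-to-end sketch of the distributed lightning.gpu device.
try:
    import pennylane as qml

    # `mpi=True` requests the distributed (multi-GPU) state-vector backend.
    dev = qml.device("lightning.gpu", wires=8, mpi=True)

    @qml.qnode(dev)
    def circuit():
        qml.Hadamard(wires=0)
        qml.CNOT(wires=[0, 1])
        return qml.expval(qml.PauliZ(0))

    result = circuit()
except Exception as exc:  # no MPI-enabled build / no GPU available
    result = None
    print("distributed device unavailable:", exc)

print("result:", result)
```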

Hi @nk77 ,

Thanks for the update. I’m glad that it works on your system. The main issue was likely the environment; I would recommend creating a new virtual environment when installing PennyLane-Lightning-GPU to avoid conflicts.

I would also like to let you know that we are working on moving Lightning-GPU into the [Lightning](https://github.com/PennyLaneAI/pennylane-lightning) repository. Once that is done, we encourage you to give it a try.

Best,

Shuli