KeyError: ('torch', 'linalg.eigh') when using PennyLane with PyTorch

Description:

I’m encountering an issue where importing PyTorch causes PennyLane to throw a KeyError. When I use PennyLane on its own, everything works fine, but after installing PyTorch, I receive an error when running a simple PennyLane script.

Steps to Reproduce:

  1. Create a conda environment and install packages:
conda create --name pennylane_test python=3.9

conda activate pennylane_test

conda install pennylane

conda install pytorch torchvision lightning -c pytorch

PennyLane on its own works fine; I tested this with a dummy PennyLane Python example as well as my own project. The problems only started after I installed PyTorch.

  2. Create a Test Script (test.py):
import pennylane as qml

# Define a simple quantum device
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit():
    qml.PauliX(wires=0)
    return qml.expval(qml.PauliZ(0))

print(circuit())
  3. Run the Script:
python test.py

Expected Behavior:

The script should output -1.0 without any errors.

Actual Behavior:

I receive the following error when running the script:

File "______", line 1, in <module>
  import pennylane as qml
File "______\pennylane\lib\site-packages\pennylane\__init__.py", line 29, in <module>
  import pennylane.kernels
File "______\pennylane\lib\site-packages\pennylane\kernels\__init__.py", line 18, in <module>
  from .cost_functions import (
File "______\pennylane\lib\site-packages\pennylane\kernels\cost_functions.py", line 19, in <module>
File "______\pennylane\lib\site-packages\pennylane\math\__init__.py", line 36, in <module>
  from .multi_dispatch import (
File "______\pennylane\lib\site-packages\pennylane\math\multi_dispatch.py", line 24, in <module>
  from . import single_dispatch  # pylint:disable=unused-import
File "______\pennylane\lib\site-packages\pennylane\math\single_dispatch.py", line 329, in <module>
  del ar.autoray._FUNCS["torch", "linalg.eigh"]
KeyError: ('torch', 'linalg.eigh')
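
For context on what is failing here: the last traceback line deletes an entry from autoray's function-dispatch table, and an unconditional del raises KeyError if that ("torch", "linalg.eigh") key was never registered. This can happen when an old PennyLane release (0.23) meets a newer autoray pulled in alongside PyTorch. A minimal, self-contained sketch of that failure mode (the registry dict below is hypothetical, not autoray's actual table):

```python
# Minimal illustration of the failure mode: deleting a dict key that was
# never registered raises KeyError. PennyLane 0.23 unconditionally runs
# `del ar.autoray._FUNCS["torch", "linalg.eigh"]`, which fails if the
# installed autoray never added that entry.
registry = {("torch", "eye"): "torch.eye"}  # hypothetical dispatch table

try:
    del registry[("torch", "linalg.eigh")]  # key absent -> KeyError
except KeyError as exc:
    print("KeyError:", exc)  # KeyError: ('torch', 'linalg.eigh')

# A version-tolerant pattern uses pop() with a default instead:
registry.pop(("torch", "linalg.eigh"), None)  # no error if the key is missing
```

Newer PennyLane releases avoid this pattern, which is why upgrading fixes the import.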

Additional Information:

  • Operating System: Windows 10
  • Python Version: 3.9.20
    I tried different versions of Python: 3.9, 3.10, 3.12, 3.13
  • Conda Version: 24.9.2
  • PennyLane Version: 0.23.0
  • PyTorch Version: 2.3.1

Is this a known issue with compatibility between PennyLane and PyTorch? Also, do you have any suggestions on how to resolve this issue?

Thank you for your assistance!

Hi @Lazarus, welcome to the Forum!

I noticed a few things. I’ll add them as a list here for easier reference:

  1. Please use Python 3.10-3.12 since these are the currently supported versions.
  2. Please use the latest PennyLane version (v0.39 at the moment).
  3. While we recommend creating a new virtual environment with venv, Conda, or Miniconda, we generally recommend installing all libraries with pip instead of conda. Conda will sometimes mix up package installations and versions, so pip is more reliable here.
    E.g., the code below should be enough:
conda create --name pennylane_test python=3.10
conda activate pennylane_test
python -m pip install pennylane torch
  4. I noticed you’re installing lightning. Note that the lightning.qubit device already comes pre-installed with PennyLane so you don’t need to install any additional library to use it.
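
After installing, it can also help to confirm which versions actually ended up in the environment. A quick sanity check (these are generic pip/Python commands, not PennyLane-specific tooling):

```shell
# Check the versions pip actually installed (works on Windows and Linux)
python -m pip show pennylane torch

# Or print them from Python directly
python -c "import pennylane, torch; print(pennylane.__version__, torch.__version__)"
```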

Let me know if this solves your issues!


Hi Catalina,

The “lightning” I installed refers to PyTorch Lightning, not lightning.qubit. You can find more about it here: PyTorch Lightning.

Thanks for your response and suggestions! I wanted to share my findings regarding the issue:

  • It’s odd that Conda installed an older version of PennyLane (0.23). I didn’t specify a version, so I assumed it would install the latest one. To investigate, I installed PennyLane on its own via Conda in a fresh environment, to see whether the package manager was responsible for the downgrade. It still installed version 0.23, and when I tried to force version 0.39, Conda couldn’t find it on conda-forge or any other channel I tried.
  • Using pip instead worked perfectly. Pip installed the latest version (0.39), and everything is now working as expected.
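
For anyone who hits the same mismatch, it is easy to confirm what conda can actually see versus what pip provides (standard conda/pip commands; the channel shown is just the one tried above):

```shell
# List the PennyLane versions conda can resolve on conda-forge;
# if 0.39 is absent here, conda simply cannot install it from that channel.
conda search -c conda-forge pennylane

# pip pulls from PyPI instead, where the latest release is published:
python -m pip install --upgrade pennylane
```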

Thanks again for your help!

Thanks for sharing your findings @Lazarus !

Let us know if you have any other questions.