Performant combination of PennyLane and TensorFlow

Hello!

I am currently working on a project that involves doing many batched simulations of a quantum circuit in order to optimize controls. We use TensorFlow for the optimization and have written a rudimentary custom state vector simulation for our circuits of interest, as TensorFlow Quantum does not allow differentiation of the state (as far as we know). Recently we found out about PennyLane and it seems like a great fit for replacing our custom simulator.
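
For concreteness, the kind of custom simulator I mean can be sketched as a tiny differentiable single-qubit RX simulation in TensorFlow (illustrative only; the function names here are made up, and our real code handles batching and multi-qubit circuits):

```python
import tensorflow as tf

def rx_matrix(theta):
    # Single-qubit RX(theta) built from TF ops so it stays differentiable
    c = tf.cos(theta / 2.0)
    s = tf.sin(theta / 2.0)
    zero = tf.zeros_like(c)
    return tf.complex(tf.stack([[c, zero], [zero, c]]),
                      tf.stack([[zero, -s], [-s, zero]]))

def expval_z(theta):
    # Apply RX to |0> and return <Z> of the resulting state vector
    ket0 = tf.constant([1.0 + 0.0j, 0.0 + 0.0j], dtype=tf.complex64)
    state = tf.linalg.matvec(rx_matrix(theta), ket0)
    probs = tf.abs(state) ** 2
    return probs[0] - probs[1]

theta = tf.Variable(1.0)
with tf.GradientTape() as tape:
    val = expval_z(theta)         # <Z> = cos(theta)
grad = tape.gradient(val, theta)  # d<Z>/dtheta = -sin(theta)
```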

I’ve been researching for a few days now to see whether PennyLane would be a simple drop-in replacement for us or whether it would require extensive rewrites of the workflow. I’m mainly looking for some clarifications and advice here.

The part I’m currently stuck on is figuring out the most performant way to use PennyLane with TensorFlow. In the demos I’ve seen, the code is either run in eager mode or uses the Keras layer, neither of which really helps us, as we do not use Keras. With the custom simulator we wrap everything in tf.function and turn jit_compile on to use XLA, so one of my first questions is whether QNodes can also handle this.
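
For context, the pattern we use with the custom simulator looks roughly like this, with a stand-in cost function in place of the actual circuit simulation:

```python
import tensorflow as tf

@tf.function(jit_compile=True)  # trace to a graph and compile it with XLA
def cost(x):
    # Stand-in for our batched circuit simulation and loss
    return tf.reduce_sum(tf.sin(x) ** 2)

x = tf.Variable([0.1, 0.2, 0.3])
with tf.GradientTape() as tape:
    y = cost(x)
g = tape.gradient(y, x)  # gradients flow through the compiled function
```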

Following this pull request I assumed it would work fine, but when I tried it locally it failed for ‘default.mixed’ and threw warnings for ‘default.qubit’ (see below). I’m not sure whether this is intended, a bug in PennyLane, or a problem with my setup.

But that’s also a bit beside the point. On a high level, could you please give me some advice on how to combine TensorFlow and PennyLane in a performant manner, e.g. which devices are suitable for XLA optimization? If this is not really an intended use case, I would guess my next step should be looking into JAX and Catalyst, right?

Code example mentioned above:

import tensorflow as tf
import pennylane as qml

dev = qml.device("default.qubit", wires=1)
@qml.qnode(dev, diff_method="backprop", interface="tf")
def circuit(p):
    qml.RX(p, wires=0)
    return qml.expval(qml.PauliZ(0))

x = tf.Variable(1.0)
tf.function(circuit)(x)

Here are the warnings I get from the last line:

WARNING:tensorflow:AutoGraph could not transform <function circuit at 0x7fd82b715260> and will run it as-is.
Cause: Unable to locate the source code of <function circuit at 0x7fd82b715260>. Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.experimental.do_not_convert. Original error: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function circuit at 0x7fd82b715260> and will run it as-is.
Cause: Unable to locate the source code of <function circuit at 0x7fd82b715260>. Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.experimental.do_not_convert. Original error: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <bound method DefaultQubit._setup_execution_config of <default.qubit device (wires=1) at 0x7fd89d95b0e0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: (<gast.gast.NamedExpr object at 0x7fd844522710>, (mcm_method := mcm_config.mcm_method))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <bound method DefaultQubit._setup_execution_config of <default.qubit device (wires=1) at 0x7fd89d95b0e0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: (<gast.gast.NamedExpr object at 0x7fd844522710>, (mcm_method := mcm_config.mcm_method))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform TransformProgram(validate_device_wires, mid_circuit_measurements, decompose, validate_measurements, _conditional_broastcast_expand, no_sampling) and will run it as-is.
Cause: mangled names are not yet supported
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform TransformProgram(validate_device_wires, mid_circuit_measurements, decompose, validate_measurements, _conditional_broastcast_expand, no_sampling) and will run it as-is.
Cause: mangled names are not yet supported
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform TransformProgram() and will run it as-is.
Cause: mangled names are not yet supported
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform TransformProgram() and will run it as-is.
Cause: mangled names are not yet supported
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert

I’m running this in WSL2 with a Python 3.12 conda environment. TensorFlow is installed at version 2.16.2, which should be supported according to this development guide. Here is my output of qml.about() as well.

Name: PennyLane
Version: 0.41.1
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/elly/miniconda3/envs/pennylane/lib/python3.12/site-packages
Requires: appdirs, autograd, autoray, cachetools, diastatic-malt, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, tomlkit, typing-extensions
Required-by: PennyLane_Lightning, PennyLane_Lightning_GPU

Platform info:           Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Python version:          3.12.0
Numpy version:           1.26.4
Scipy version:           1.15.2
Installed devices:
- lightning.qubit (PennyLane_Lightning-0.41.1)
- default.clifford (PennyLane-0.41.1)
- default.gaussian (PennyLane-0.41.1)
- default.mixed (PennyLane-0.41.1)
- default.qubit (PennyLane-0.41.1)
- default.qutrit (PennyLane-0.41.1)
- default.qutrit.mixed (PennyLane-0.41.1)
- default.tensor (PennyLane-0.41.1)
- null.qubit (PennyLane-0.41.1)
- reference.qubit (PennyLane-0.41.1)
- lightning.gpu (PennyLane_Lightning_GPU-0.41.1)

Hi @ellyhae ,

It’s nice to see that PennyLane might help you!

If your goal is to do efficient large-scale simulations, then I’d definitely recommend using JAX and Catalyst. While we do support the TensorFlow interface, it’s not as performant, and it will become less and less compatible with new (more performant) features over time.

If you have any issues migrating along the way, please feel free to let us know here; we’ll be happy to support you. And if there’s a feature you’d like to see in PennyLane and Catalyst that isn’t available yet, that’s valuable feedback too.

We have a lot of content on the PennyLane website that can help you get started with using PennyLane + JAX. Take a look here and let us know how it goes!

Thank you for the response!

At the moment we only use noise-free qubit circuits, so a PennyLane + Catalyst + JAX setup would likely work. But given the effort required to rewrite the existing TensorFlow workflow, it becomes important to consider our possible future requirements:

Noisy Simulation: As far as I can tell, none of the devices currently supported by qjit can be used for noisy circuits. While the “qrack.simulator” plugin touts support for all PennyLane operations and observables, which I assumed to include noise channels, a quick test immediately told me it’s not supported. Please correct me if it should be working.

Qudit Simulation: This is farther off but might become relevant at some point. So it’s no issue that it’s not supported yet, but I am curious whether there is interest in developing in that direction.

I’d also like to mention that while looking through the PennyLane devices I found some plugins (Qulacs and Cirq) with broken tutorial links. The issue is simply that the links include “.html” at the end.

Hi @ellyhae ,

Re: noisy simulations, I believe it’s possible with the Qrack simulator, but it’s tricky. The best person to ask about this is Dan Strano from the Unitary Foundation, since he’s the one who built this device. Let me check with him to see if he can give us more info about this.

Qudit simulation isn’t supported and there are no plans to support it.

And thanks for pointing out these broken links! We’ll make sure to fix them.