Hello!
I am currently working on a project that involves running many batched simulations of a quantum circuit in order to optimize controls. We use TensorFlow for the optimization and have written a rudimentary custom state-vector simulator for our circuits of interest, since TensorFlow Quantum does not allow differentiation of the state (as far as we know). Recently we found out about PennyLane, and it seems like a great fit for replacing our custom simulator.
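To make this concrete, here is a heavily stripped-down sketch of what our custom simulator does (a single qubit with a batch of RX controls; the real circuits are of course larger):

import tensorflow as tf

def simulate(thetas):
    """Apply RX(theta) to |0> for a whole batch of control parameters.

    thetas: float32 tensor of shape (batch,); returns final states of shape (batch, 2).
    Built entirely from differentiable TF ops, so gradients flow through the state.
    """
    c = tf.complex(tf.cos(thetas / 2.0), tf.zeros_like(thetas))
    s = tf.complex(tf.zeros_like(thetas), -tf.sin(thetas / 2.0))
    # Batched RX matrices, shape (batch, 2, 2)
    mats = tf.stack([tf.stack([c, s], axis=-1),
                     tf.stack([s, c], axis=-1)], axis=-2)
    psi0 = tf.constant([1.0 + 0.0j, 0.0 + 0.0j], dtype=tf.complex64)
    return tf.linalg.matvec(mats, psi0)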
I’ve been researching for a few days now to figure out whether PennyLane would be a simple drop-in replacement for us, or whether it would require extensive rewrites of our workflow. I’m mainly looking for some clarification and advice here.
The part where I’m currently stuck is figuring out the most performant way to use PennyLane with TensorFlow. In the demos I’ve seen, the code is either run in eager mode or uses the Keras layer, which doesn’t really help us since we don’t use Keras. With the custom simulator we wrap everything in tf.function with jit_compile=True to use XLA, so one of my first questions is whether QNodes can handle this as well.
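Concretely, this is the pattern we use with the custom simulator and would like to reproduce with a QNode (continuing the simulate sketch from above; the cost function here is just a placeholder):

fast_simulate = tf.function(simulate, jit_compile=True)  # XLA-compiled batched simulator

thetas = tf.Variable(tf.random.uniform([256]))  # batch of control parameters
with tf.GradientTape() as tape:
    states = fast_simulate(thetas)
    # placeholder cost: average infidelity with respect to |1>
    loss = 1.0 - tf.reduce_mean(tf.abs(states[:, 1]) ** 2)
grads = tape.gradient(loss, [thetas])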
Following this pull request I assumed it would work fine, but when I tried it locally it failed for ‘default.mixed’ and threw warnings for ‘default.qubit’ (see below). I’m not sure whether this is intended behaviour, a bug in PennyLane, or a problem with my setup.
But that is also a bit beside the point. On a high level, could you please give me some advice on how to combine TensorFlow and PennyLane in a performant manner, e.g. which devices are suitable for XLA compilation? If this is not really an intended use case, I would guess my next step should be looking into JAX and Catalyst, right?
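In case it matters for your answer, the JAX/Catalyst version I have in mind would look roughly like this (adapted from the Catalyst documentation; I have not tried it yet):

import pennylane as qml
from catalyst import qjit
from jax import numpy as jnp

dev = qml.device("lightning.qubit", wires=1)

@qjit
@qml.qnode(dev)
def catalyst_circuit(p):
    qml.RX(p, wires=0)
    return qml.expval(qml.PauliZ(0))

catalyst_circuit(jnp.array(1.0))
# gradients would then go through catalyst.grad inside a qjit-compiled workflow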
Code example mentioned above:
import tensorflow as tf
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, diff_method="backprop", interface="tf")
def circuit(p):
    qml.RX(p, wires=0)
    return qml.expval(qml.PauliZ(0))

x = tf.Variable(1.0)
tf.function(circuit)(x)
Here are the warnings I get from the last line:
WARNING:tensorflow:AutoGraph could not transform <function circuit at 0x7fd82b715260> and will run it as-is.
Cause: Unable to locate the source code of <function circuit at 0x7fd82b715260>. Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.experimental.do_not_convert. Original error: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <bound method DefaultQubit._setup_execution_config of <default.qubit device (wires=1) at 0x7fd89d95b0e0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: (<gast.gast.NamedExpr object at 0x7fd844522710>, (mcm_method := mcm_config.mcm_method))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform TransformProgram(validate_device_wires, mid_circuit_measurements, decompose, validate_measurements, _conditional_broastcast_expand, no_sampling) and will run it as-is.
Cause: mangled names are not yet supported
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform TransformProgram() and will run it as-is.
Cause: mangled names are not yet supported
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
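For completeness, the ‘default.mixed’ failure mentioned above comes from essentially the same snippet, with only the device swapped out:

import tensorflow as tf
import pennylane as qml

dev_mixed = qml.device("default.mixed", wires=1)

@qml.qnode(dev_mixed, diff_method="backprop", interface="tf")
def circuit_mixed(p):
    qml.RX(p, wires=0)
    return qml.expval(qml.PauliZ(0))

x = tf.Variable(1.0)
tf.function(circuit_mixed)(x)  # this call raises an error instead of just warning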
I’m running this in WSL2 with a Python 3.12 Conda environment. TensorFlow 2.16.2 is installed, which should be supported according to this development guide. Here is also my output of qml.about():
Name: PennyLane
Version: 0.41.1
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/elly/miniconda3/envs/pennylane/lib/python3.12/site-packages
Requires: appdirs, autograd, autoray, cachetools, diastatic-malt, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, tomlkit, typing-extensions
Required-by: PennyLane_Lightning, PennyLane_Lightning_GPU
Platform info: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Python version: 3.12.0
Numpy version: 1.26.4
Scipy version: 1.15.2
Installed devices:
- lightning.qubit (PennyLane_Lightning-0.41.1)
- default.clifford (PennyLane-0.41.1)
- default.gaussian (PennyLane-0.41.1)
- default.mixed (PennyLane-0.41.1)
- default.qubit (PennyLane-0.41.1)
- default.qutrit (PennyLane-0.41.1)
- default.qutrit.mixed (PennyLane-0.41.1)
- default.tensor (PennyLane-0.41.1)
- null.qubit (PennyLane-0.41.1)
- reference.qubit (PennyLane-0.41.1)
- lightning.gpu (PennyLane_Lightning_GPU-0.41.1)