How to use PassthruQNode with qml.qnn.KerasLayer

I built the following network with this code:

import pennylane as qml
import tensorflow as tf

n_qubits = 8
layers = 10
data_dimension = 8
dev = qml.device("default.qubit.tf", wires=n_qubits)

def create_quantum_model():
    @qml.qnode(dev, interface='tf', diff_method='backprop')
    def qnode(weights, inputs=None):
        # Encode the 256 flattened pixels into the amplitudes of 8 qubits (2^8 = 256)
        qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits), normalize=True)
        qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

    weight_shapes = {"weights": (layers, n_qubits, 3)}

    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=[16, 16]),
        qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits),
        tf.keras.layers.Dense(4, activation='softmax')
    ])
    opt = tf.keras.optimizers.Adam(0.03)
    model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=["accuracy"])
    return model

How to use PassthruQNode with qml.qnn.KerasLayer in the above code?

Hi @Cc1, welcome to the Forum!

I’m not sure what you mean by PassthruQNode. Could you please explain?

If you need ideas on how to use PennyLane with Keras you can follow this tutorial.

I hope this helps!

Thanks for your reply, sorry I just saw your message now.
I noticed PassthruQNode in this post, but I don’t know how to use it with qml.qnn.

@Cc1 the PassthruQNode is from an older version of PL, and no longer exists 🙂

What error message are you getting with your code?

My current quantum neural network is slow; it takes about 800 s to run an epoch, so I want to try to speed it up.

Hi @Cc1, maybe you can try using diff_method='adjoint'. What execution time do you get in this case?
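
For reference, a minimal sketch of what that change could look like on the QNode from your original code (only the diff_method changes; everything else stays the same):

@qml.qnode(dev, interface='tf', diff_method='adjoint')  # adjoint instead of backprop
def qnode(weights, inputs=None):
    qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits), normalize=True)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]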

Thank you for your reply, when I try to use diff_method='adjoint' I get an error message.

I don’t know why I get such an error because my datatype is

dtype=float32

cannot compute Mul as input #1(zero-based) was expected to be a float tensor but is a double tensor [Op:Mul]

Hi @Cc1,

You can try adding the following line to your code:

tf.keras.backend.set_floatx('float32')

If this doesn’t work then maybe try

tf.keras.backend.set_floatx('float64')

And convert your dtype to float64.
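
As a sketch of both options (X_train below is just an illustrative stand-in for your own feature array, not something from your code):

import tensorflow as tf
import numpy as np

# Illustrative data, standing in for your real features
X_train = np.random.rand(100, 16, 16).astype(np.float32)

# Either keep everything in float32 ...
# tf.keras.backend.set_floatx('float32')

# ... or switch Keras to float64 and cast your data to match
tf.keras.backend.set_floatx('float64')
X_train = X_train.astype(np.float64)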

Please let me know if this solves your issue!

Thank you for your reply. I tried adding each of the two lines above right before quantum_model.fit(), but neither of them seems to work.

quantum_model = create_quantum_model()
tf.keras.backend.set_floatx('float64')
quantum_model.fit()

quantum_model = create_quantum_model()
tf.keras.backend.set_floatx('float32')
quantum_model.fit()

Hi @Cc1,

Does it work if you add this to the very beginning of your notebook, just after importing TensorFlow as tf?

If you follow this tutorial as-is, does it work?

And can you please post the output of qml.about()?

Thanks a lot for your help, it worked after I put
tf.keras.backend.set_floatx('float64') on the line right after import tensorflow as tf. But trying to use diff_method='adjoint' runs slower.
I added some samples for training. Running an epoch with diff_method='backprop' takes about 1598 s, while diff_method='adjoint' seems to take 1:27:29.
In addition, the output information of qml.about() is as follows:

Name: PennyLane
Version: 0.24.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: c:\users\cc.conda\envs\ten_penne\lib\site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, retworkx, scipy, semantic-version, toml
Required-by: PennyLane-Lightning

Platform info: Windows-10-10.0.19041-SP0
Python version: 3.7.0
Numpy version: 1.19.5
Scipy version: 1.7.3
Installed devices:

  • default.gaussian (PennyLane-0.24.0)
  • default.mixed (PennyLane-0.24.0)
  • default.qubit (PennyLane-0.24.0)
  • default.qubit.autograd (PennyLane-0.24.0)
  • default.qubit.jax (PennyLane-0.24.0)
  • default.qubit.tf (PennyLane-0.24.0)
  • default.qubit.torch (PennyLane-0.24.0)
  • lightning.qubit (PennyLane-Lightning-0.24.0)

Hi @Cc1,

Usually the fastest combination is using lightning.qubit as your device and diff_method='adjoint'.
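
As a rough sketch of that combination, adapting the device and QNode from your original code (the rest of create_quantum_model() would stay the same):

dev = qml.device("lightning.qubit", wires=n_qubits)  # lightning.qubit instead of default.qubit.tf

@qml.qnode(dev, interface='tf', diff_method='adjoint')
def qnode(weights, inputs=None):
    qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits), normalize=True)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]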

I also see that you have a relatively old version of PennyLane. You can upgrade to the latest version of PennyLane by using python -m pip install pennylane --upgrade

We will release a new version on Tuesday, so you may want to wait until then. Note that some parts of your code may break because of breaking changes introduced in the past few releases.

Please let me know if you have any questions!

Thank you for your help. When I run an epoch with diff_method='adjoint' the time is about 28 minutes.
The time for diff_method='backprop' is also about 28 minutes.

Hi @Cc1,
Unfortunately, if you already use lightning.qubit as your device and diff_method='adjoint' then it will be hard to reduce your execution time. Your problem may just be too large.

You can try reducing the number of qubits or switching from StronglyEntanglingLayers to a different ansatz (one possibility is sketched below). This may lower the execution time, but it may also lower your accuracy.
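
For example, just as a sketch and not a recommendation for your specific dataset, you could swap in a lighter template such as qml.templates.BasicEntanglerLayers, which uses one rotation per qubit per layer instead of three, so the weight shape changes accordingly:

@qml.qnode(dev, interface='tf', diff_method='adjoint')
def qnode(weights, inputs=None):
    qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits), normalize=True)
    qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))  # lighter ansatz
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weight_shapes = {"weights": (layers, n_qubits)}  # one rotation angle per qubit per layer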

Please let me know if you have any further questions about this.

I’m thinking about optimizing my model. Anyway, thanks a lot for your help.

I’m glad I could help @Cc1!

I hope you can find a way to optimize your model. You can also try using a different interface, so using JAX and JIT instead of TensorFlow. This may require a lot of work though or it may not be possible in your particular case.
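
As a very rough sketch of that direction (this is not a drop-in replacement for the Keras pipeline, the cost function below is purely illustrative, and depending on your PennyLane version jitting the QNode may need some adjustment):

import jax
import jax.numpy as jnp
import pennylane as qml

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface='jax')
def qnode(weights, inputs):
    qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits), normalize=True)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

@jax.jit  # JIT-compile the cost, including the circuit evaluation
def cost(weights, inputs, targets):
    preds = jnp.stack(qnode(weights, inputs))
    return jnp.mean((preds - targets) ** 2)  # illustrative mean-squared-error cost

grad_fn = jax.jit(jax.grad(cost))  # gradients can be jitted as well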

Please let us know if you have any further questions.

Enjoy using PennyLane!

Thank you for your reply. I noticed that "lightning.gpu" can be used for acceleration, but when I try to install it I get some errors. Does "lightning.gpu" only work under Linux?
If it only runs under Linux, then I may stick with lightning.qubit, but I don't know how to make lightning use more threads. I found that when I run my tasks, the CPU and memory usage is not very high.

Hi @Cc1 ,

Thank you for your question. I see that you have opened a similar question in this thread so I will answer there!