Who made the QML TensorFlow Demos?

@glassnotes, could someone who was part of the QML TensorFlow demos please reply?

I am most interested in https://pennylane.ai/qml/demos/tutorial_qnn_module_tf.html

I have been trying to save regular TensorFlow .pb Keras models, but identifying custom layers does not seem to be supported yet. See the issue at

It seems that saving the weights is a workaround, but that brings up a new issue of re-creating the model with regular Keras layers.

Hi @jerteach! This is a known limitation of the qml.qnn.KerasLayer class—we currently do not support saving TensorFlow/Keras models.

However, this is something we are working on to support :slightly_smiling_face:

Thanks @josh. It just occurred to me that I don’t want to save a Quantum Keras Layer, I want to save a Dense (or some other standard) Keras Layer, that has been trained by a quantum computer. Then I would have no problem saving it as a standard Keras file and then it could be ported to other Machine Learning platforms.

Does that make sense? Does PennyLaneAI have any examples that are even close to this idea? I realise that with the present number of Qubits this would have no immediate useful purpose, but as Qubit numbers increase this might offer a speed advantage for training Classical Machine Learning models.

Hi @jerteach!

I want to save a Dense (or some other standard) Keras Layer, that has been trained by a quantum computer.

You may have to go into more detail here, I’m not 100% sure I follow :slight_smile: How are you thinking of training the standard Keras layer?

@josh. Let's see if I can explain.

I simplify Machine Learning model creation for High School students. I am not even trying to get everything perfectly accurate; I am just trying to get kids excited about their future tech opportunities. Since I already have students who can make Machine Learning models for the web (TensorflowJS) and for Arduinos (TensorflowMicro), I thought it would make sense to show how Quantum Computing might be able to optimize one of the layers of a Keras model, that we could then put on the Web or an Arduino.

I believe PennyLaneAI is ahead of Tensorflow at abstracting away some of the difficulties of Quantum Computing. The example at https://pennylane.ai/qml/demos/tutorial_qnn_module_tf.html is fairly close to what I need. However, it is not optimizing a Keras layer; it is injecting a Quantum Layer into a Keras model.

I want the Quantum Computer to optimize a small Keras layer, and yes, I have no idea how to do that.

I have taken the above example and done all the other parts I need.

It has generic data entry and a fairly recognizable Keras format; I can save the model weights, load them onto a regular Keras model, and then save that model in Keras format. The only part I can’t do is make the Quantum Computer train a fully connected Keras layer.
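
Roughly, the save-and-reload part looks like this (a sketch only: the variable names follow the demo, the layer sizes are placeholders, and the quantum layer is simply left out of the all-classical copy):

# Save just the weights of the trained hybrid model (full-model saving fails on the custom layer)
model.save_weights("hybrid_weights.h5")

# Copy the trained classical layers into a model built only from standard Keras layers
plain_model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(2, input_shape=(2,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
plain_model.layers[0].set_weights(clayer_1.get_weights())
plain_model.layers[1].set_weights(clayer_2.get_weights())

# The all-classical model saves in the standard Keras format without any problem
plain_model.save("classical_model.h5")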

Note: I am not looking for a speed improvement or better results. I realize that Quantum Computers are going to change dramatically in the next 20 years. I am just trying to show my students that a Quantum Computer can be used to optimize regular Machine Learning models, like the ones they are already familiar with.

Perhaps Quantum Computers can’t optimize Keras layers and that is ok as well.

Hey @jerteach!

This is an interesting idea to think about. Unfortunately, PennyLane doesn’t currently offer this capability. At present, one can create hybrid quantum-classical models, but training of both quantum and classical elements is performed classically. For example, a QNode can be trained by obtaining its gradient using the parameter-shift rule, where the quantum circuit is evaluated with a forward and backward shift. How the gradient is then handled (e.g., gradient descent, Adam optimizers, etc.) is still a classical task. Note that there are some “quantum-aware” optimizers like the QNGOptimizer, which access additional information about the quantum circuit, but right now we don’t have a quantum optimizer designed to train a classical model.
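
To make the parameter-shift idea concrete, here is a rough sketch with a single-parameter circuit (illustrative only; the machinery PennyLane uses internally is more general):

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

theta = 0.4
shift = np.pi / 2

# Parameter-shift rule: the gradient comes from two circuit evaluations,
# one with the parameter shifted forward and one shifted backward
grad = (circuit(theta + shift) - circuit(theta - shift)) / 2

# What is done with the gradient (vanilla gradient descent, Adam, ...) is a purely classical step
theta = theta - 0.1 * grad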

Sorry we couldn’t help with this! One thing I wanted to add if you’re looking at training hybrid models: you should be able to speed up training by enabling tape-mode and using a backprop-enabled device, e.g.,

import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
import tensorflow as tf
import pennylane as qml

X, y = make_moons(n_samples=200, noise=0.1)
y_hot = tf.keras.utils.to_categorical(y, num_classes=2)  # one-hot encoded labels

c = ["#1f77b4" if y_ == 0 else "#ff7f0e" for y_ in y]  # colours for each class
plt.axis("off")
plt.scatter(X[:, 0], X[:, 1], c=c)
plt.show()

# Enable tape mode (PennyLane 0.13) to speed up hybrid training
qml.enable_tape()

n_qubits = 2

# Use the TensorFlow simulator device and set its dtype to single precision (float32)
dev = qml.device("default.qubit.tf", wires=n_qubits)
dev.C_DTYPE = tf.complex64
dev.R_DTYPE = tf.float32

# Differentiate the circuit by backpropagation through the TF simulator
@qml.qnode(dev, diff_method="backprop", interface="tf")
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 6
weight_shapes = {"weights": (n_layers, n_qubits)}

# Wrap the QNode as a Keras layer so it can sit inside a Sequential model
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)

clayer_1 = tf.keras.layers.Dense(2)
clayer_2 = tf.keras.layers.Dense(2, activation="softmax")
model = tf.keras.models.Sequential([clayer_1, qlayer, clayer_2])

opt = tf.keras.optimizers.SGD(learning_rate=0.2)
model.compile(opt, loss="mae", metrics=["accuracy"])

# Keep Keras in float32 (its default) so it matches the device dtype set above
tf.keras.backend.set_floatx("float32")

# Cast the data to float32 as well
X = X.astype("float32")
y_hot = y_hot.astype("float32")
fitting = model.fit(X, y_hot, epochs=6, batch_size=5, validation_split=0.25)

This should work with the latest release of PL (0.13.0).

Thanks @Tom_Bromley, tape mode does run very fast, and that is using float64 instead of the float32 I was using. When I compared them on Gitpod after setting both methods to float64, the old way took 1:29 min and tape mode took 0:59 min. So that is a real speed increase. Well done.

It seems that one of the issues with training Keras layers on a quantum computer is that they are not yet powerful enough to work with multiple float32 numbers. (Not my statement; this was a reply from TensorFlow Quantum to a comment I made here.)

One thing about working with Machine Learning on micro-controllers is that we are always making the data smaller by quantizing float32 to int8 for storage and then converting the numbers back to float32 during model execution. We lose some accuracy but gain a working model.
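
Roughly, the int8 round trip looks like this (a hand-rolled sketch, not the actual TensorFlow Lite converter):

import numpy as np

weights = np.random.randn(4).astype(np.float32)

# Symmetric quantization: map float32 values onto int8 with a single scale factor
scale = np.max(np.abs(weights)) / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)  # this is what gets stored

# At model execution time the numbers are converted back to float32, losing a little accuracy
restored = q.astype(np.float32) * scale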

Is this conversion to a smaller number (int8 or even int4) something that would help optimize a Keras layer? Reminder: I am not interested in a speed boost or a more accurate model; I am just interested in a proof of concept that a quantum computer could be used to optimize a Keras layer so that the Keras model can then be used on multiple types of machines.

@josh

Thanks @jerteach, that’s a good point regarding float64 and float32. I had another look at the code and it is possible to have everything in float32; I have updated the code above accordingly. We essentially need to change the dtype of the device (maybe we could look at making this more user-accessible in the future!).

I also had a quick look at the link you shared to TFQ, I believe there they are referring to the idea of training with a quantum computer and the resulting difficulty in encoding numbers of different precision. In the above, we are still within the hybrid setting with training performed classically.