Hey @jerteach!
This is an interesting idea to think about. Unfortunately, PennyLane doesn’t currently offer this capability. At present, one can create hybrid quantum-classical models, but training of both the quantum and classical elements is performed classically. For example, a QNode can be trained by obtaining its gradient using the parameter-shift rule, where the quantum circuit is evaluated at forward- and backward-shifted parameter values. How the gradient is then handled (e.g., gradient descent, Adam, etc.) is still a classical task. Note that there are some “quantum-aware” optimizers like the QNGOptimizer, which access additional information about the quantum circuit, but right now we don’t have a quantum optimizer designed to train a classical model.
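For concreteness, here is a minimal sketch of that division of labour (the one-parameter circuit and stepsize are just illustrative): the parameter-shift rule extracts the gradient from two quantum circuit evaluations, and the update step itself is ordinary classical gradient descent.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

# Illustrative one-parameter circuit
@qml.qnode(dev, diff_method="parameter-shift")
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

theta = np.array(0.4, requires_grad=True)

# Parameter-shift rule: d<Z>/dtheta = [circuit(theta + pi/2) - circuit(theta - pi/2)] / 2,
# i.e., the gradient comes from two evaluations of the quantum circuit
print(qml.grad(circuit)(theta))

# The update step is purely classical; a quantum-aware alternative here
# would be qml.QNGOptimizer
opt = qml.GradientDescentOptimizer(stepsize=0.1)
theta = opt.step(circuit, theta)
```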
Sorry we couldn’t help with this! One thing I wanted to add if you’re looking at training hybrid models: you should be able to speed up training by enabling tape-mode and using a backprop-enabled device, e.g.,
```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
import tensorflow as tf
import pennylane as qml

# Generate and visualize a toy binary classification dataset
X, y = make_moons(n_samples=200, noise=0.1)
y_hot = tf.keras.utils.to_categorical(y, num_classes=2)  # one-hot encoded labels

c = ["#1f77b4" if y_ == 0 else "#ff7f0e" for y_ in y]  # colours for each class
plt.axis("off")
plt.scatter(X[:, 0], X[:, 1], c=c)
plt.show()

qml.enable_tape()  # tape mode is opt-in in v0.13

n_qubits = 2
dev = qml.device("default.qubit.tf", wires=n_qubits)

# Single precision speeds up simulation
dev.C_DTYPE = tf.complex64
dev.R_DTYPE = tf.float32

@qml.qnode(dev, diff_method="backprop", interface="tf")
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 6
weight_shapes = {"weights": (n_layers, n_qubits)}

# Convert the QNode to a Keras layer and sandwich it between classical layers
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)
clayer_1 = tf.keras.layers.Dense(2)
clayer_2 = tf.keras.layers.Dense(2, activation="softmax")
model = tf.keras.models.Sequential([clayer_1, qlayer, clayer_2])

opt = tf.keras.optimizers.SGD(learning_rate=0.2)
model.compile(opt, loss="mae", metrics=["accuracy"])

# Keep Keras and the data at single precision, matching the device dtypes
tf.keras.backend.set_floatx("float32")
X = X.astype("float32")
y_hot = y_hot.astype("float32")

fitting = model.fit(X, y_hot, epochs=6, batch_size=5, validation_split=0.25)
```
This should work with the latest release of PennyLane (v0.13.0).
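If you want to double-check which version you have installed, `qml.about()` prints it along with your installed plugins and devices:

```python
import pennylane as qml

qml.about()  # prints the PennyLane version, installed plugins, and devices
```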