If I use the pennylane.qnn.KerasLayer class and train it with one of TensorFlow's built-in loss functions, like this:
import pennylane as qml
import tensorflow as tf

wires = 2
n_quantum_layers = 1
dev = qml.device("strawberryfields.fock", wires=wires, cutoff_dim=15)

@qml.qnode(dev)
def layer(inputs, w0, w1, w2, w3, w4, w5, w6, w7, w8, w9, w10):
    qml.templates.DisplacementEmbedding(inputs, wires=range(wires))
    qml.templates.CVNeuralNetLayers(w0, w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, wires=range(wires))
    return [qml.expval(qml.X(wires=i)) for i in range(wires)]

# random initial weights, used only to read off the shape of each parameter tensor
weights = qml.init.cvqnn_layers_all(n_quantum_layers, wires, seed=None)
weight_shapes = {"w{}".format(i): w.shape for i, w in enumerate(weights)}

n_actions = env.action_space.n        # env is the Gym environment from my RL setup
input_dim = env.observation_space.n

qlayer = qml.qnn.KerasLayer(layer, weight_shapes, output_dim=wires)
clayer_in = tf.keras.layers.Dense(wires, input_dim=input_dim)
clayer_out = tf.keras.layers.Dense(n_actions, activation="linear")
model = tf.keras.models.Sequential([clayer_in, qlayer, clayer_out])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
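I then train the model the usual Keras way. For illustration, here is a minimal sketch with hypothetical stand-ins X and y for the states and targets from my setup (assuming input_dim = 9 and n_actions = 4, as in the figure below):

# hypothetical batch of 9-dim inputs and 4-dim targets
X = tf.random.normal((32, 9))
y = tf.random.normal((32, 4))
model.fit(X, y, epochs=5, batch_size=8)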
How are the quantum weights actually trained by a purely classical tool like this MSE loss on classical inputs and outputs? Is there more to the training of these parameters, e.g. some additional embedding?
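My current understanding is that KerasLayer registers w0 through w10 as ordinary tf.Variables, so one step of model.fit would amount to something like the following (a minimal sketch of what I assume happens per batch; x_batch and y_batch are hypothetical):

with tf.GradientTape() as tape:
    predictions = model(x_batch)  # forward pass: Dense -> QNode -> Dense
    loss = tf.reduce_mean(tf.keras.losses.mse(y_batch, predictions))
# tape.gradient differentiates through the QNode as well, so grads also
# holds gradients with respect to the quantum weights w0..w10
grads = tape.gradient(loss, model.trainable_variables)
model.optimizer.apply_gradients(zip(grads, model.trainable_variables))

Is that picture right, or am I missing a step?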
For fun, here is a visual presentation of the entire network (the input is a 9-dimensional vector and the output is a 4-dimensional vector):