Quantum Neural Networks / Quanvolutional NN

Hey @Muhammad_Kashif,

In the tutorials, what are n_layers (only used while defining weight_shapes) and weight_shapes? In the first tutorial weight_shapes has two parameters, and in the second one it has three. Does it depend on the number of qubits used? If yes, and we want, say, more than 10 qubits, what would weight_shapes be then?

The weight_shapes parameter lets qml.qnn.KerasLayer know the shapes of the trainable parameters in the QNode. It should be a dictionary that maps each argument name to a shape, for example:

@qml.qnode(dev)
def qnode(inputs, w1, w2, w3):
    ...
    qml.RX(w1, wires=0)
    qml.Rot(*w2, wires=1)
    qml.templates.StronglyEntanglingLayers(w3, wires=range(2))
    ...

In this case, we should have weight_shapes = {"w1": 1, "w2": 3, "w3": (n_layers, 2, 3)}. It is easy to see this for w1 and w2, since w1 feeds into the single-parameter RX gate and w2 feeds into the three-parameter Rot gate. The shape of w3 is a bit more complicated because it feeds into StronglyEntanglingLayers. Each layer of StronglyEntanglingLayers applies a Rot gate to every qubit and then an entangling block, so we must specify both the number of layers and the number of wires. Since Rot has three parameters, the overall shape is (n_layers, n_wires, 3).
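To answer the second part: yes, the shape depends on the number of qubits, but the pattern scales to any qubit count. Here is a minimal sketch for 12 qubits (the embedding and the layer count are illustrative choices, not from the tutorials):

import pennylane as qml

n_qubits = 12  # more than 10 qubits works the same way
n_layers = 6

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

# StronglyEntanglingLayers always expects (n_layers, n_wires, 3)
weight_shapes = {"weights": (n_layers, n_qubits, 3)}

Newer PennyLane versions can also compute the shape for you with qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits).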

Would the different data embedding techniques, like amplitude embedding and angle embedding among others, affect the underlying model's performance? And also, what could be the potential effect of StronglyEntanglingLayers or BasicEntanglerLayers on model performance?

Definitely! There is a lot of room to play about with different embeddings and layers - check out the literature above to get more of an understanding.
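To make this concrete, here is a minimal sketch contrasting two embeddings; the pairing of embeddings with entanglers below is an illustrative choice, not a recommendation. AngleEmbedding encodes one feature per qubit, while AmplitudeEmbedding packs 2**n features into the amplitudes of n qubits and requires a normalized input vector.

import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

# AngleEmbedding: n features for n qubits, one rotation angle each
@qml.qnode(dev)
def angle_qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

# AmplitudeEmbedding: 2**n features for n qubits; normalize=True
# rescales the feature vector to unit norm, as state preparation requires
@qml.qnode(dev)
def amplitude_qnode(inputs, weights):
    qml.templates.AmplitudeEmbedding(inputs, wires=range(n_qubits), normalize=True)
    qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

# the two entanglers also expect different weight shapes:
# StronglyEntanglingLayers -> (n_layers, n_qubits, 3)
# BasicEntanglerLayers     -> (n_layers, n_qubits)

Swapping these QNodes in and out of a KerasLayer (with the matching weight_shapes) is a quick way to benchmark the effect of each choice on your model.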

When printing the model summary in Keras (for a model with quantum layers), the quantum layer shows zero trainable parameters, along with an "unused" notification. Why is that so? Below is the model code and the corresponding screenshot.

🤔 That's odd. I just tried the code below and the summary printed OK:

import pennylane as qml
import tensorflow as tf

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

# QNode: angle-embed the inputs, apply trainable entangling layers,
# and return one Pauli-Z expectation value per qubit
@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 6
weight_shapes = {"weights": (n_layers, n_qubits)}  # BasicEntanglerLayers expects (n_layers, n_wires)

# classical Dense layers sandwiching two independent quantum layers
clayer_1 = tf.keras.layers.Dense(4)
qlayer_1 = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)
qlayer_2 = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)
clayer_2 = tf.keras.layers.Dense(2, activation="softmax")

# construct the model: split the classical features into two halves,
# route each half through its own quantum layer, then recombine
inputs = tf.keras.Input(shape=(2,))
x = clayer_1(inputs)
x_1, x_2 = tf.split(x, 2, axis=1)
x_1 = qlayer_1(x_1)
x_2 = qlayer_2(x_2)
x = tf.concat([x_1, x_2], axis=1)
outputs = clayer_2(x)

model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.predict(tf.ones((5, 2)))  # test forward pass on a batch of dummy data

with the result:

>>> model.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 2)]          0                                            
__________________________________________________________________________________________________
dense (Dense)                   (None, 4)            12          input_1[0][0]                    
__________________________________________________________________________________________________
tf.split (TFOpLambda)           [(None, 2), (None, 2 0           dense[0][0]                      
__________________________________________________________________________________________________
keras_layer_1 (KerasLayer)      (None, 2)            12          tf.split[0][0]                   
__________________________________________________________________________________________________
keras_layer_2 (KerasLayer)      (None, 2)            12          tf.split[0][1]                   
__________________________________________________________________________________________________
tf.concat (TFOpLambda)          (None, 4)            0           keras_layer_1[0][0]              
                                                                 keras_layer_2[0][0]              
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 2)            10          tf.concat[0][0]                  
==================================================================================================
Total params: 46
Trainable params: 46
Non-trainable params: 0
__________________________________________________________________________________________________

(note the code is adapted from this tutorial).
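As a further check (my suggestion, not something from the thread above), you can inspect the trainable variables directly with standard Keras attributes:

print(qlayer_1.trainable_weights)    # expect one weight tensor of shape (6, 2)
print(len(model.trainable_weights))  # number of trainable variables in the whole model

If the quantum layer's list comes back empty, its weights were never registered, which would explain a zero parameter count in the summary.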