0 parameters are used in the quantum circuit

Dear PennyLane team, I built a hybrid classical-quantum network using qml.qnn.KerasLayer, but the summary shows that the parameters in the quantum circuit are not used. Here is my code:

import pennylane as qml

# n_qubits, n_layers, S (a list of (control, target) wire pairs), and
# n_RXCRZRX are defined earlier in my script.
dev = qml.device("default.qubit", wires=n_qubits)
# dev = qml.device("lightning.qubit", wires=n_qubits)
# dev = qml.device("lightning.gpu", wires=n_qubits)

@qml.qnode(dev)
# @qml.qnode(dev, diff_method='backprop', interface="tensorflow")
def circuit(inputs, weights):
    # Put every qubit into superposition
    for h in range(n_qubits):
        qml.Hadamard(wires=h)

    for n_layer in range(n_layers):
        # Encoder: embed the classical inputs via controlled rotations
        for k0 in S:
            qml.CRY(phi=inputs[k0[0] + 0 * n_qubits], wires=(k0[0], k0[1]))
        for k1 in S:
            qml.CRY(phi=inputs[k1[0] + 1 * n_qubits], wires=(k1[0], k1[1]))

        # Ansatz: trainable single-qubit rotations
        for n1 in range(n_qubits):
            qml.RX(weights[n_layer, 0, n1], wires=n1)

        for n2 in range(n_qubits):
            qml.RX(weights[n_layer, 5, n2], wires=n2)

    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

weight_shapes = {"weights": (n_layers, n_RXCRZRX, n_qubits)}

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, Dropout

input = tf.keras.layers.Input(shape=(24,))
x1 = Dense(200, activation='relu', kernel_initializer='glorot_uniform', name='dense_1')(input)
x2 = Dropout(0.2, name='drouput_1')(x1)
x3 = Dense(100, activation='relu', kernel_initializer='glorot_uniform', name='dense_2')(x2)
x4 = Dropout(0.2, name='drouput_2')(x3)
x5 = Dense(24, activation="relu", kernel_initializer='glorot_uniform', name='dense_3')(x4)
x6 = qml.qnn.KerasLayer(circuit, weight_shapes, output_dim=2, name="QNN_1")(x5)
output = Dense(2, activation="softmax", name='Softmax')(x6)

model = keras.Model(inputs=input, outputs=output)

and here is the summary:

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 24)]              0         
_________________________________________________________________
dense_1 (Dense)              (None, 200)               5000      
_________________________________________________________________
drouput_1 (Dropout)          (None, 200)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 100)               20100     
_________________________________________________________________
drouput_2 (Dropout)          (None, 100)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 24)                2424      
_________________________________________________________________
QNN_1 (KerasLayer)           (None, 2)                 0 (unused)
_________________________________________________________________
Softmax (Dense)              (None, 2)                 6         
=================================================================
Total params: 27,530
Trainable params: 27,530
Non-trainable params: 0

The summary shows that the parameters were not used.
Then I found an answer to this question: by using model.add(), the parameters in the quantum circuit can be used, like this:

input   = tf.keras.layers.Input(shape=(2,256))
Dense_1 = Dense(256, activation='relu', kernel_initializer='glorot_uniform', name='dense_1')
Drop_1 = Dropout(0.5, name='drouput_1')
Dense_2 = Dense(300, activation='relu', kernel_initializer='glorot_uniform', name='dense_2')
Drop_2 = Dropout(0.5, name='drouput_2')
Dense_3 = Dense(200, activation="relu", kernel_initializer='glorot_uniform', name='dense_3')
Drop_3 = Dropout(0.5, name='drouput_3')
Dense_4 = Dense(150, activation="relu", kernel_initializer='glorot_uniform', name='dense_4')
Flat_1 = Flatten()
Dense_5 = Dense(24, activation="relu", kernel_initializer='glorot_uniform', name='dense_5')
QNN_1   = qml.qnn.KerasLayer(circuit, weight_shapes, output_dim=2, name="QNN_1")
output  = Dense(2, activation="softmax", name='Softmax')

model = tf.keras.models.Sequential()
model.add(input)
model.add(Dense_1)
model.add(Drop_1)
model.add(Dense_2)
model.add(Drop_2)
model.add(Dense_3)
model.add(Drop_3)
model.add(Dense_4)
model.add(Flat_1)
model.add(Dense_5)
model.add(QNN_1)
model.add(output)

But I don’t want to change the way I build the network, because model.add() would be very inconvenient for my follow-up tasks. Is there any way to solve the problem without using model.add()? :smiley:

Finally, here is the output of qml.about():
Platform info: Windows-10-10.0.22621-SP0
Python version: 3.9.0
Numpy version: 1.20.0
Scipy version: 1.10.1
Installed devices:

  • default.gaussian (PennyLane-0.28.0)
  • default.mixed (PennyLane-0.28.0)
  • default.qubit (PennyLane-0.28.0)
  • default.qubit.autograd (PennyLane-0.28.0)
  • default.qubit.jax (PennyLane-0.28.0)
  • default.qubit.tf (PennyLane-0.28.0)
  • default.qubit.torch (PennyLane-0.28.0)
  • default.qutrit (PennyLane-0.28.0)
  • null.qubit (PennyLane-0.28.0)
  • lightning.qubit (PennyLane-Lightning-0.30.0)

Hey @AHHil,

When you use Sequential, you can also pass it a list:

model = tf.keras.models.Sequential([layer1, layer2, layer3])
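For example, with the layer objects defined in your snippet above (a sketch reusing those names):

model = tf.keras.models.Sequential([
    input, Dense_1, Drop_1, Dense_2, Drop_2, Dense_3,
    Drop_3, Dense_4, Flat_1, Dense_5, QNN_1, output,
])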

See the tf.keras.Sequential documentation for more details.

Let me know if that helps!


I’m sorry that I didn’t explain my question clearly. I want to keep

model = keras.Model(inputs=input, outputs=output)

because I need the output of each layer. If I use

model = tf.keras.models.Sequential()

then I can’t get the output of each layer. I need to process these outputs, such as x1, x2, and x3, in my follow-up tasks (see the sketch below).
Is there any way to realize this with Sequential()?
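For context, this is roughly what I do with the intermediate tensors (a sketch; feature_model and some_batch are hypothetical names):

# With the functional API, x1, x2, x3 are tensors I already hold
# references to, so I can expose them as extra model outputs.
feature_model = keras.Model(inputs=input, outputs=[x1, x2, x3])
features = feature_model(some_batch)  # some_batch: a batch of shape (N, 24)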

I have solved the problem. Following https://discuss.pennylane.ai/t/quantum-neural-networks-quanvolutional-nn/1111/7 and https://discuss.pennylane.ai/t/transfer-learning-with-pretrained-keras-model/1462/10, I added this code

model(X_train[:2])

after

model = keras.Model(inputs=input, outputs=output)

Although I don’t know the reason, it works. :smiley:


Awesome! Glad you were able to figure it out :slight_smile:
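For what it’s worth, the likely explanation is that Keras creates a layer’s weights lazily, when the layer is built, and for a custom layer like KerasLayer this may only happen once real data flows through the model. Calling the model on a small batch forces every layer to build, so the quantum weights get registered and counted. A minimal sketch of the same idea with dummy data, assuming the 24-feature model above:

import numpy as np

# Run a tiny dummy batch through the model to force all layers,
# including the KerasLayer, to build their weights.
dummy = np.random.random((2, 24)).astype("float32")
model(dummy)
model.summary()  # QNN_1 should now report a nonzero parameter count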