Hybrid model: mismatch between output of Qnode and input of next classical layer

Hello

I have a Hybrid (classical - quantum) model for classification:

1st layer is a classical layer with 6 neurons
2nd layer is a standard quantum node with 6 qubits
3rd (decision) layer is a 2-neuron classical layer with a sigmoid activation

The Quantum node is the same as in the tutorial “https://pennylane.ai/qml/demos/tutorial_qnn_module_tf.html” but with 6 qubits.

It works fine, but I would have expected it not to. Instead, I would have thought it necessary to insert a 6-neuron classical layer before the final decision layer.

My question is: should the output dimensions of the QNode match the dimensions of the next classical layer? And if there is a mismatch, how does PennyLane deal with it? Thank you very much in advance!

Hi @NikSchet, thanks for the question :slight_smile:

In general, you’d likely want the output dimensions of the QNode to match the dimensions of the next classical layer. So if you had a final layer with 2 neurons, you would want to pass a two-dimensional tensor to that layer. You can create such a two-dimensional tensor from a QNode by having it return two measurements (e.g., expectation values of different observables). Note that having 6 qubits does not mean the output of your QNode must be 6-dimensional.
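For example, here is a minimal sketch of a QNode that uses 6 qubits but returns only a 2-dimensional output (the device and weight shape here are assumptions for illustration):

import pennylane as qml

n_qubits = 6
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="tf")
def two_output_qnode(inputs, weights):
    # Encode the 6 classical inputs, one feature per qubit
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    # weights is assumed to have shape (n_layers, n_qubits, 3)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # Measure only two of the six wires, so the output is 2-dimensional
    return [qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))]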

If you are able to provide a (condensed) version of your code, in particular showing the layer structure, it would potentially help us give a more specific answer :slight_smile:

OK, let me try to describe the architecture.

I use this QNode:

@qml.qnode(dev, interface="tf", diff_method="backprop")
def qnode(inputs, weights):
    for i in range(blocks):
        # Re-embed the inputs, then apply one block of strongly entangling layers
        qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
        qml.templates.StronglyEntanglingLayers(weights[i], wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]
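For completeness, the supporting definitions look roughly like this (the block and layer counts are example values, and weight_shapes follows the (n_layers, n_qubits, 3) convention of StronglyEntanglingLayers):

import pennylane as qml
import tensorflow as tf

n_qubits = 6
blocks = 2    # number of embedding + entangling repetitions (example value)
n_layers = 1  # entangling layers per block (example value)
dev = qml.device("default.qubit", wires=n_qubits)

# One (n_layers, n_qubits, 3) weight tensor per block
weight_shapes = {"weights": (blocks, n_layers, n_qubits, 3)}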

The layers are:

clayer1 = tf.keras.layers.Dense(16, activation="relu")
clayer2 = tf.keras.layers.Dense(6, activation="relu")
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=6)
clayerD = tf.keras.layers.Dense(1, activation="sigmoid")

And the model is defined as:

model = tf.keras.models.Sequential([clayer1, clayer2, qlayer, clayerD])

So the dimensions are (16 neurons, 6 neurons, 6 qubits, 1 neuron).

In this case, can you explain how the output of the qlayer (6 qubits) is fed into the decision layer (1 neuron)?

So your suggestion would be to use:

model = tf.keras.models.Sequential([clayer1, clayer2, qlayer, clayer2, clayerD])

Thank you for the fast reply!

Hi @NikSchet,

Thanks for providing a snippet of your code. From this, I can see that you have properly specified a 6-dimensional output_dim to your qml.qnn.KerasLayer, which matches the output delivered by your QNode.

As for any “magic” on how things get connected together, this is actually all happening in Keras. From what I understand by looking at their docs, you only need to provide an output shape, and the input shape is inferred automatically. So in this case, when you do tf.keras.layers.Dense(1, activation="sigmoid"), you are saying that you want a dense layer with a 1D output.

The necessary input shape is put together under the hood by Keras by looking at the layer that precedes clayerD in the Sequential list (in your case, a layer with output dimension 6). So Keras infers that your final dense layer’s weight matrix will have shape (6, 1).
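You can see this inference in a minimal pure-Keras sketch (the layer sizes are hypothetical, chosen to match your model):

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(6, activation="relu"),    # stands in for the 6-output qlayer
    tf.keras.layers.Dense(1, activation="sigmoid"), # clayerD: only the output dim is specified
])
model.build(input_shape=(None, 6))
# Keras has inferred the final layer's weight matrix from its predecessor
print(model.layers[-1].kernel.shape)  # (6, 1)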

model=tf.keras.models.Sequential([clayer1,clayer2,qlayer,clayer2,clayerD])

Nope, I didn’t really suggest that. What you were already doing seems to be correct, even if you didn’t quite know the details of what happens under the hood :slight_smile:


OK, noted! Thank you very much for your time! Much appreciated :slight_smile: