I have a Hybrid (classical - quantum) model for classification:
- 1st layer: a classical layer with 6 neurons
- 2nd layer: a standard quantum node with 6 qubits
- 3rd (decision) layer: a 2-neuron classical layer with a sigmoid activation
It works fine, but I would expect it not to. Instead, it would make more sense to me to insert a classical layer of 6 neurons before the final decision layer.
My question is: should the output dimensions of the QNode match the dimensions of the next classical layer? And if there is a mismatch, how does PennyLane deal with it? Thank you very much in advance!
In general, you’d likely want the output dimension of the QNode to match the input dimension of the next classical layer. So if you had a final layer with 2 neurons, you would want to pass a two-dimensional tensor to that layer. You can create such a two-dimensional tensor from a QNode by having it return two measurements (e.g., expectation values of different observables). Note that having 6 qubits does not mean your QNode’s output needs to be 6-dimensional.
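For instance, here is a minimal sketch of a 6-qubit QNode with a two-dimensional output (the device and the choice of observables are just illustrative assumptions):

import pennylane as qml

n_qubits = 6
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def two_dim_qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # Measure only the first two wires: a 2-dimensional output from 6 qubits
    return [qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))]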
If you are able to provide a (condensed) version of your code, in particular showing the layer structure, it would help us give a more specific answer.
import pennylane as qml

n_qubits = 6  # as described above
blocks = 2    # number of embedding/entangling blocks (value assumed)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="tf", diff_method="backprop")
def qnode(inputs, weights):
    for i in range(blocks):
        qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
        qml.templates.StronglyEntanglingLayers(weights[i], wires=range(n_qubits))  # trainable entangling block
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]  # one expectation per qubit: 6-dim output
Thanks for providing a snippet of your code. From this, I can see that you have properly specified a 6-dimensional output_dim to your qml.qnn.KerasLayer, which matches the output delivered by your QNode.
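For reference, a condensed sketch of how that wiring might look (the entangling-layer count and the hidden-layer activation are assumptions, since the full model was not posted):

import tensorflow as tf
import pennylane as qml

n_layers = 1  # assumed entangling layers per block
# weights[i] must have the shape StronglyEntanglingLayers expects per block
weight_shapes = {"weights": (blocks, n_layers, n_qubits, 3)}
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)

clayer1 = tf.keras.layers.Dense(n_qubits, activation="relu")
clayerD = tf.keras.layers.Dense(1, activation="sigmoid")
model = tf.keras.models.Sequential([clayer1, qlayer, clayerD])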
As for any “magic” on how things get connected together, this is actually all happening in Keras. From what I understand by looking at their docs, you only need to provide an output shape, and the input shape is inferred automatically. So in this case, when you do tf.keras.layers.Dense(1, activation="sigmoid"), you are saying that you want a dense layer with a 1D output.
The necessary input shape is put together under the hood by Keras by looking at the layer that precedes clayerD in the list passed to Sequential (in your case, a layer with output dimension 6). So Keras infers that your final dense layer will have a weight matrix of shape (6, 1).
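You can see this inference in plain Keras, independent of PennyLane: a Dense layer’s weights are only created on the first call, once the incoming shape is known.

import tensorflow as tf

dense = tf.keras.layers.Dense(1, activation="sigmoid")
x = tf.zeros((4, 6))       # a batch of 4 samples with 6 features each
y = dense(x)               # first call builds the layer's weights
print(dense.kernel.shape)  # (6, 1): input dim 6 inferred, output dim 1
print(y.shape)             # (4, 1)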
Nope, I didn’t really suggest that. What you were already doing seems to be correct, even if you didn’t quite know the details of what happens under the hood.