Hybrid Quantum-Classical Network

Hello all, I am using this code with very nice results:

    import pennylane as qml
    import tensorflow as tf

    nqubits = 2
    n_layers = 3     # example value: number of entangling layers
    output_size = 1  # example value: dimension of the final output
    device = qml.device('default.qubit', wires=nqubits)

    # Define QNode
    @qml.qnode(device)
    def qnode(inputs, weights):
        qml.templates.AngleEmbedding(inputs, wires=range(nqubits))
        qml.templates.StronglyEntanglingLayers(weights, wires=range(nqubits))
        return [qml.expval(qml.PauliZ(i)) for i in range(nqubits)]

    # Define the shapes of the trainable QNode weights
    weight_shapes = {"weights": (n_layers, nqubits, 3)}

    # Wrap the QNode as a Keras layer and append a classical layer
    qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=nqubits)
    clayer = tf.keras.layers.Dense(output_size)
    model = tf.keras.Sequential([qlayer, clayer])

    opt = tf.keras.optimizers.SGD(learning_rate=0.05)
    model.compile(opt, loss='binary_crossentropy',
                  metrics=['accuracy', 'mse', 'mae', 'binary_accuracy'])

I am wondering if I can just alter the classical layer so I have more options. So, can I use something like this for clayer? (Note the dropout.)

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Dropout

    d = nqubits         # input dimension; matches the qlayer output
    dropout_rate = 0.2  # example value

    NN = Sequential()
    NN.add(Dense(5, input_dim=d, kernel_initializer='uniform', activation='relu'))
    NN.add(Dropout(rate=dropout_rate))
    NN.add(Dense(2, kernel_initializer='uniform', activation='relu'))
    NN.add(Dropout(rate=dropout_rate))
    NN.add(Dense(2, activation='sigmoid'))
    clayer = NN

    model = tf.keras.Sequential([qlayer, clayer])

When I implement the code above I get some results, but the model has difficulties converging. Maybe it is not running as it was supposed to?

If it is not possible, perhaps due to backpropagation issues, how could I implement such a classical layer?

Thank you very much for the support.

Hi @NikSchet,

Thanks so much for your question! :slightly_smiling_face:

That does sound odd indeed, and it might need further investigation to uncover exactly what is happening here. The new classical layer would not be expected to cause a difference when using backpropagation.
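If you want to double-check that backpropagation is actually being used, one option (assuming you are running on default.qubit, which supports it) is to request it explicitly when creating the QNode:

    # Sketch: explicitly request backpropagation through the simulator
    @qml.qnode(device, interface='tf', diff_method='backprop')
    def qnode(inputs, weights):
        qml.templates.AngleEmbedding(inputs, wires=range(nqubits))
        qml.templates.StronglyEntanglingLayers(weights, wires=range(nqubits))
        return [qml.expval(qml.PauliZ(i)) for i in range(nqubits)]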

One approach could be to interpolate between the clayer defined using NN (including dropout) and the single Dense layer. E.g., try two simple layers and see how the model trains; that way you can at least uncover what is causing the difficulty (maybe the dropout?).
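For example, something like the following (just a sketch, reusing your qlayer and with illustrative layer sizes) sits between the two:

    # Two plain Dense layers: the NN architecture minus the Dropout
    clayer = tf.keras.Sequential([
        tf.keras.layers.Dense(5, activation='relu'),
        tf.keras.layers.Dense(2, activation='sigmoid'),
    ])
    model = tf.keras.Sequential([qlayer, clayer])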

Hope this helps with a bit of direction!

Thank you very much for your answer. The problem is that the classical layer needs too many epochs to converge (1000 epochs); as a result, it makes no sense to try this specific approach, because on a quantum processing unit that would require too much processing time. I guess this is the reason you have the transfer learning demo, in which you pre-train the classical model and then apply a quantum node. (Any ideas what transformations I should make to that demo to apply my classical neural network?)

So what I am trying to do is similar to the transfer learning method:

  1. I pre-train only the classical neural network and save it.
  2. I use the hybrid code with the saved neural network.

This is like a jump-start so the code converges faster.
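In code, this would look roughly like the following (a sketch assuming the NN model from my earlier post, with hypothetical X_train, y_train and file name):

    # 1. Pre-train the classical network on its own and save it
    NN.compile(tf.keras.optimizers.SGD(learning_rate=0.05), loss='binary_crossentropy')
    NN.fit(X_train, y_train, epochs=1000)
    NN.save('pretrained_classical.h5')

    # 2. Reload it and plug it into the hybrid code
    saved_model = tf.keras.models.load_model('pretrained_classical.h5')
    modelh = tf.keras.Sequential([qlayer, saved_model])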

Hi @NikSchet,

Transfer learning does sound like an exciting approach, and should be promising.

One thing I noticed is that your model is a quantum layer followed by a classical layer. This is slightly different from the transfer learning demo we have, where a classical layer feeds into a quantum one. I don’t think it should be a problem though, although the link to “transfer learning” as a concept may not be as strong.

In terms of getting it to work, it should be a case of:

  • Creating a purely classical model and training it.
  • Creating a second hybrid model composed of the quantum layer and then the second half of the previous model.
  • Ensuring that the parameters of the classical part of the hybrid are initialized to match the classical trained model (you could just reuse the layers).
  • Ensuring that the classical parameters are not trained, which I believe should be a case of setting the trainable property for each layer (see the sketch below).
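A minimal sketch of these steps (assuming the qnode and weight_shapes from earlier in the thread; the layer sizes, and treating only the final Dense layer as the "second half", are illustrative choices):

    # 1. Create and train a purely classical model
    classical = tf.keras.Sequential([
        tf.keras.layers.Dense(5, activation='relu'),
        tf.keras.layers.Dense(2, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid'),  # the "second half"
    ])
    classical.compile(tf.keras.optimizers.SGD(0.05), loss='binary_crossentropy')
    # classical.fit(X_train, y_train, epochs=...)

    # 2./3. Build the hybrid model, reusing the trained final layer so its
    # parameters automatically match the trained classical model
    qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=2)

    # 4. Freeze the classical parameters so only the quantum weights train
    classical.layers[-1].trainable = False
    hybrid = tf.keras.Sequential([qlayer, classical.layers[-1]])
    hybrid.compile(tf.keras.optimizers.SGD(0.05), loss='binary_crossentropy')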

Thank you very much. Actually, you made a very good observation: the order of the layers plays a significant role in my code.

So if I use the classical pre-trained model first and then the quantum node, I get much higher accuracy.

I have one more weird question. For my hybrid network I use

    modelh = tf.keras.Sequential([saved_model, qlayer])

What if I use something like this repeatedly:

    modelh = tf.keras.Sequential([qlayer, qlayer, qlayer, qlayer])

Wouldn’t that be equivalent to the data-reuploading classifier?!?

Hi @NikSchet,

One thing to keep in mind is that both the input and the output of qml.qnn.KerasLayer are classical information (e.g., we encode a vector of data and obtain the expectation value of a Hermitian operator, measurement outcomes, etc.). In contrast, we can consider quantum layers as unitary transformations making up a quantum circuit, where both the input and the output of the layer are quantum states. In PennyLane, such layers are defined inside the QNode.

The data-reuploading classifier could be expressed using a QNode which is then used with a single qml.qnn.KerasLayer. We have a cost function that uses the output of the QNode (the expectation value of a Hermitian operator that corresponds to the fidelity of two states). In particular, note that the layers mentioned for the data-reuploading classifier are unitaries within the same quantum circuit (and QNode).
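To make this concrete, here is a rough sketch of a data-reuploading-style circuit inside a single QNode (my own illustrative construction using AngleEmbedding and StronglyEntanglingLayers, not the exact demo code):

    import pennylane as qml

    nqubits = 2
    n_layers = 3
    dev = qml.device('default.qubit', wires=nqubits)

    @qml.qnode(dev)
    def reuploading_qnode(inputs, weights):
        # The "layers" are unitaries within one circuit: the data is
        # re-encoded (re-uploaded) before every trainable block
        for l in range(n_layers):
            qml.templates.AngleEmbedding(inputs, wires=range(nqubits))
            qml.templates.StronglyEntanglingLayers(weights[l:l + 1],
                                                   wires=range(nqubits))
        return [qml.expval(qml.PauliZ(i)) for i in range(nqubits)]

    # A single KerasLayer wraps the whole reuploading circuit
    weight_shapes = {"weights": (n_layers, nqubits, 3)}
    qlayer = qml.qnn.KerasLayer(reuploading_qnode, weight_shapes, output_dim=nqubits)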
