Hybrid Quantum-Classical network

Hello all, I am using this code with very nice results:

    import pennylane as qml
    import tensorflow as tf

    nqubits = 2
    n_layers = 2      # placeholder: number of entangling layers
    output_size = 2   # placeholder: dimension of the final classical output

    device = qml.device('default.qubit', wires=nqubits)

    # Define QNode
    @qml.qnode(device)
    def qnode(inputs, weights):
        qml.templates.AngleEmbedding(inputs, wires=range(nqubits))
        qml.templates.StronglyEntanglingLayers(weights, wires=range(nqubits))
        return [qml.expval(qml.PauliZ(i)) for i in range(nqubits)]

    # Define weight_shapes
    weight_shapes = {"weights": (n_layers, nqubits, 3)}

    # Define the quantum and classical layers and the hybrid model
    qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=nqubits)
    clayer = tf.keras.layers.Dense(output_size)
    model = tf.keras.Sequential([qlayer, clayer])

    opt = tf.keras.optimizers.SGD(learning_rate=0.05)
    model.compile(opt, loss='binary_crossentropy',
                  metrics=['accuracy', 'mse', 'mae', 'binary_accuracy'])

I am wondering if I can just alter the classical layer so I have more options. So can I use something like this for clayer? (note the dropout)

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Dropout

    d = nqubits          # input dimension; must match the quantum layer's output when used after qlayer
    dropout_rate = 0.2   # placeholder value

    NN = Sequential()
    NN.add(Dense(5, input_dim=d, kernel_initializer='uniform', activation='relu'))
    NN.add(Dropout(rate=dropout_rate))
    NN.add(Dense(2, kernel_initializer='uniform', activation='relu'))
    NN.add(Dropout(rate=dropout_rate))
    NN.add(Dense(2, activation='sigmoid'))
    clayer = NN

    model = tf.keras.Sequential([qlayer, clayer])

When I implement the code above I get some results, but the model has difficulty converging. Maybe it is not running as it was supposed to?

If this is not possible, perhaps because of backpropagation issues, how could I implement such a classical layer?

Thank you very much for the support

Hi @NikSchet,

Thanks so much for your question! :slightly_smiling_face:

That does sound odd and might need further investigation to uncover what exactly is happening. The new classical layer would not be expected to cause a difference when using backpropagation.

One approach could be to interpolate between the clayer defined using NN (including dropout) and the single Dense layer. E.g., try two simple layers and see how that trains; that way you might at least uncover what is causing the difficulty (maybe the dropout?).
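
For instance, a minimal sketch of such an intermediate classical head (the layer sizes are just placeholders, and qlayer is the quantum layer from your first post):

    import tensorflow as tf

    # Intermediate classical head: same Dense structure as NN but without dropout,
    # to test whether the Dropout layers are what slows convergence
    clayer = tf.keras.Sequential([
        tf.keras.layers.Dense(5, activation="relu"),
        tf.keras.layers.Dense(2, activation="sigmoid"),
    ])

    # qlayer is the qml.qnn.KerasLayer defined at the top of the thread
    model = tf.keras.Sequential([qlayer, clayer])
    model.compile(tf.keras.optimizers.SGD(learning_rate=0.05), loss="binary_crossentropy")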

Hope this helps with a bit of direction!

Thank you very much for your answer. The problem is that the classical layer needs too many epochs to converge (around 1000 epochs); as a result, it makes no sense to try this specific approach, because on a quantum processing unit that would require too much processing time. I guess this is the reason why you have the transfer learning demo, in which you pre-train the classical model and then apply a quantum node. (Any ideas what transformations I should make to that demo to apply my classical neural network?)

So what i am trying to do is similar to the transfer learning method.

  1. I pre-train only the classical neural network and save it.
  2. I use the hybrid code with the saved neural network.

This is like a jump-start so the code converges faster.

Hi @NikSchet,

Transfer learning does sound like an exciting approach, and should be promising.

One thing I noticed is that your model is a quantum layer followed by a classical layer. This is slightly different from the transfer learning demo we have, where a classical layer feeds into a quantum one. I don’t think it should be a problem, though, although the link to “transfer learning” as a concept may not be as strong.

In terms of getting it to work, it should be a case of:

  • Creating a purely classical model and training it.
  • Creating a second hybrid model composed of the quantum layer and then the second half of the previous model.
  • Ensuring that the parameters of the classical part of the hybrid are initialized to match the classical trained model (you could just reuse the layers).
  • Ensuring that the classical parameters are not trained, which I believe should be a case of setting the trainable property for each layer.
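
A rough sketch of these steps could look something like the following (the two-qubit QNode, layer sizes, and data are placeholder assumptions rather than code from the demo):

    import pennylane as qml
    import tensorflow as tf

    nqubits = 2
    dev = qml.device("default.qubit", wires=nqubits)

    @qml.qnode(dev)
    def qnode(inputs, weights):
        qml.templates.AngleEmbedding(inputs, wires=range(nqubits))
        qml.templates.StronglyEntanglingLayers(weights, wires=range(nqubits))
        return [qml.expval(qml.PauliZ(i)) for i in range(nqubits)]

    qlayer = qml.qnn.KerasLayer(qnode, {"weights": (2, nqubits, 3)}, output_dim=nqubits)

    # 1. Purely classical model, trained first
    clayer_1 = tf.keras.layers.Dense(nqubits, activation="relu")  # output dim matches the quantum layer
    clayer_2 = tf.keras.layers.Dense(1, activation="sigmoid")
    classical_model = tf.keras.Sequential([clayer_1, clayer_2])
    classical_model.compile("sgd", loss="binary_crossentropy")
    # classical_model.fit(X_train, y_train, epochs=...)

    # 2.-3. Hybrid model: quantum layer followed by the (already trained) second half
    # 4. Freeze the reused classical layer so only the quantum weights are trained
    clayer_2.trainable = False
    hybrid_model = tf.keras.Sequential([qlayer, clayer_2])
    hybrid_model.compile(tf.keras.optimizers.SGD(0.05), loss="binary_crossentropy")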

Thank you very much. You made a very good observation: the order of the layers plays a significant role in my code.

So if I use the classical pre-trained model first and then the quantum node, I get much, much higher accuracy.

I have one more weird question. For my hybrid network I use:

`modelh = tf.keras.Sequential([saved_model, qlayer])`

What if I use something repeated like this:

`modelh = tf.keras.Sequential([qlayer, qlayer, qlayer, qlayer])`

Wouldn’t that be equivalent to the data-reuploading classifier?!?

Hi @NikSchet,

One thing to keep in mind is that both the input and the output of qml.qnn.KerasLayer are classical information (e.g., for a vector of data to be encoded, we obtain the expectation value of a Hermitian operator, measurement outcomes, etc.). In contrast, we can consider quantum layers as unitary transformations making up a quantum circuit, where both the input and the output of the layer are quantum states. In PennyLane such layers are defined inside the QNode.

The data-reuploading classifier could be expressed using a QNode which is then used with a single qml.qnn.KerasLayer. We have a cost function that uses the output of the QNode (the expectation value of a Hermitian operator that corresponds to the fidelity of two states). Note in particular that the layers mentioned for the data-reuploading classifier are unitaries of the same quantum circuit (and QNode).
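
As a rough illustration (not the actual demo code; the number of blocks and the templates used are assumptions), re-uploading the data inside a single QNode wrapped by one KerasLayer could look like:

    import pennylane as qml

    nqubits = 2
    n_blocks = 3  # assumed number of re-uploading blocks
    dev = qml.device("default.qubit", wires=nqubits)

    @qml.qnode(dev)
    def reuploading_qnode(inputs, weights):
        # The data encoding is repeated between trainable unitaries,
        # all inside the *same* quantum circuit
        for b in range(n_blocks):
            qml.templates.AngleEmbedding(inputs, wires=range(nqubits))
            qml.templates.StronglyEntanglingLayers(weights[b], wires=range(nqubits))
        return [qml.expval(qml.PauliZ(i)) for i in range(nqubits)]

    # A single KerasLayer wraps the whole re-uploading circuit
    weight_shapes = {"weights": (n_blocks, 1, nqubits, 3)}
    qlayer = qml.qnn.KerasLayer(reuploading_qnode, weight_shapes, output_dim=nqubits)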


Hello,

  1. Is it possible now to implement quantum specific dropout variants in PennyLane on A) layers or B) qubits of parameterized quantum circuits?

  2. Will there be a new PQC model that includes customizations directly to quantum circuits, analogous to advanced deep learning techniques?

References:
a) Overfitting in quantum machine learning and entangling dropout | SpringerLink
b) [1804.00633] Circuit-centric quantum classifiers

Hi @kevinkawchak, thank you for your question. Let me check with the team and get back to you.

Hey @kevinkawchak !
Regarding the first question: in PennyLane, the first thing you do is define a device with a number of shots. Even if you could randomly choose whether to include certain CNOT gates, all shots would be executed with the same configuration. If you really want to sometimes place them and sometimes not, you would have to run several one-shot circuits and post-process the results. In the short term I would say this is the only way.
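
As a rough sketch of that workaround (the circuit and the drop probability below are purely illustrative assumptions, not a built-in PennyLane feature):

    import numpy as np
    import pennylane as qml

    # One-shot device: each execution uses one randomly chosen gate configuration
    dev = qml.device("default.qubit", wires=2, shots=1)

    @qml.qnode(dev)
    def circuit(x, include_cnot):
        qml.RY(x, wires=0)
        if include_cnot:  # the "dropout" decision, fixed for this execution
            qml.CNOT(wires=[0, 1])
        return qml.sample(qml.PauliZ(1))

    rng = np.random.default_rng(seed=42)
    p_drop, x, n_shots = 0.2, 0.5, 100

    # Run many one-shot circuits, randomly dropping the CNOT, then post-process
    samples = [circuit(x, rng.random() > p_drop) for _ in range(n_shots)]
    print("averaged expectation:", np.mean(samples))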

Could you give more details about 2?

Hello @Guillermo_Alonso,
Such as freezing earlier layers of quantum circuits as training progresses.

In that case, I would take a look at this optimizer.
Instead of freezing the initial layers, you increase the depth of the circuit step by step. I would say the results should be equivalent :smile:


Thank you for the info.

Hello,

In Huynh, L., et al. (2023), “Quantum-Inspired Machine Learning: A Survey”, the authors state:

“In the case of QVAS models, comparable performance is often only observed when classical models are deliberately scaled back in terms of architecture size, the number of parameters and/or number of input features.”

a) Is there an anticipated break-even point for reaching a number of quantum algorithm parameters to outperform classical deep neural networks?
b) Are there current studies where quantum variational algorithms at lower numbers of parameters are the exclusive way to assist deep learning networks for speedups or other improvements? Thank you.

Reference:

Hi @kevinkawchak,

Great questions. At the moment classical machine learning is more powerful than quantum machine learning for almost everything. There are some specific cases where quantum computers are more powerful, but so far these are usually only toy problems.

This blog post by Maria Schuld can give you an excellent perspective on this topic.

At the moment researchers all over the world are looking into understanding quantum machine learning better in order to find potential for advantage. As far as I know there’s no anticipated break-even point. The question itself is not the best one to ask, because quantum algorithms don’t necessarily perform better with more parameters, and there are so many other aspects that change in a quantum algorithm that parameter count is not necessarily a good way to compare the two.

That being said, the demo on quantum advantage in learning from experiments, based on a paper cited at the beginning of the demo, shows one of these specific examples where quantum algorithms can have an advantage over classical ones.


Hello,

Do any of the six NumPy quantum-specific optimizers act in ways that change the circuit over the course of the run?

Reference:
https://docs.pennylane.ai/en/stable/introduction/interfaces.html

Hi @kevinkawchak! The AdaptiveOptimizer modifies the circuit during training.

In the case of this optimizer, if you call step_and_cost, three values are returned instead of the usual two. The first one is the circuit at that particular iteration.
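
For reference, a minimal sketch along the lines of the adaptive circuits chemistry demo (the molecule, geometry, and threshold are just illustrative):

    import pennylane as qml
    from pennylane import numpy as np

    # Build a small molecular Hamiltonian (H2) and an excitation pool
    symbols = ["H", "H"]
    geometry = np.array([[0.0, 0.0, -0.66], [0.0, 0.0, 0.66]], requires_grad=False)
    H, qubits = qml.qchem.molecular_hamiltonian(symbols, geometry)

    electrons = 2
    hf_state = qml.qchem.hf_state(electrons, qubits)
    singles, doubles = qml.qchem.excitations(electrons, qubits)
    operator_pool = [qml.DoubleExcitation(0.0, w) for w in doubles] + \
                    [qml.SingleExcitation(0.0, w) for w in singles]

    dev = qml.device("default.qubit", wires=qubits)

    @qml.qnode(dev)
    def circuit():
        qml.BasisState(hf_state, wires=range(qubits))
        return qml.expval(H)

    opt = qml.AdaptiveOptimizer()
    for _ in range(len(operator_pool)):
        # step_and_cost returns three values; the first is the grown circuit itself
        circuit, energy, gradient = opt.step_and_cost(circuit, operator_pool, drain_pool=True)
        if gradient < 3e-3:
            break
    print("final energy:", energy)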

I hope that helps!


Hello @Guillermo_Alonso, is there an ML demo that AdaptiveOptimizer would work well in?

At the moment we only show this feature in this chemistry demo.
However, you could adjust it to another problem of interest in QML :slight_smile:

Hello Guillermo,
I’m looking to use a specific quantum circuit for one mini-batch, a different quantum circuit for the next mini-batch, and other quantum circuits for the remainder of the ML workflow. The demo and documentation appear to be aimed at finding the best algorithm, but I’m not seeing the tools I would need to use several different circuits systematically throughout a run.