Classical-Quantum Transfer Learning

Hi! How is it beneficial to run on IBM’s real quantum device rather than on PennyLane’s noiseless simulators? Ultimately only the final weights matter, and the PennyLane simulators generate a model with better accuracy, so the generated weights can perform the classification.

Hi @Harshit_Mogalapalli,

That’s an interesting question - I think it goes beyond just transfer learning, so hopefully I’ve interpreted it correctly. It really depends on what you want to achieve, and whether you are looking to solve a problem or do something more exploratory or proof-of-concept on a quantum computer. It’s a matter of perspective, too, so I can give mine here.

While noiseless simulators may give more accurate, even exact, results on the small problems we run now, in the long run it will simply be computationally intractable to run simulations for “life-size” problems (even the best simulators today max out at 50-60-qubit circuits of limited depth, and require enormous supercomputers to do so). If you need to run a quantum computing problem now of a size that is manageable on your computer, then a simulator may be the way to go - but looking to the future, you will need the actual quantum hardware.

The hardware we have today is noisy, and while it won’t yield results as accurate as a regular simulator’s, it is still interesting to run on. We won’t know how to make the hardware better if we don’t use it, and by running on noisy hardware we gain insight not only into how to improve the hardware, but also into how to improve things on the algorithms side (consider all the near-term / NISQ algorithms that have arisen because of the constraints of noisy hardware, as well as benchmarking, error mitigation methods, and modelling of the hardware noise itself). Furthermore, the hope of the field is to eventually reach fully fault-tolerant quantum computing, where everything is error-corrected and we will be able to obtain results just as accurate as a regular simulator’s, but with a quantum computational advantage.

Please feel free to reach out with further questions!

Thanks @glassnotes for the elaborate answer and for sharing your perspective. It’s quite helpful.

I have a follow-up question: for a classification task where accuracy is important, we can make use of classical-quantum transfer learning with PennyLane’s noiseless simulator to generate a good model, right?

Hi @Harshit_Mogalapalli, yes, absolutely! There is a good example of this in one of our demos. You can use that code as a starting point and adapt it to whatever problem you’re working on. (Though note that to get more accurate results you’ll likely have to increase the number of training epochs.)
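For reference, here is a minimal sketch of the hybrid setup that demo uses - not the demo’s exact code; the qubit count, layer sizes, and two-class output below are illustrative assumptions for a small binary image-classification task:

```python
import torch.nn as nn
import pennylane as qml
from torchvision import models

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)  # noiseless simulator

@qml.qnode(dev, interface="torch")
def qnode(inputs, weights):
    # Encode the classical features as single-qubit rotation angles
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # Variational block: layers of rotations followed by entangling CNOTs
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

n_layers = 6
weight_shapes = {"weights": (n_layers, n_qubits)}
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

# Pretrained classical feature extractor with frozen weights
model = models.resnet18(pretrained=True)  # on newer torchvision: weights="DEFAULT"
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier with a "dressed" quantum layer:
# classical pre-processing -> quantum circuit -> classical post-processing
model.fc = nn.Sequential(
    nn.Linear(512, n_qubits),  # compress the 512 ResNet features to one per qubit
    nn.Tanh(),                 # keep the encoding angles bounded
    qlayer,                    # variational circuit run on the noiseless simulator
    nn.Linear(n_qubits, 2),    # two output classes assumed here
)
```

Only the new `model.fc` parameters are trained, while the pretrained ResNet18 features stay frozen - that is what makes it transfer learning.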

Thank you very much @glassnotes .

Hi!

In classical-quantum transfer learning, when this model is trained for a greater number of epochs, the training accuracy is lower than the validation accuracy for every epoch. But shouldn’t the training accuracy generally be greater than the validation accuracy?

Hi @mahesh, thanks for sharing your findings!

Generally, it is not a good sign that training accuracy is less than validation accuracy.

There could be a few reasons this is happening. A common one is that there is more complexity in the training data than in the validation data.

  • It could be an indication that your validation data set is too small.

  • Are you using data augmentation in your setup? If so, you’ll need to make sure both the training and validation data are being augmented.

  • Are you using regularization methods such as L2, L1, or Dropout layers? These introduce noise in the training phase that is not present in the validation phase, so reducing their effect may resolve this - see the sketch after this list.
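To illustrate that last point, here is a minimal PyTorch sketch (the layer sizes are arbitrary assumptions) showing that a Dropout layer perturbs the outputs in training mode but is disabled in evaluation mode, which is the mode used when computing validation accuracy:

```python
import torch
import torch.nn as nn

# Hypothetical classifier head with dropout; the sizes are arbitrary
head = nn.Sequential(
    nn.Linear(512, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # zeroes random activations, but only in training mode
    nn.Linear(128, 2),
)

x = torch.randn(1, 512)

head.train()    # training mode: dropout is active, so outputs are noisy
print(head(x))
print(head(x))  # differs from the previous call because of dropout

head.eval()     # evaluation mode (used for validation): dropout is a no-op
print(head(x))
print(head(x))  # identical to the previous call
```

So validation accuracy is effectively computed on a less noisy model than training accuracy, which can flip the usual ordering of the two.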

Feel free to share some examples of your code and let us know how it goes!

Thanks a lot @Ant_Hayes for your reply.

Hi,
Is there any particular reason why the variational layer is defined as a series of entangling and rotation gates?

Hey @Surya_Kiran_Vamsi and welcome to the forum!

Does your question refer to @mahesh’s model, or to the transfer learning code example? If it does not directly refer to the previous discussion, could I kindly ask you to open a new thread? There you can explain your question with a bit more context, so that other users can understand and find it. Thanks!