I am reading the paper Quantum embeddings for machine learning. I remember that a previous PennyLane demo contained the code from this paper, but when I recently looked for it, it was gone. Has it been officially deleted?
Hey @Cc1! Yep, this demo was deleted.
Thank you very much for your reply. Where else can I find the demo code for quantum metric learning?
The demo isn’t available anywhere to my knowledge.
Let me see if I can figure out why @Cc1!
Thank you very much for your help!
This is entirely my fault. I realised that the demo contained a bug (I accidentally used training data for testing, uuuh!). Once the bug was fixed, the model we showcased, which adds a small quantum feature map to a big classical model, hopelessly overfitted. I simply did not have the time to look into this, and it seemed to be a deeper problem that needs some more research.
Since there are still requests for that demo I will have another look at the problem in the next 1-2 months and let you know in this thread if we can upload it again.
Hope that helps!
Thank you very much for your help. Many of your academic papers have been very inspiring to me.
That’s too kind, thanks
I have some bad (yet interesting) news. I spent quite a bit of time trying to fix the metric learning demo, and, consistent with my previous attempts, I just cannot get around the issue of overfitting without changing the architecture considerably, even after all the problems with the data seem fixed.
I went back to the original code from the paper’s simulations and found that this wasn’t picked up because we had the same bug in the datasets: the test set was accidentally a copy of the training set (uuh, I know!). The observation from the paper is still valid for the training set: the classifier learns to arrange the data on a periodic grid to suit the periodic nature of the embedding into Pauli gates, which is somewhat cool.
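For anyone reusing the data-loading code, a disjoint split would have caught this bug immediately. A minimal numpy sketch (the array names and sizes here are illustrative, not taken from the demo):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Toy dataset standing in for the demo's data (illustrative only).
X = rng.normal(size=(150, 2))
y = rng.integers(0, 2, size=150)

# Shuffle indices once, then carve out disjoint train/test index sets.
idx = rng.permutation(len(X))
n_train = int(0.8 * len(X))
train_idx, test_idx = idx[:n_train], idx[n_train:]

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]

# Sanity check: no example index appears in both sets.
assert set(train_idx).isdisjoint(test_idx)
```

The final assertion is the cheap safeguard: had the test set silently been a copy of the training set, it would fail at load time instead of showing up later as suspiciously good test accuracy.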
In the end I have to conclude that metric learning in this setting just does not work. It is a very different cost objective from the usual l2 loss, and the literature mentions problems with overfitting (e.g. here). This might be even worse in the case of the demo, where we are trying to learn from ~150 examples using well over 1000 parameters.
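To make the "very different cost objective" concrete: as I recall, the paper separates the two classes by pushing apart their embedded state ensembles in Hilbert–Schmidt distance rather than minimizing a pointwise l2 loss. A minimal numpy sketch of such an ensemble-overlap cost (the function names are mine, and this simplifies away the trainable embedding itself):

```python
import numpy as np

def ensemble_density_matrix(states):
    """Average density matrix of a set of pure states (rows are unit vectors)."""
    return np.mean([np.outer(s, s.conj()) for s in states], axis=0)

def hs_separation_cost(states_a, states_b):
    """1 - 0.5 * Tr[(rho_a - rho_b)^2]: small when the two class ensembles
    are far apart in Hilbert-Schmidt distance, large when they overlap."""
    rho_a = ensemble_density_matrix(states_a)
    rho_b = ensemble_density_matrix(states_b)
    diff = rho_a - rho_b
    return 1.0 - 0.5 * np.real(np.trace(diff @ diff))

# Orthogonal single-qubit ensembles are maximally separated: cost = 0.
a = np.array([[1.0, 0.0]])   # |0>
b = np.array([[0.0, 1.0]])   # |1>
print(hs_separation_cost(a, b))   # 0.0
```

Because the cost only constrains ensemble averages rather than individual predictions, a model with far more parameters than examples has plenty of freedom to separate the training ensembles in ways that do not transfer to unseen data, which is one way to read the overfitting observed here.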
In short, I will close the PR and we won’t re-add the demo. (While a negative example of something not working would be great to showcase that QML research is not as easy as people suggest it to be, I think it could confuse early-stage users who try to copy the bad practice.)
If you want to try and play with the code a bit, you can just download or copy the file tutorial_embeddings_metric_learning.py. It is written in rst format, and there are ways to convert it into a Jupyter notebook.
Hope this all helps - it is a nice example of how research does not always give us the results we want
Thank you for the detailed update on your work exploring quantum metric learning. I appreciate you taking the time to thoroughly investigate this method and identify the challenges with overfitting on more complex datasets.
While disappointing that quantum metric learning did not live up to its initial promise on the dataset from the paper, I think this is an important finding. As you mentioned, negative results are just as crucial to document as positive ones on the path to developing reliable quantum machine learning techniques.
It is worth noting that although the quantum metric learning approach faces difficulties with overfitting in the paper’s hybrid quantum-classical embeddings, it has shown promising results on relatively simple datasets in my experiments. In addition, I found that the results improved slightly after using a more complex ResNet to extract features.
Thank you again for your persistence and for sharing these insights. Research can be frustrating when findings contradict expectations, but uncovering these kinds of issues will ultimately strengthen the field. I appreciate you taking the time to investigate this thoroughly and provide clear documentation for future reference.
Pleasure, and I very much agree with you, more honesty on negative results would help research a lot!
I’m currently working on a follow-up and was wondering: did your team ever try multiple classes, and if so, what were your observations?
If I’m understanding your question correctly, there are a lot of studies that demonstrate QML models as multi-class classifiers. Here’s a good example from a demo we have:
Tough to say what the general observations are, but I can say that it’s very possible to train QML models on multi-class data.
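As a toy illustration of one common multi-class readout strategy (this is a plain-numpy statevector simulation of a tiny circuit, not code from any demo): measure the probabilities of the computational basis states of two qubits and treat each of the four basis states as one class score.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def two_qubit_circuit(x, params):
    """Tiny variational circuit: encode a 2-feature input with RY rotations,
    apply trainable RY rotations, and return basis-state probabilities."""
    # Encoding layer: one feature per qubit, acting on |0>.
    state = np.kron(ry(x[0]) @ np.array([1.0, 0.0]),
                    ry(x[1]) @ np.array([1.0, 0.0]))
    # Trainable layer (no entangling gate here; kept deliberately minimal).
    U = np.kron(ry(params[0]), ry(params[1]))
    state = U @ state
    # Probabilities of |00>, |01>, |10>, |11> serve as four class scores.
    return np.abs(state) ** 2

probs = two_qubit_circuit(x=np.array([0.3, 1.2]), params=np.array([0.1, -0.4]))
predicted_class = int(np.argmax(probs))  # index of the most likely basis state
```

With n qubits this readout naturally supports up to 2^n classes; training then amounts to adjusting the circuit parameters so that each class's inputs concentrate probability on its assigned basis state.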