I am implementing the re-uploading classifier and have a few questions:
I see in the original paper that “the complexity of the circuit increases linearly with the size of input space”. Does that mean that if I have thousands of (classical) input features, there will be thousands of gates? Is this usable on real quantum hardware?
My experiments indicate that the re-uploading classifier might work better than variational classifiers or QNN-based classifiers, especially in nonlinear cases. Is this a general observation? Does it benefit from encoding the feature data with unitary gates? Please correct me if I am wrong.
Currently I have reached 90% training accuracy. To further improve performance, I am looking at incorporating the quantum natural gradient into training. Any suggestions?
I see in the original paper that “the complexity of the circuit increases linearly with the size of input space”. Does that mean that if I have thousands of (classical) input features, there will be thousands of gates? Is this usable on real quantum hardware?
I believe so, yes. Trying to understand the best way to encode classical data into a quantum system is still very much a research question; however, you may be interested in the paper Quantum embeddings for machine learning, where variational circuit techniques are used to find an optimal embedding for classification.
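For intuition on the linear scaling, here is a minimal single-qubit sketch in the spirit of the original re-uploading paper: features are packed in groups of three into Rot gates, so the number of encoding gates per layer grows linearly with the number of features. All names and shapes here are illustrative, not taken from the paper's code:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def reuploading_circuit(x, weights):
    """Single-qubit re-uploading sketch.

    x is assumed pre-padded with zeros to a multiple of 3 features.
    Each group of three features fills one Rot gate, so the number of
    encoding gates per layer grows linearly with the number of features.
    """
    for w in weights:                                   # weights: (n_layers, 3)
        for i in range(0, len(x), 3):
            qml.Rot(x[i], x[i + 1], x[i + 2], wires=0)  # data re-upload
        qml.Rot(w[0], w[1], w[2], wires=0)              # trainable rotation
    return qml.expval(qml.PauliZ(0))

x = np.array([0.1, 0.5, -0.3, 0.8, 0.0, 0.0])  # 4 features, zero-padded to 6
weights = np.random.uniform(0, 2 * np.pi, (2, 3))
print(reuploading_circuit(x, weights))
```

So thousands of features would indeed mean on the order of thousands of encoding gates per layer, which is why encoding strategy matters so much.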
My experiments indicate that the re-uploading classifier might work better than variational classifiers or QNN-based classifiers, especially in nonlinear cases. Is this a general observation?
This is definitely something recent research may suggest. For example, this tutorial on the expressivity of quantum models delves into the topic in more detail (and is based on the paper The effect of data encoding on the expressive power of variational quantum machine learning models by Schuld, Sweke, and Meyer).
Have you tried downloading the data re-uploading tutorial and modifying it to classify simple datasets (such as moons)?
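For reference, the moons dataset can be generated with scikit-learn in a couple of lines; the label remapping is just one common convention for classifiers whose output is an expectation value in [-1, 1]:

```python
from sklearn.datasets import make_moons

# Two interleaving half-moons: a standard nonlinear toy benchmark
X, y = make_moons(n_samples=200, noise=0.1, random_state=42)
y = 2 * y - 1  # map labels {0, 1} to {-1, 1} for expectation-value outputs
```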
Currently I have reached 90% training accuracy. To further improve performance, I am looking at incorporating the quantum natural gradient into training. Any suggestions?
You can give the QNGOptimizer a go; however, there is currently a restriction that the cost function passed to the optimizer must be a single QNode, with no classical processing. This might make it difficult to integrate QNG into a model with a more complex cost function.
However, this is something we are hoping to extend and generalize in a future release of PennyLane.
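As a starting point, here is a minimal sketch of QNGOptimizer in the single-QNode setting described above. The circuit, observable, and hyperparameters are illustrative placeholders, not a recommendation for your model:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def cost(params):
    # A single QNode returning one expectation value: the form
    # QNGOptimizer currently requires (no classical post-processing).
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

opt = qml.QNGOptimizer(stepsize=0.05)
params = np.array([0.4, -0.2], requires_grad=True)

for _ in range(100):
    params = opt.step(cost, params)

print(cost(params))
```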
you may be interested in the paper Quantum embeddings for machine learning, where variational circuit techniques are used to find an optimal embedding for classification.
Does it mean I can try to combine feature embedding with re-uploading?
Have you tried downloading the data re-uploading tutorial and modifying it to classify simple datasets (such as moons)?
Yes, I have tried it. Re-uploading works well for simple cases, but we have some specific datasets. Even after extending re-uploading to 2 qubits, the training accuracy still only reaches 90%.
Besides, I am very confused about how to design quantum circuits to improve performance (for either a QNN or re-uploading). The only way I know is to use more layers… Do we need to pick specific gates/layers for our specific problems? Any suggestions?
And thank you for your suggestion. I will try QNG.
Does it mean I can try to combine feature embedding with re-uploading?
Just to be clear on terminology: data re-uploading can refer to the specific construction in the paper, or it can simply mean repeating a feature map or feature embedding (these are the same thing, in that a feature map “embeds” the data and hence gives rise to an embedding). And sure, in principle you can repeat any embedding; but if you are working on a research project (as opposed to just playing around), it’s always good to use small examples and understand what the embedding actually does and what repetitions may change…
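To make the terminology concrete, here is a minimal sketch of repeating an embedding, using AngleEmbedding alternated with StronglyEntanglingLayers as one arbitrary choice; the shapes and hyperparameters are illustrative only:

```python
import pennylane as qml
from pennylane import numpy as np

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev)
def repeated_embedding(x, weights):
    # Alternate the feature embedding with trainable layers; repeating
    # the embedding is "re-uploading" in the general sense.
    for layer_weights in weights:      # weights: (n_repeats, 1, n_wires, 3)
        qml.AngleEmbedding(x, wires=range(n_wires))
        qml.StronglyEntanglingLayers(layer_weights, wires=range(n_wires))
    return qml.expval(qml.PauliZ(0))

x = np.array([0.3, -0.7], requires_grad=False)
weights = np.random.uniform(0, 2 * np.pi, (3, 1, n_wires, 3))
print(repeated_embedding(x, weights))
```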
I think @josh was more pointing you at other papers that discuss similar ideas; there is no guarantee that the ideas in the embeddings paper will improve your test accuracy, and training in the framework explored there is very costly (just as a warning).
Again, compared to neural nets, variational quantum classifiers are much less developed and still have to prove themselves, which is why they are so exciting. Your questions about how to build the circuit for a specific problem and which feature maps and optimisers to use are all subject to ongoing research. In other words, there is no official recipe yet, and we don’t even know whether they work at all for the problem you are interested in. Good luck!
Thanks for the explanations! By the way, I have a simple question (it might be silly). Where does the nonlinearity come from in a QNN? My understanding is that:
The feature embedding layer introduces nonlinearity.
Unentangled parametrized gates do not introduce any nonlinearity. Is this right?
Entangling is nonlinear, but can it provide strong nonlinearity? Say, by simply stacking several StronglyEntanglingLayers, is it possible to mimic a strongly nonlinear decision boundary in classification?
Measurements introduce nonlinearity. I have seen this in papers, but I don’t actually understand the logic.
I would really appreciate it if you could briefly explain, or guide me to some references. Thanks in advance.
That’s a very valid question, but may I ask you to open a new thread for it? That will make it easier for others to find, since this thread would otherwise have a misleading header.