- I see in the original paper that “the complexity of the circuit increases linearly with the size of input space”. Does that mean, if I have thousands of input features (classical features), there will be thousands of gates? Is this usable for real QC?
I believe so, yes. Finding the best way to encode classical data into a quantum system is still very much an open research question; however, you may be interested in the paper Quantum embeddings for machine learning, where variational circuit techniques are used to find an optimal embedding for classification.
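To make the scaling concrete, here is a small illustrative sketch (plain numpy, not PennyLane code) contrasting two common encodings: angle encoding, whose gate count grows linearly with the number of features, and amplitude encoding, which packs 2^n features into the amplitudes of n qubits. The function names here are hypothetical, chosen just for this example.

```python
import numpy as np

def angle_encoding_gate_count(n_features):
    # Angle encoding: one rotation gate per classical feature,
    # so the gate count grows linearly with the input dimension.
    return n_features

def amplitude_encoding(features):
    """Pack a length-2**n feature vector into the amplitudes of an
    n-qubit state (after normalisation). The qubit count grows only
    logarithmically, though preparing the state can still be costly."""
    x = np.asarray(features, dtype=float)
    state = x / np.linalg.norm(x)
    n_qubits = int(np.log2(len(state)))
    return state, n_qubits

state, n_qubits = amplitude_encoding(np.arange(1, 9))  # 8 features
print(angle_encoding_gate_count(1000))  # 1000 rotations for 1000 features
print(n_qubits)                         # 3 qubits hold 8 amplitudes
```

So with thousands of features, angle-style encodings do imply thousands of gates, which is one motivation for studying more compact or learned embeddings.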
- My experiment indicates that the data re-uploading classifier might work better than variational classifiers or the QNN-based classifier, especially for nonlinear cases. Is this a general observation?
This is definitely something recent research suggests. For example, this tutorial on the expressivity of quantum models delves into the topic in more detail (and is based on the paper The effect of data encoding on the expressive power of variational quantum machine learning models by Schuld, Sweke, and Meyer).
Have you tried downloading the data re-uploading tutorial and modifying it to classify simple datasets (such as moons)?
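In case it helps, here is a minimal numpy sketch of the core idea behind data re-uploading: a single qubit where every layer re-encodes the input alongside trainable angles, in the spirit of Pérez-Salinas et al. The specific gate pattern and parameter shapes are illustrative assumptions, not the tutorial's exact circuit.

```python
import numpy as np

def ry(a):
    # Single-qubit Y-rotation matrix
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]], dtype=complex)

def rz(a):
    # Single-qubit Z-rotation matrix
    return np.array([[np.exp(-1j * a / 2), 0],
                     [0, np.exp(1j * a / 2)]], dtype=complex)

def reupload_circuit(x, params):
    """Apply L layers, each re-uploading the 2D input x mixed with
    trainable angles: Ry(t0 + t1*x0) followed by Rz(t2 + t3*x1)."""
    state = np.array([1.0, 0.0], dtype=complex)
    for t in params:                      # one row of 4 angles per layer
        state = ry(t[0] + t[1] * x[0]) @ state
        state = rz(t[2] + t[3] * x[1]) @ state
    return state

def predict(x, params):
    # Probability of measuring |0>; threshold at 0.5 for binary labels
    return abs(reupload_circuit(x, params)[0]) ** 2

rng = np.random.default_rng(0)
params = rng.normal(size=(3, 4))          # 3 layers, 4 angles each
p = predict([0.1, -0.4], params)
print(0.0 <= p <= 1.0)                    # a valid probability
```

Training would then adjust `params` to push these probabilities towards the correct labels for each point in the moons dataset.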
- Currently I have reached 90% training accuracy. To further improve performance, I am looking at combining this with the quantum natural gradient for training. Any suggestions?
You can give the QNGOptimizer a go; however, there is currently a restriction that the cost function passed to the optimizer must be a single QNode – there can be no classical processing. This might make it difficult to integrate QNG into a model with a more complex cost function.
However, this is something we are hoping to extend and generalize in a future release of PennyLane.
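To give a feel for what the quantum natural gradient does, here is a toy numpy sketch (independent of the QNGOptimizer API) for the single-qubit cost C(θ) = ⟨0|RY(θ)† Z RY(θ)|0⟩ = cos θ. For this parametrisation the Fubini–Study metric is the constant 1/4 (the variance of the generator Y/2 in |0⟩), so the QNG step simply rescales the vanilla gradient by its inverse.

```python
import numpy as np

# QNG descent on the toy cost C(theta) = cos(theta), whose minimum is -1.
def cost(theta):
    return np.cos(theta)

def grad(theta):
    return -np.sin(theta)

metric = 0.25   # Fubini-Study metric tensor (a scalar for one parameter)
eta = 0.1       # step size
theta = 0.5     # initial parameter

for _ in range(100):
    # QNG update: precondition the gradient with the inverse metric
    theta = theta - eta * (1 / metric) * grad(theta)

print(round(cost(theta), 4))  # -> -1.0, the minimum of the cost
```

With PennyLane's QNGOptimizer the metric tensor is computed for you from the circuit, but the single-QNode restriction above applies.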