Logical Question regarding ensemble classification notebook

In the ensemble classification demo, two different QPU circuits (differing only in the layer of CNOT gates) are run on two different simulators.
When I tried to replicate this on different datasets, I found that the accuracy was the same from both simulators. My questions are:
1) How do different CNOT connections make a big difference when all the rotational gates and their weights are the same in both circuits?
2) If I run the same QPU circuit on different simulators, does it make any difference in accuracy?
3) What are the cons of ensemble classification? Since it is very good at multi-class classification, why hasn't there been much research on it (I may have missed it)?

Please share your experience and how this approach can be made better than classical methods…

Hello @Kshitij_Dave,

  1. For classification tasks it is typically the embedding layer that is the most important part of the circuit. Talks by Maria Schuld go into this in more detail.

A paper by Seth Lloyd et al. describes embeddings in detail: https://arxiv.org/pdf/2001.03622.pdf

Regarding CNOT gates specifically, Los Alamos National Lab published a couple of papers that describe limitations of entanglement and barren plateaus/local minima: Challenges and opportunities in quantum machine learning | Nature Computational Science, and Subtleties in the trainability of quantum machine learning models | Quantum Machine Intelligence


Hi,

Thank you for sharing all of these great resources @kevinkawchak !
To complement your answer, here are my thoughts on @Kshitij_Dave’s questions.

  1. CNOTs, working together with other gates, create quantum entanglement between qubits, which is what gives the computation its power. So even if all the other gates and weights are the same, where the CNOTs are placed makes a big difference in what the circuit computes (see the sketch after this list).
  2. Running the same circuit on different simulators should give the same result if the simulation parameters are the same. Keep in mind that some simulators generate random numbers under the hood, so you should take that into account, and some include noise models, in which case they won't match an ideal simulator. Also be careful to specify the same number of shots for both simulators. Even numerical precision can introduce a very slight difference between simulators. Remember that simulators are programs running on classical machines that try to reproduce how a quantum computer would behave; if two simulators are built differently, you can get slightly different answers even when they should agree in theory.
  3. Ensemble classification can use a lot of classical and quantum computational resources. It's unclear to me whether this method will have a practical advantage over purely classical classification. It sounds like an interesting research question though, so please let us know here if you get any further insights!
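To make points 1 and 2 concrete, here is a minimal PennyLane sketch (my own illustration, not the circuits from the demo). The two ansätze below use identical rotation angles and differ only in the CNOT layer, yet they generally give different expectation values. The same ansatz run on two analytic simulators agrees up to floating-point precision; finite shots or a noise model would break that exact agreement. The second device assumes the pennylane-lightning plugin is available.

```python
# Sketch for points 1 and 2 (illustrative circuits, not the demo's).
import pennylane as qml
from pennylane import numpy as np

n_wires = 3
weights = np.array([0.1, 0.4, 0.7])  # identical rotation angles for both circuits

def ansatz_a(w):
    for i in range(n_wires):
        qml.RY(w[i], wires=i)
    qml.CNOT(wires=[0, 1])   # CNOT layer: chain 0 -> 1 -> 2
    qml.CNOT(wires=[1, 2])
    return qml.expval(qml.PauliZ(2))

def ansatz_b(w):
    for i in range(n_wires):
        qml.RY(w[i], wires=i)
    qml.CNOT(wires=[2, 1])   # same gates and weights, different connectivity: 2 -> 1 -> 0
    qml.CNOT(wires=[1, 0])
    return qml.expval(qml.PauliZ(2))

dev = qml.device("default.qubit", wires=n_wires)   # analytic statevector simulator
circ_a = qml.QNode(ansatz_a, dev)
circ_b = qml.QNode(ansatz_b, dev)
print("different CNOT layers:", circ_a(weights), circ_b(weights))  # generally unequal

# Point 2: the same circuit on a second analytic simulator
# (assumes the pennylane-lightning plugin is installed).
dev2 = qml.device("lightning.qubit", wires=n_wires)
circ_a2 = qml.QNode(ansatz_a, dev2)
print("same circuit, two simulators:", np.isclose(circ_a(weights), circ_a2(weights)))
```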

I have a question regarding your 2nd point:
I think the point of ensemble classification is to see which simulator performs better classification on different instances. If running the same circuit on different simulators gives exactly the same result, then how can one make a decision? And what is the difference between using 2 (or more) simulators with different circuits and applying those different circuits on the same (single) simulator?

I will surely go through all the materials. Thank you for providing detailed references.

Also, I have one basic question: is there any effect of applying Pauli X, Y, and Z gates for circuit embedding? The usage of rotational and CNOT gates is understandable…

Hello @Kshitij_Dave,
A good overview of embeddings in general can be found on Page 3: Kevin Kawchak on LinkedIn: Medical Developer Resources for Quantum Machine Learning

180-degree Pauli X, Y, and Z gates are less common than rotational gates for embeddings. A study was done on several embedding layers found in the literature with multiple rotational Pauli X, Y, and Z gate combinations: Kevin Kawchak on LinkedIn: Quantum ML Algorithms Prototyping for Neuroradiology
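As a small illustration of why fixed Pauli gates are less useful for encoding data (my own sketch, not taken from the study above): a Pauli X is just RX(π) up to a global phase, so it has no angle to carry a feature value, whereas RX(x) maps each data value to a different state.

```python
# Sketch: PauliX is a fixed 180-degree rotation, so it cannot encode a
# continuous feature; a rotation gate RX(x) can.
import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def pauli_embed(x):
    qml.PauliX(wires=0)      # ignores x: fixed rotation, no data dependence
    return qml.expval(qml.PauliZ(0))

@qml.qnode(dev)
def rotation_embed(x):
    qml.RX(x, wires=0)       # the angle carries the feature value
    return qml.expval(qml.PauliZ(0))

for x in [0.1, 1.0, 2.5]:
    print(x, pauli_embed(x), rotation_embed(x))
# pauli_embed is -1 for every x; rotation_embed varies as cos(x)
print(np.isclose(pauli_embed(0.0), rotation_embed(np.pi)))  # PauliX ~ RX(pi)
```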

Hi @Kshitij_Dave ,

The goal of the demo was to show how combining two QPUs could be useful; in this case the demo uses simulators to show this. Notice the phrase in the Define model section: “a different circuit is enacted for each device with a unique set of trainable parameters”. This means it’s not the same circuit on both simulators; they’re two different circuits being combined together.
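Here is a rough sketch of that general pattern (not the demo's actual code): two circuits, each with its own trainable parameters, run on two devices, with their outputs combined into a single prediction. The names and the simple averaging rule below are my own illustration; the demo uses its own rule for combining the devices' outputs.

```python
# Illustrative ensemble pattern: two circuits, two devices, combined output.
import pennylane as qml
from pennylane import numpy as np

n_wires = 2
dev_a = qml.device("default.qubit", wires=n_wires)
dev_b = qml.device("default.qubit", wires=n_wires)  # could be a different simulator/QPU

@qml.qnode(dev_a)
def circuit_a(x, params):
    qml.AngleEmbedding(x, wires=range(n_wires))
    qml.RY(params[0], wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

@qml.qnode(dev_b)
def circuit_b(x, params):
    qml.AngleEmbedding(x, wires=range(n_wires))
    qml.RY(params[0], wires=1)
    qml.CNOT(wires=[1, 0])          # different entangling layer, different parameters
    return qml.expval(qml.PauliZ(0))

def ensemble_predict(x, params_a, params_b):
    # Combine the two models' outputs; averaging is just one possible rule.
    return 0.5 * (circuit_a(x, params_a) + circuit_b(x, params_b))

x = np.array([0.3, 0.8])
print(ensemble_predict(x, np.array([0.2]), np.array([0.5])))
```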

These are valid embedding options.

The embedding you choose will determine how well the circuit is able to find patterns in your data. It’s all about how your classical data gets translated into a quantum state. There’s no one-size-fits-all, so for different datasets and different problems one embedding may be more useful than another.

You can learn more about embeddings on our glossary page, or read this blog post on embeddings to learn more. Then try out PennyLane’s different embedding templates!
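For example, here is a quick sketch (my own, under the assumption that analytic state access is fine for experimenting) comparing two of PennyLane's built-in embedding templates on the same feature vector; the resulting states are very different, which is why the best embedding depends on your dataset.

```python
# Comparing two embedding templates on the same features.
import pennylane as qml
import numpy as np

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)
features = np.array([0.5, 1.2])

@qml.qnode(dev)
def angle_embedded(x):
    qml.AngleEmbedding(x, wires=range(n_wires), rotation="Y")
    return qml.state()

@qml.qnode(dev)
def amplitude_embedded(x):
    qml.AmplitudeEmbedding(x, wires=range(n_wires), pad_with=0.0, normalize=True)
    return qml.state()

print(angle_embedded(features))
print(amplitude_embedded(features))  # same data, very different quantum states
```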