As part of my code I want to sample measurements from a circuit that are later fed into a classical neural network. To sanity-check the samples, I also plot the results from qml.probs together with the probability distribution obtained from the samples in a histogram. Across repeated runs I see systematic errors. An example is the following:
I expect the bar heights from probs and the samples to differ somewhat, since we are merely sampling; however, the positions of the sample peaks and the probs peaks should align, right?
I have tried to address the issue, thinking first that the way I convert binary strings from the samples to integers might be incorrect, but I believe that routine is correct. I also wondered whether bit ordering might play a role, but found that changing the ordering did not solve the issue.
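For reference, the conversion routine amounts to something like the following pure-numpy sketch (the helper name `bits_to_int` is mine, not from my actual code; it assumes PennyLane's convention that wire 0 is the most significant bit):

```python
import numpy as np

def bits_to_int(bits):
    # Interpret a sample row as a binary number with wire 0 as the
    # most significant bit, matching qml.probs' index ordering.
    return int("".join(str(int(b)) for b in bits), 2)

# Two example sample rows, as e.g. returned by a shot-based measurement
samples = np.array([[0, 0, 1, 1],
                    [1, 1, 0, 0]])
indices = [bits_to_int(row) for row in samples]
print(indices)  # [3, 12]
```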
In the code I essentially run a training routine for a QAOA circuit, then draw samples from it using the device “devShot”. Afterwards, using the device “devExact”, I call the quantum circuit to return the probabilities of measuring each computational basis state.
The only thing I can think of at the moment is that reshaping the torch tensor might affect the results, but that seems doubtful. I am not sure whether I have overlooked something very obvious in my implementation or whether I have misunderstood some aspect of sampling, qubit ordering, or the qml.probs function.
Hi @Viro,
Your approach makes sense. But with stochastic calculations we will always see differences, especially for a small number of samples. If you run it several times you will see that the results are always similar, as in your image, and this is already a good sign.
What we could call a good (large) number of samples is related, in this case, to the number of qubits (or wires). In other words, decreasing the number of qubits while keeping the same number of shots (because of memory) can improve the convergence. Look how cool your results are for 6 qubits (G = CreateRegularGraph(6,5)):
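As a rough illustration of the scaling involved (a toy numpy sketch with a made-up distribution, not your circuit): sampling from a fixed distribution with numpy's multinomial shows the maximum deviation from the true probabilities shrinking roughly like 1/sqrt(shots):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy distribution standing in for the exact qml.probs output
true_probs = np.array([0.5, 0.25, 0.125, 0.125])

errors = {}
for shots in (100, 10_000, 1_000_000):
    counts = rng.multinomial(shots, true_probs)
    empirical = counts / shots
    # Largest absolute deviation between sampled and exact probabilities
    errors[shots] = np.abs(empirical - true_probs).max()
    print(shots, errors[shots])
```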
Right, I get that, and I did indeed notice the trend you mentioned in the image with the lower qubit count. I just find it odd that one would get such a high sample count (orange bars) at positions where they are less likely to occur. From the images in the first post it almost seems as though the large orange bars correspond to the large blue bars, but are somehow indexed differently.
As you say, I might be reading too much into this, and it may merely be a matter of increasing the number of samples.
Hi @Viro, this is indeed a very interesting problem.
It could help if you made a minimum working example. This means making an example as simple as possible that shows the same behaviour. Sometimes by creating these examples you will notice an error, and if you don’t then at least you make it much easier for us to help you understand what’s happening here.
Hi again,
I am incredibly confused right now. Per the suggestion of @CatalinaAlbornoz, I tried to run the circuit for a very simple graph, namely a bipartite graph
Using this graph to calculate the max cut, one would expect the probability of measuring the strings 0011 and 1100 to be the highest. These correspond to 3 and 12 respectively, so on the histogram those are the indices that should show increased probability. The histogram, however, gives completely different results for both the samples and the probability call, as shown here:
I would at least expect the qml.probs() call to align with the expected results of [0011] and [1100], but even those seem misplaced.
This problem obviously cannot be attributed to needing more samples, as the system has merely 4 qubits. My understanding of the lexicographic ordering might be incorrect. I understand the output of qml.probs() as the probability of measuring the binary equivalent of the index of the vector; in other words, probs[3] would be the probability of measuring the bitstring [0011].
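To double-check that reading of the ordering, here is a small numpy sketch (independent of PennyLane) that builds the 4-qubit state |0011⟩ with Kronecker products, taking wire 0 as the most significant bit, and confirms that all probability mass sits at index 3:

```python
import numpy as np

zero = np.array([1.0, 0.0])  # |0>
one = np.array([0.0, 1.0])   # |1>

# |0011>: wires 0 and 1 in |0>, wires 2 and 3 in |1>,
# with wire 0 as the most significant bit
state = np.kron(np.kron(zero, zero), np.kron(one, one))
probs = np.abs(state) ** 2
print(int(np.argmax(probs)))  # 3, i.e. binary 0011
```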
I am not sure where to start to address the issue with regards to the qml.probs as I always assumed it would give me the expected behavior. Is there something here I have misunderstood?
Hi @Viro,
To answer half of your question: your probabilities are as they should be.
When you create your graph by calling CreateGraphFromList, your nodes are ordered as [0, 2, 3, 1]. When you then call qml.probs(wires=G.nodes), you are changing your basis ordering.
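As a plain-numpy illustration of that effect (a sketch of the reindexing, not PennyLane's internals): with wire order [0, 2, 3, 1], the bit at output position k is the value of wire order[k], so the state |0011⟩, which sits at index 3 in the natural ordering, shows up at index 0b0110 = 6 instead:

```python
import numpy as np

# Probability vector for |0011> in the natural wire order [0, 1, 2, 3]
probs = np.zeros(16)
probs[0b0011] = 1.0

order = [0, 2, 3, 1]  # node ordering returned by G.nodes in this example
reordered = np.zeros(16)
for idx in range(16):
    bits = [(idx >> (3 - w)) & 1 for w in range(4)]  # bit of wire w
    new_idx = 0
    for pos, w in enumerate(order):
        # Output position pos reports the bit of wire order[pos]
        new_idx |= bits[w] << (3 - pos)
    reordered[new_idx] += probs[idx]

print(int(np.argmax(reordered)))  # 6, i.e. binary 0110
```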
This may be the root of the problem with the samples part of your code.
Wow, a complete oversight by me! Thank you so very much for the help. Changing the wires from G.nodes to range(len(G.nodes)) indeed seems to solve the issue, at least for the toy problem that I presented.