 # VQLS: accuracy of x

Good morning,

I am trying the VQLS demo and have a question about the output x.

In the demo, the quantum probabilities and classical probabilities match well. But when I tried to access x itself, I found that it is not accurate enough:

Classical result: x = [0.25253814 0.25253814 0.35355339 0.35355339 0.25253814 0.25253814 0.35355339 0.35355339]

Quantum result: x = [0.29177217 0.2907112 0.4064591 0.40583864 0.29186469 0.29102234 0.40754877 0.40554531]

Since the cost function is already very small, I am curious whether there is any way to improve the accuracy.

Thank you so much for any suggestions!

Sincerely

Hey @Dobby and welcome to the forum!

I believe the differences between the vectors you provide are due to normalization. For example, take the classical (unnormalized) result x and compute x / np.linalg.norm(x). The result will be much closer to the quantum values you provide.

More generally, the quantum system can only prepare a normalized state, and so the vectors x that we derive from the state’s amplitudes are also normalized. That is why we then normalize the target x vector for a fairer comparison.
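To illustrate, here is a quick check using the two vectors quoted above (NumPy assumed):

```python
import numpy as np

# Classical (unnormalized) solution reported in the question above.
x_classical = np.array([0.25253814, 0.25253814, 0.35355339, 0.35355339,
                        0.25253814, 0.25253814, 0.35355339, 0.35355339])

# Quantum result reported above (amplitudes of a normalized state).
x_quantum = np.array([0.29177217, 0.2907112, 0.4064591, 0.40583864,
                      0.29186469, 0.29102234, 0.40754877, 0.40554531])

# Normalize the classical vector to unit length for a fair comparison.
x_normalized = x_classical / np.linalg.norm(x_classical)
print(x_normalized)

# The normalized classical values are now close to the quantum amplitudes.
print(np.max(np.abs(x_normalized - x_quantum)))
```

The remaining difference is then only the optimization error, not the normalization mismatch.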


Hi Tom,

Thank you for the clarification! Still, I am curious: can the accuracy of VQLS be controlled in some way? If the difference is required to be smaller than some threshold (for example, 10^-6), is continuing to minimize the cost function the only way?

Thank you for any suggestions!

Hey @Dobby!

I have to admit that I’m not deeply familiar with the VQLS algorithm so this may be a bit of a learning process for us both!

Still, I am curious: can the accuracy of VQLS be controlled in some way?

Yes, indeed this should be the objective of the cost function - to act as a quantitative measure of accuracy for this problem. From this part, you can see that the cost function chosen is C_G = 1 - \vert\langle b | \Psi \rangle\vert^{2}. Minimizing it means maximizing the overlap between |b\rangle and |\Psi\rangle.
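As a toy illustration of that cost (the two states below are made-up example vectors, not the demo's):

```python
import numpy as np

# Toy illustration of the cost C_G = 1 - |<b|Psi>|^2 for two normalized
# states. |b> and |Psi> here are invented example vectors.
b = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
psi = np.array([1.0, 0.9, 0.1, 0.0])
psi = psi / np.linalg.norm(psi)   # quantum states are unit vectors

overlap = np.vdot(b, psi)          # <b|Psi>
cost = 1 - np.abs(overlap) ** 2    # C_G

# The cost goes to 0 as |Psi> approaches |b> (up to a global phase).
print(cost)
```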

If the difference is required to be smaller than some threshold (for example, 10^-6), is continuing to minimize the cost function the only way?

Yes, if we have set up the cost function in a way that captures the accuracy, then minimizing it further should increase the accuracy. There are a couple of options to help things along as the cost function becomes very small; for example, we could use a more advanced optimizer with a variable step size.
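As a sketch of what a variable step size looks like, here is gradient descent with a backtracking (Armijo) line search on a stand-in quadratic cost. The cost function and all settings below are illustrative, not part of the demo; in the demo itself you could instead swap in one of the adaptive optimizers the library provides.

```python
import numpy as np

# Stand-in cost with minimum at theta = [1, -2]; a placeholder for the
# (much more expensive) VQLS cost function.
def cost(theta):
    return (theta[0] - 1.0) ** 2 + 10.0 * (theta[1] + 2.0) ** 2

def grad(theta):
    return np.array([2.0 * (theta[0] - 1.0), 20.0 * (theta[1] + 2.0)])

theta = np.zeros(2)
for _ in range(500):
    g = grad(theta)
    t = 1.0
    # Shrink the step until the Armijo sufficient-decrease condition holds,
    # so the step size adapts to the local shape of the cost landscape.
    while cost(theta - t * g) > cost(theta) - 1e-4 * t * np.dot(g, g):
        t *= 0.5
    theta = theta - t * g

print(theta, cost(theta))  # theta approaches [1, -2], cost approaches 0
```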

Two questions I have for you are:

• What measure of accuracy are you interested in? The cost function is an overlap/fidelity-based measure of accuracy.
• For me, the optimization is already doing quite well with a cost of order 10^{-6} after 30 iterations. What target accuracy are you thinking of aiming for?
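One way to relate the two measures: for normalized real vectors with positive overlap, \vert |\Psi\rangle - |b\rangle \vert^{2} = 2(1 - \sqrt{1 - C_G}) \approx C_G for small C_G, so a cost of 10^{-6} corresponds to a vector error of roughly 10^{-3}. A small numerical check (using a randomly perturbed example vector, not the demo's data):

```python
import numpy as np

# For normalized real vectors with positive overlap, the distance and the
# overlap-based cost C = 1 - <b|psi>^2 satisfy
#   ||psi - b||^2 = 2(1 - sqrt(1 - C)) ~ C   for small C,
# so the vector error scales like sqrt(C).
rng = np.random.default_rng(0)
b = rng.normal(size=8)
b /= np.linalg.norm(b)

psi = b + 1e-3 * rng.normal(size=8)   # small perturbation of b
psi /= np.linalg.norm(psi)
if np.dot(b, psi) < 0:                # fix the sign convention
    psi = -psi

C = 1 - np.dot(b, psi) ** 2
err = np.linalg.norm(psi - b)
print(C, err, np.sqrt(C))  # err is close to sqrt(C)
```

So pushing the vector error below a target like 10^{-6} requires driving the cost down to roughly the square of that target.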

Cheers,
Tom

Hi Tom,

For my problem, I would like the relative error of x to be smaller than 1e-8. In any case, I have now achieved this target.

Thank you again for all your suggestions!

Dobby


Hi @Dobby, I have a small question for you: did you use this on some real-world problem? I want to know how you managed to create the b vector without normalizing it. I am struggling there, and I would also like to know about your implementation, if you can share some insights.

Hi @sajal. Welcome to the Forum!

I see that you asked several questions related to VQLS. Were all of them answered in this thread?

Or do you have an additional question related to this thread and this other one?