VQLS: accuracy of x

Good morning,

I am trying the VQLS demo and have a question on the output x.

In the demo, the quantum probabilities and classical probabilities match well. But when I try to access x, I find that it is not accurate enough:

Classical result: x = [0.25253814 0.25253814 0.35355339 0.35355339 0.25253814 0.25253814 0.35355339 0.35355339]

Quantum result: x = [0.29177217 0.2907112 0.4064591 0.40583864 0.29186469 0.29102234 0.40754877 0.40554531]

Since the cost function is already very small, I am curious if there is any way to improve the accuracy?

Thank you so much for any suggestions!

Sincerely

Hey @Dobby and welcome to the forum!

I believe the differences in the vectors you provide are due to the normalization. For example, take the classical (unnormalized) result x and find x / np.linalg.norm(x). The result will be much closer to the quantum value you provide.

More generally, the quantum system can only prepare a normalized state, and so the vectors x that we derive from the state’s amplitudes are also normalized. That is why we then normalize the target x vector for a fairer comparison.
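As a quick sanity check, here is a minimal NumPy sketch using the two vectors quoted above: the classical result is not a unit vector, and normalizing it brings it much closer to the quantum amplitudes.

```python
import numpy as np

# Classical solution as printed by the demo (note: not a unit vector)
x_classical = np.array([0.25253814, 0.25253814, 0.35355339, 0.35355339,
                        0.25253814, 0.25253814, 0.35355339, 0.35355339])

# Quantum result: amplitudes of the prepared state, normalized by construction
x_quantum = np.array([0.29177217, 0.2907112, 0.4064591, 0.40583864,
                      0.29186469, 0.29102234, 0.40754877, 0.40554531])

# Normalize the classical vector before comparing
x_classical_normalized = x_classical / np.linalg.norm(x_classical)

print(np.linalg.norm(x_classical))                       # ~0.87, not 1
print(np.abs(x_classical_normalized - x_quantum).max())  # ~1e-3
```

After normalization the residual drops to around 10^-3, i.e., the two results agree to the accuracy the optimization reached.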


Hi Tom,

Thank you for the clarification! Still, I am curious: can the accuracy of VQLS be controlled in some way? If the difference is required to be smaller than some threshold (for example, 10^-6), is continuing to minimize the cost function the only way?

Thank you for any suggestions!

Hey @Dobby!

I have to admit that I’m not deeply familiar with the VQLS algorithm so this may be a bit of a learning process for us both!

Still, I am curious: can the accuracy of VQLS be controlled in some way?

Yes, indeed, this is the job of the cost function: to act as a quantitative measure of the accuracy for this problem. From this part of the demo, you can see that we pick the cost function C_G = 1 - \vert\langle b | \Psi \rangle\vert^{2}. Minimizing this means maximizing the overlap between |b\rangle and |\Psi\rangle.
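For intuition, here is a minimal classical sketch of that cost (plain NumPy, assuming both vectors are given as normalized state vectors):

```python
import numpy as np

def global_cost(b, psi):
    """C_G = 1 - |<b|psi>|**2 for two normalized state vectors."""
    overlap = np.vdot(b, psi)  # inner product <b|psi>
    return 1.0 - np.abs(overlap) ** 2

# Toy check: a state close to |b> gives a cost close to 0
b = np.array([1.0, 0.0])
psi = np.array([0.999, np.sqrt(1 - 0.999 ** 2)])  # unit vector by construction

print(global_cost(b, psi))  # 1 - 0.999**2 ≈ 0.002
print(global_cost(b, b))    # 0 for a perfect overlap
```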

If the difference is required to be smaller than some threshold (for example, 10^-6), is continuing to minimize the cost function the only way?

Yes, if we have set up the cost function in a way that captures the accuracy, then minimizing it further should increase the accuracy. There are a couple of options to help things along as the cost function becomes very small; for example, we could use a more advanced optimizer with a variable step size.
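As an illustration of a variable step size, here is a self-contained NumPy sketch using a backtracking (Armijo) line search on a toy one-parameter version of the overlap cost; in the demo itself one could instead try an adaptive optimizer such as Adam.

```python
import numpy as np

# Toy one-parameter version of the overlap cost: |psi(theta)> = (cos t, sin t),
# |b> = (1, 0), so C(t) = 1 - |<b|psi>|^2 = sin(t)**2, with gradient sin(2t).
cost = lambda t: np.sin(t) ** 2
grad = lambda t: np.sin(2 * t)

theta = 1.0  # deliberately poor starting point
for _ in range(50):
    g = grad(theta)
    eta = 1.0
    # Backtracking (Armijo) line search: halve the step until the cost
    # decreases enough, so the step size adapts automatically near the minimum.
    while cost(theta - eta * g) > cost(theta) - 0.5 * eta * g ** 2:
        eta *= 0.5
    theta -= eta * g

print(cost(theta))  # far below the starting cost sin(1)**2 ≈ 0.708
```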

Two questions I have for you are:

  • What measure of accuracy are you interested in? The cost function is an overlap/fidelity-based measure of accuracy.
  • For me, the optimization is already doing quite well with a cost of order 10^{-6} after 30 iterations. What target accuracy are you thinking of aiming for?

Cheers,
Tom

Hi Tom,

For my problem, I would like the relative error of x to be smaller than 1e-8. Anyhow, I have now achieved this target.

Thank you again for all your suggestions!

Dobby


Hi @Dobby, I have a small question for you: did you use this on some real-world problem? I want to know how you managed to create the b vector without normalizing it. I am struggling there and would also like to know about your implementation, if you can share some insights.

Hi @sajal. Welcome to the Forum!

I see that you asked several questions related to VQLS. Were all of them answered in this thread?

Or do you have an additional question related to this thread and this other one?

Hi @CatalinaAlbornoz,
At the end, when we sample our circuit, we get the probabilities of all the states. My question is: given a row of the A matrix, how do I find out the actual b value?

  1. How should we normalize the rows of A?
  2. How should we denormalize the resulting b value from A * x?

Hi @sajal,

If I understand your question correctly, you are wondering how to carry out this algorithm in practice, and struggling to deal with the b vector and the matrix A, right?

When the algorithm assumes it has access to a state |b\rangle, it is indeed required that b is a unit vector, i.e., if we write b=(b_1, b_2, \ldots, b_n), then \|b\|=\sqrt{\sum_{i=1}^{n}|b_i|^2}=1.

If the linear system Ax=b that you are interested in has a vector b that is not normalized, i.e., if \|b\|\neq 1, then you can simply normalize it yourself with the transformation b'=b/\|b\|. The solution x is then unchanged if you also transform A as A' = A/\|b\|.
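This rescaling is easy to verify classically; a minimal NumPy sketch (dividing both b and A by \|b\| so that the right-hand side becomes a unit vector):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
b = 10 * rng.normal(size=4)          # deliberately not a unit vector

x = np.linalg.solve(A, b)            # solution of the original system

# Divide both sides by ||b|| so the right-hand side becomes a unit vector
b_unit = b / np.linalg.norm(b)
A_scaled = A / np.linalg.norm(b)

x_scaled = np.linalg.solve(A_scaled, b_unit)
print(np.allclose(x, x_scaled))      # True: the solution x is unchanged
```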

Note that x itself may not be normalized, so we also have to be careful with what we mean when we say that the algorithm prepares the state |x\rangle. In reality, the output state is a normalized version of x and when we measure it, we obtain an output i with probability proportional to |x_i|^2.

This implies that you don’t really need to bother with changing A since the algorithm has the same behaviour regardless of any constant factors.

So for your problem of interest, normalize b, use that to prepare a corresponding valid state |b\rangle, and use the algorithm without changing A to obtain a state proportional to the solution |x\rangle, which can be sampled for instance to compute some target expectation value.

Hope that helps!

I understood the explanation and tried to do the same. The x vector generated by the quantum routine is significantly off compared to its classical counterpart.

I wanted to use the generated x vector as the coefficients of a general equation. But when I did that and checked the corresponding b_i values, the result was off by a huge margin. Can you tell me if this is expected behavior?

Let me just share the idea with which I am trying to use VQLS:
This is a general linear equation: \sum_{i=1}^{n} a_i x_i = b

Usually we have the a_i values, and for different x_i values we get different b_i. But for my problem, I do not have the a_i's; I have the x_i and b_i. So I just flipped the situation: during optimization the a_i take the role of the unknowns x_i, and once those are found, I will use them in place of the a_i, giving me a generalized equation for my problem.

Doing this, I received the x_i values and used the equation to compute \hat b_i, but the difference \vert \hat b_i - b_i \vert is huge, about 300.

I think I have been able to present my case. Can you please let me know your thoughts about this, and, with this as context, whether the results I am receiving are in fact wrong or are to be expected?

Please let me know if any further clarification is needed.

Hi @sajal,

I think I understand what you’re doing. The equation Ax=b can be written as \sum_{j}A_{ij}x_j = b_i for every i. Your idea is to define a^{(i)}_j=A_{ij} to get \sum_{j}a^{(i)}_j x_j = b_i and then interchange the role of a^{(i)} and x. Is that correct?

This makes sense to me and I don’t think it’s the cause of your problem. Two potential explanations come to mind:

  1. As I tried to explain before, the quantum algorithm doesn’t give you the correct x_i. Instead it gives you values that are proportional to the actual solutions x_i, and the proportionality constant can be very large or very small depending on the actual numerical values in your problem. I suggest that you compute all the \hat{b}_i and then normalize both the estimation \hat{b} and the original values b. If the algorithm is working correctly, these should coincide.

  2. I’m not so familiar with this variational algorithm, but it may be worthwhile to confirm that it can also function with non-square matrices A. In your approach, when you exchange the role of A and x, the matrix is not square, so it’s important to make sure the algorithm also works well in that case. I imagine it does, but good to double-check.
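Point 1 can also be checked entirely classically. Here is a minimal NumPy sketch with a made-up positive scale factor standing in for the quantum routine's unknown proportionality constant:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
x_true = rng.normal(size=4)
b = A @ x_true

# Hypothetical output of the quantum routine: proportional to the true
# solution, with an unknown (here made-up) positive scale factor.
x_est = 7.3 * x_true

b_hat = A @ x_est
print(np.abs(b_hat - b).max())       # large, even though x_est points the right way

# Normalize both vectors before comparing, as suggested in point 1
b_hat_unit = b_hat / np.linalg.norm(b_hat)
b_unit = b / np.linalg.norm(b)
print(np.allclose(b_hat_unit, b_unit))   # True
```

If your normalized \hat{b} and b still disagree after this comparison, the discrepancy is genuine and not just a scale artifact.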

Let me know how it goes!