Trace distance and fidelity

I’m trying to output two cost-function values (trace distance and fidelity) between a simple Bell state and a parametrized quantum state. Since both states are pure, the trace distance should equal sqrt(1 - fidelity), where the fidelity is just the squared overlap between the two states. But my code outputs values for the trace distance and the fidelity that don’t satisfy this relation. Does anybody have an idea what I am missing here?
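For context, the pure-state identity can be checked numerically. A minimal sketch with NumPy (the second state below is an arbitrary stand-in for the parametrized state, not the original code):

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Some other normalized pure state, standing in for the parametrized state
psi = np.array([1, 1j, 0, 1]) / np.sqrt(3)

# Fidelity of pure states: squared overlap |<phi|psi>|^2
fidelity = np.abs(np.vdot(bell, psi)) ** 2

# Trace distance: (1/2) * sum of |eigenvalues| of rho - sigma
rho = np.outer(bell, bell.conj())
sigma = np.outer(psi, psi.conj())
trace_distance = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

# For pure states, T(rho, sigma) == sqrt(1 - F)
assert np.isclose(trace_distance, np.sqrt(1 - fidelity))
```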

Hi,

The problem with your implementation is that the trace distance isn’t computed correctly. I believe this is because NumPy’s default matrix norm is the element-wise Frobenius norm, not the trace norm that the definition requires. Here is an alternate construction using SciPy’s sqrtm method, which computes the matrix square root.
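A minimal sketch of such a construction (the function name is mine, not the original code):

```python
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    """Trace distance T(rho, sigma) = (1/2) * Tr sqrt((rho - sigma)^dag (rho - sigma))."""
    delta = rho - sigma
    return 0.5 * np.trace(sqrtm(delta.conj().T @ delta))
```

Note that `np.linalg.norm(rho - sigma)` would instead give the Frobenius norm, sqrt(Tr[(rho - sigma)^dag (rho - sigma)]), which is why the two cost values disagreed.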

You can learn more about the trace norm here.

This is what the results look like after running this:

There is a very small (~10^{-9}) imaginary component, which probably comes from the matrix square root implementation, since it is an approximate numerical solver.

Let us know if you have any other questions!
Cheers,


@Jay_Soni
Hi Jay, thanks for your response.

Now I’m trying to train the quantum circuit via the trace distance, and apparently it is not getting trained, whereas it trains very well via the fidelity. What do you think the problem is here?

Actually, I just realized that tf.linalg.adjoint isn’t working properly.

Hi,

Yes, something seems to be wrong, as you have a complex cost value!

Hope you figure it out, let us know if you have any other questions!

@Jay_Soni
Hi Jay,

I’m pretty sure this is a proper cost function for the trace distance, but it still isn’t getting trained. Do you have any idea about this issue? The main problem is that the cost function isn’t decreasing.

I used this definition (which also gives the proper trace distance) to train, and the cost function decreases and trains well. But the training stops with an error message: "Self-adjoint eigen decomposition was not successful." Any idea? Sorry for asking so many questions.

Hi @Leeseok_Kim, I have seen that this error happens when your batch size is 1. I’m not sure how to change this in your code but maybe this will help you find a way to fix the error.

Let me know if this helps!


Thanks Catalina!
Actually, I solved this problem by using another TF function, tf.linalg.svd. Thanks a lot, though!
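For anyone reading later, here is a sketch of what the SVD-based version can look like (the function name is mine, and `rho` and `sigma` are assumed to be the two density matrices as TensorFlow tensors). Since rho - sigma is Hermitian, its singular values equal the absolute values of its eigenvalues, so the trace norm can be computed without an eigendecomposition:

```python
import tensorflow as tf

def trace_distance_tf(rho, sigma):
    # For the Hermitian matrix rho - sigma, the singular values are the
    # absolute values of the eigenvalues, so the trace distance is half
    # the sum of the singular values (half the trace norm).
    s = tf.linalg.svd(rho - sigma, compute_uv=False)
    return 0.5 * tf.reduce_sum(s)
```

This avoids the eigendecomposition that `tf.linalg.eigh`-based approaches rely on, which is presumably why it sidesteps the "Self-adjoint eigen decomposition was not successful" error.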


I’m glad you could find a solution @Leeseok_Kim! And thank you for posting it here. It will probably be very useful to many other PennyLaners.

Enjoy using PennyLane and don’t hesitate to come back to the forum if you have other questions!
