qml.math.relative_entropy returns unexpected results

My demo passes pure states generated by single-qubit rotation gates to the qml.math.relative_entropy() function. According to the official documentation (qml.math.relative_entropy — PennyLane 0.38.0 documentation), the relative entropy D(rho || sigma) = Tr[rho (log rho - log sigma)] should be inf for two different pure states, because rho then has weight on the kernel of sigma, where log sigma diverges. However, my demo returns either a finite value or nan.

Demo is here:

import tensorflow as tf
import pennylane as qml
import numpy as np

# Single-qubit circuit whose output is the (pure) state's density matrix
dev = qml.device("default.qubit", wires=1)
@qml.qnode(dev)
def circuit0(theta1, theta2):
    qml.RX(theta1, 0)
    qml.RZ(theta2, 0)
    return qml.density_matrix([0])

# Build a batch of 10 density matrices from random rotation angles
theta1 = np.random.uniform(0, np.pi, size=(1))
theta2 = np.random.uniform(0, 2 * np.pi, size=(1))
data = circuit0(theta1, theta2)
data = tf.reshape(data, [1, 2, 2])

for i in range(9):
    theta1 = np.random.uniform(0, np.pi, size=(1))
    theta2 = np.random.uniform(0, 2 * np.pi, size=(1))
    new = circuit0(theta1, theta2)
    new = tf.reshape(new, [1, 2, 2])
    data = tf.concat([data, new], 0)

# Print each state's von Neumann entropy together with the pairwise relative entropies
for i in range(10):
    for j in range(10):
        print(
            i, j,
            qml.math.vn_entropy(data[i, :, :], indices=[0]),
            qml.math.vn_entropy(data[j, :, :], indices=[0]),
            qml.math.relative_entropy(data[i, :, :], data[j, :, :], check_state=True),
        )

This is the output. As you can see, the vn_entropy results are correct (essentially zero, as expected for pure states), but the relative_entropy results are wrong: they are either nan or a finite number instead of inf. I deleted unnecessary rows to make it easier to read.

1 0 tf.Tensor(4.5408119547162706e-15, shape=(), dtype=float64) tf.Tensor(1.1102230246251564e-16, shape=(), dtype=float64) tf.Tensor(nan, shape=(), dtype=float64)
1 1 tf.Tensor(4.5408119547162706e-15, shape=(), dtype=float64) tf.Tensor(4.5408119547162706e-15, shape=(), dtype=float64) tf.Tensor(-2.3665827156630354e-30, shape=(), dtype=float64)
1 2 tf.Tensor(4.5408119547162706e-15, shape=(), dtype=float64) tf.Tensor(8.521795631430148e-15, shape=(), dtype=float64) tf.Tensor(27.09213385576561, shape=(), dtype=float64)
1 3 tf.Tensor(4.5408119547162706e-15, shape=(), dtype=float64) tf.Tensor(-0.0, shape=(), dtype=float64) tf.Tensor(nan, shape=(), dtype=float64)
1 4 tf.Tensor(4.5408119547162706e-15, shape=(), dtype=float64) tf.Tensor(1.5581232509178702e-15, shape=(), dtype=float64) tf.Tensor(33.24765625394799, shape=(), dtype=float64)
1 5 tf.Tensor(4.5408119547162706e-15, shape=(), dtype=float64) tf.Tensor(2.2204460492503126e-16, shape=(), dtype=float64) tf.Tensor(nan, shape=(), dtype=float64)
1 6 tf.Tensor(4.5408119547162706e-15, shape=(), dtype=float64) tf.Tensor(-0.0, shape=(), dtype=float64) tf.Tensor(nan, shape=(), dtype=float64)
1 7 tf.Tensor(4.5408119547162706e-15, shape=(), dtype=float64) tf.Tensor(-2.220446049250313e-16, shape=(), dtype=float64) tf.Tensor(nan, shape=(), dtype=float64)
1 8 tf.Tensor(4.5408119547162706e-15, shape=(), dtype=float64) tf.Tensor(-2.220446049250313e-16, shape=(), dtype=float64) tf.Tensor(nan, shape=(), dtype=float64)
1 9 tf.Tensor(4.5408119547162706e-15, shape=(), dtype=float64) tf.Tensor(-0.0, shape=(), dtype=float64) tf.Tensor(nan, shape=(), dtype=float64)

The output of qml.about() is:

Name: PennyLane
Version: 0.37.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /root/miniconda3/envs/tc/lib/python3.9/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane_Lightning

Platform info:           Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python version:          3.9.18
Numpy version:           1.24.3
Scipy version:           1.13.1
Installed devices:
- default.clifford (PennyLane-0.37.0)
- default.gaussian (PennyLane-0.37.0)
- default.mixed (PennyLane-0.37.0)
- default.qubit (PennyLane-0.37.0)
- default.qubit.autograd (PennyLane-0.37.0)
- default.qubit.jax (PennyLane-0.37.0)
- default.qubit.legacy (PennyLane-0.37.0)
- default.qubit.tf (PennyLane-0.37.0)
- default.qubit.torch (PennyLane-0.37.0)
- default.qutrit (PennyLane-0.37.0)
- default.qutrit.mixed (PennyLane-0.37.0)
- default.tensor (PennyLane-0.37.0)
- null.qubit (PennyLane-0.37.0)
- lightning.qubit (PennyLane-Lightning-0.37.0)
None

After looking into the source code, I found that the problem is located in the `_compute_relative_entropy(rho, sigma, base=None)` method in quantum.py.
The `qml.math.linalg.eigh()` function returns eigenvalues that should be exactly zero but come out either slightly bigger or slightly smaller than zero.
When such an eigenvalue is slightly smaller than zero, np.log returns nan.
When it is slightly bigger than zero, np.log returns a finite value instead of the -inf it would return for an exact zero, so the relative entropy comes out as a finite number instead of inf.
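For illustration, here is a minimal plain-NumPy sketch (not PennyLane's internal code; the tiny eigenvalue is a hand-picked stand-in for what eigh returns instead of an exact zero):

import numpy as np

eps = 1e-16  # stand-in for an eigenvalue that should be exactly zero

print(np.log(-eps))  # nan  ("invalid value encountered in log")
print(np.log(eps))   # about -36.8, a large but finite number
print(np.log(0.0))   # -inf, the value that would make the relative entropy come out as inf

Depending on which side of zero the spurious eigenvalue lands, the np.log(evs_sig) term therefore contributes either nan or a finite number instead of -inf.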
I hope my findings provide some insight into this problem.

Hi @Bear2s ,

Thank you for posting this and adding your findings! I can replicate this with PennyLane v0.38.1

I think the problem might be more complicated than that. I'll discuss this with the team to figure out what the next step should be; we'll probably need a bug report, so I'll keep you updated.

So far this is what I’ve found:

I sometimes randomly get this warning:

/usr/local/lib/python3.10/dist-packages/pennylane/math/single_dispatch.py:99: RuntimeWarning: invalid value encountered in log
  ar.register_function("numpy", "entr", lambda x: -np.sum(x * np.log(x), axis=-1))

If I change the order in which I pass the density matrices I sometimes get this warning:

/usr/local/lib/python3.10/dist-packages/autoray/autoray.py:81: RuntimeWarning: invalid value encountered in log
  return func(*args, **kwargs)

And I sometimes don’t get any warning at all.

Below is the code to replicate my results.

import pennylane as qml
import numpy as np

# Run a circuit and return a density matrix
dev = qml.device("default.qubit", wires=1)
@qml.qnode(dev)
def circuit0(theta1,theta2):
    qml.RX(theta1, 0)
    qml.RZ(theta2, 0)
    return qml.density_matrix([0])

dm0 = np.array(circuit0(1.3,1.4))
print('dm0: \n',dm0)

# We use slightly modified copies of dm0 for the tests below

# Create another density matrix to calculate the relative entropy
dm1 = np.array([[0.3, 0], [0, 0.7]])

# dm2 = dm0 (rebuilt from the printed values above)
dm2 = np.array([[0.63374941, 0.47476908+0.08188662j], [0.47476908-0.08188662j, 0.36625059]])
print('Result with 0.08188662j: ',qml.math.relative_entropy(dm1,dm2,check_state=True))

#dm3 ~ dm0 (slightly changed the complex values)
dm3 = np.array([[0.63374941, 0.47476908+0.081886604j], [0.47476908-0.081886604j, 0.36625059]])
print('Result with 0.081886604j: ',qml.math.relative_entropy(dm1,dm3,check_state=True))

# dm4 ~ dm3 (slightly changed the complex values again)
dm4 = np.array([[0.63374941, 0.47476908+0.081886605j], [0.47476908-0.081886605j, 0.36625059]])
print('Result with 0.081886605j (returns autoray warning): ',qml.math.relative_entropy(dm1,dm4,check_state=True))

# We change the order in which we pass the two matrices to relative_entropy to show that we get a different warning
print('Result with 0.081886605j (returns single_dispatch warning): ',qml.math.relative_entropy(dm4,dm1,check_state=True))

When I run the code above, this is what I get:

dm0: 
 [[0.63374941+0.j         0.47476908+0.08188662j]
 [0.47476908-0.08188662j 0.36625059+0.j        ]]
Result with 0.08188662j:  nan
Result with 0.081886604j:  12.217725194379632
Result with 0.081886605j (returns autoray warning):  nan
Result with 0.081886605j (returns single_dispatch warning):  nan
/usr/local/lib/python3.10/dist-packages/autoray/autoray.py:81: RuntimeWarning: invalid value encountered in log
  return func(*args, **kwargs)
/usr/local/lib/python3.10/dist-packages/pennylane/math/single_dispatch.py:99: RuntimeWarning: invalid value encountered in log
  ar.register_function("numpy", "entr", lambda x: -np.sum(x * np.log(x), axis=-1))

My hypothesis is that there’s some floating point error or rounding somewhere causing trouble, but it could be something else entirely!
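One way to check this (a quick extra test, not part of the code above) is to look directly at the eigenvalues of the nearly pure matrices:

import numpy as np

dm2 = np.array([[0.63374941, 0.47476908+0.08188662j], [0.47476908-0.08188662j, 0.36625059]])
dm4 = np.array([[0.63374941, 0.47476908+0.081886605j], [0.47476908-0.081886605j, 0.36625059]])

# The smaller eigenvalue of each matrix is only approximately zero; whether it lands
# slightly above or slightly below zero decides between a finite result and nan.
print('dm2 eigvals: ', np.linalg.eigvalsh(dm2))
print('dm4 eigvals: ', np.linalg.eigvalsh(dm4))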

Hi @Bear2s ,

It looks like this might be a consequence of numerical instability near divergences for relative entropy.

The warnings can be a bit confusing, though, so we'll look into adding a note about this in the documentation.

I tried the following code, and I do get inf even though one of the eigenvalues of the matrices is zero. So here we see the case where we get both the warnings and the correct result. This supports the theory that the issue is related to numerical instability rather than to an actual bug in the code.

import pennylane as qml
import numpy as np

# Create a density matrix with one eigenvalue being zero, and print out its eigenvalues
dm0 = np.array([[0, 0], [0, 1]])
print('dm0 eigvals: ',np.linalg.eig(dm0)[0])

# Create another density matrix and print out its eigenvalues
dm1 = np.array([[0.3, 0], [0, 0.7]])
print('dm1 eigvals: ',np.linalg.eig(dm1)[0])

# Calculate the relative entropy
print('Result with dm0: ',qml.math.relative_entropy(dm1,dm0,check_state=True))

The code above gives me the results (and warnings) below:

dm0 eigvals:  [0. 1.]
dm1 eigvals:  [0.3 0.7]
Result with dm0:  inf
/usr/local/lib/python3.10/dist-packages/autoray/autoray.py:81: RuntimeWarning: divide by zero encountered in log
  return func(*args, **kwargs)
/usr/local/lib/python3.10/dist-packages/pennylane/math/quantum.py:1021: RuntimeWarning: invalid value encountered in multiply
  rel = qml.math.sum(qml.math.where(rel == 0.0, 0.0, np.log(evs_sig) * rel), -1)

Thanks for your test. I think this is caused by numerical instability too, and I wonder if there is a way to avoid the issue. I feel like passing pure-state circuit output to the relative entropy is a common use case, and returning nan or a finite number is not the expected behavior. I think such numerical instability could be avoided by forcefully setting eigenvalues that are very close to zero to exactly zero (see the sketch below), but that is not a beautiful solution :upside_down_face:
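To make the idea concrete, here is a rough sketch of the clipping I have in mind, written outside PennyLane with a hand-picked tolerance (so it is only an illustration of the idea, not a proposed patch for _compute_relative_entropy):

import numpy as np

def clipped_relative_entropy(rho, sigma, tol=1e-7):
    # Eigendecompose both states (eigh returns eigenvalues in ascending order)
    p, u = np.linalg.eigh(rho)
    q, v = np.linalg.eigh(sigma)

    # Force eigenvalues that are zero up to numerical noise to exactly zero
    p = np.where(np.abs(p) < tol, 0.0, p)
    q = np.where(np.abs(q) < tol, 0.0, q)

    # Overlap weights |<u_i|v_j>|^2 between the two eigenbases
    w = np.abs(u.conj().T @ v) ** 2

    # Tr[rho log rho], treating 0 * log 0 as 0
    s_rho = np.sum(np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0))

    # If rho has weight on the kernel of sigma, the relative entropy diverges
    if np.sum(w[:, q == 0] * p[:, None]) > tol:
        return np.inf

    # Tr[rho log sigma] restricted to the support of sigma
    s_cross = np.sum(w[:, q > 0] * p[:, None] * np.log(q[q > 0])[None, :])
    return s_rho - s_cross

# The matrices from your example above
dm1 = np.array([[0.3, 0], [0, 0.7]])
dm4 = np.array([[0.63374941, 0.47476908+0.081886605j], [0.47476908-0.081886605j, 0.36625059]])

# Expected to give inf instead of nan, assuming dm4's stray eigenvalue falls below tol
print(clipped_relative_entropy(dm1, dm4))
# Finite, since dm1 has full support
print(clipped_relative_entropy(dm4, dm1))

But choosing tol is exactly the ugly part: too small and the spurious eigenvalues survive, too large and genuinely small populations get thrown away.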

@Bear2s unfortunately there doesn't seem to be a beautiful solution here. The issue with rounding things is that it may not actually fix the problem, and it can indeed mess things up for others in different situations. For now a good enhancement will be to add a note in the docs explaining that this strange behaviour can arise. If you have ideas for another solution in the future, let us know! You can also open an enhancement suggestion on GitHub if you prefer.

Thanks for making us aware of this, and I hope this post can also help others who have similar issues.

Thanks for your reply. I will share my thoughts if I come up with some ideas about the issue :nerd_face:
