Hello! I am trying to calculate entropy from classical shadows, and found that the outputs of two methods, qml.math.vn_entropy and qml.ClassicalShadow.entropy, are different.
Here's the code:
import pennylane as qml
import numpy as np
wires = 6
dev = qml.device("default.qubit", wires=range(wires), shots=1000)
# Just a simple circuit
@qml.qnode(dev)
def qnode(x):
    for i in range(wires):
        qml.RY(x[i], wires=i)
    for i in range(wires - 1):
        qml.CNOT((i, i + 1))
    return qml.classical_shadow(wires=range(wires))
x = np.linspace(0.5, 1.5, num=wires)
bitstrings, recipes = qnode(x)
shadow = qml.ClassicalShadow(bitstrings, recipes)
shadow_state = np.mean(shadow.global_snapshots(), axis=0)
# two methods to calculate entropy
a = qml.math.vn_entropy(shadow_state, [0, 1, 2], base=2)
b = shadow.entropy([0, 1, 2], alpha=1, base=2)
print(a) # 0.9204690621147354
print(b) # 1.083005830537962
Which method gives the correct entropy, and what causes the difference?
Thanks in advance.
Increasing the size of the subsystem makes the gap even larger, for example going from wires=[0,1,2] to wires=[0,1,2,3,4]. It seems that shadow.entropy() removes eigenvalues smaller than atol while qml.math.vn_entropy() doesn't, which may explain the difference.
The underlying issue seems to be that the reconstructed state is no longer positive semi-definite. From playing around, it seems that the "negativeness" shrinks the more samples we measure:
import pennylane as qml
import numpy as np
wires = 3
dev = qml.device("default.qubit", wires=range(wires), shots=10)
# Just a simple circuit
@qml.qnode(dev)
def qnode(x):
    for i in range(wires):
        qml.RY(x[i], wires=i)
    for i in range(wires - 1):
        qml.CNOT((i, i + 1))
    return qml.classical_shadow(wires=range(wires))
x = np.linspace(0.5, 1.5, num=wires)
bitstrings, recipes = qnode(x)
shadow = qml.ClassicalShadow(bitstrings, recipes)
shadow_state = np.mean(shadow.global_snapshots(), axis=0)
evs = qml.math.eigvalsh(shadow_state)
>>> evs # for 10 shots
array([-2.10457123, -1.17354678, -0.63163156, -0.10126323, 0.25602132,
0.59934091, 1.33793241, 2.81771816])
>>> evs # for 10000 shots
array([-0.05145093, -0.0271077 , -0.02016276, -0.01040207, 0.01799211,
0.03711536, 0.05711912, 0.99689689])
Now, to be completely honest, I don't know if this is a general property of reconstructed states that only vanishes asymptotically, or if something else goes wrong here. The circuit and the way the state is reconstructed don't look particularly odd to me. Maybe you actually know more about this?
Now, in terms of what leads to the different results: ClassicalShadow.entropy removes the eigenvalues <= 0 of the density matrix and renormalizes the remaining ones so that sum(eigs) = tr(rho) = 1, whereas qml.math.vn_entropy doesn't.
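As a quick sanity check, you can redo that clipping and renormalization by hand on the spectrum of the reduced state (a rough sketch; the use of qml.math.reduce_dm and the variable names are mine, and small numerical differences from the atol handling are expected):

# Sketch: reproduce the treatment described above by hand,
# assuming `shadow_state` is the 6-wire estimate from the first post.
rho_sub = qml.math.reduce_dm(shadow_state, indices=[0, 1, 2])
evs = np.linalg.eigvalsh(rho_sub)
pos = evs[evs > 0]         # drop the non-positive eigenvalues
pos = pos / np.sum(pos)    # renormalize so the remaining spectrum sums to 1
print(-np.sum(pos * np.log2(pos)))  # should land close to shadow.entropy([0, 1, 2], alpha=1, base=2)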
Hi, I looked a bit deeper into this and I think it makes sense the way it is.
First of all, let's look at the exact density matrix of that circuit. It has only one non-zero eigenvalue (the state is pure):
import pennylane as qml
import numpy as np
wires = 3
dev = qml.device("default.qubit", wires=range(wires))
# Just a simple circuit
@qml.qnode(dev)
def qnode(x):
    for i in range(wires):
        qml.RY(x[i], wires=i)
    for i in range(wires - 1):
        qml.CNOT((i, i + 1))
    return qml.state()
x = np.linspace(0.5, 1.5, num=wires)
state = qnode(x)
rho = qml.math.dm_from_state_vector(state)
>>> qml.math.eigvalsh(rho)
array([-4.49907820e-17, -4.71269856e-18, -3.66158056e-19, 5.85556686e-19,
1.53476183e-18, 1.50454864e-17, 1.23882754e-16, 1.00000000e+00])
So now, when you try to approximate that state with classical shadows, the eigenvalues will fluctuate around 0, i.e. some of them will likely be negative.
Now what is the best way to treat that?
I don't think there is a 100% correct answer, but one somewhat logical way is to remove the non-positive eigenvalues and compute the entropy from what is left. The remaining spectrum will then no longer sum to 1, so we renormalize it before using it in the entropy calculation. This is all that ClassicalShadow.entropy is doing at the moment. If you know a better way to treat this case, please let me know; I am happy to hear suggestions!
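For what it's worth, one way to sanity-check either treatment is to compare both shadow estimates against the exact subsystem entropy of the original 6-wire circuit (a sketch that reuses the circuit from the first post on an analytic, shot-free device):

wires = 6
dev_exact = qml.device("default.qubit", wires=range(wires))

@qml.qnode(dev_exact)
def exact_qnode(x):
    for i in range(wires):
        qml.RY(x[i], wires=i)
    for i in range(wires - 1):
        qml.CNOT((i, i + 1))
    return qml.state()

x = np.linspace(0.5, 1.5, num=wires)
rho_exact = qml.math.dm_from_state_vector(exact_qnode(x))
print(qml.math.vn_entropy(rho_exact, [0, 1, 2], base=2))  # exact entropy of wires [0, 1, 2]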
In terms of inconsistencies between qml.math.vn_entropy and ClassicalShadow.entropy(): it makes sense that these two functions are not consistent here, since they treat different cases. Because it is apparently very common to get negative eigenvalues, ClassicalShadow.entropy() takes care of that automatically. For qml.math.vn_entropy() you can set the check_state=True keyword, and it will check the input state and tell you that the density matrix you provided is negative.
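For example (a sketch; per the description above, this check should complain that the matrix is not positive semi-definite):

# Sketch: let vn_entropy validate the shadow-reconstructed state itself.
qml.math.vn_entropy(shadow_state, [0, 1, 2], base=2, check_state=True)
# expected to raise, since shadow_state has negative eigenvalues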
I found a better way to do this: first project the approximate state from shadow tomography onto the closest valid density matrix (i.e. one with all non-negative eigenvalues). This is described here: [1106.5458] Maximum Likelihood, Minimum Effort
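Here is a minimal sketch of that projection, following my reading of the algorithm in arXiv:1106.5458 (sort the eigenvalues in descending order, zero out those that would remain negative, and spread the clipped weight uniformly over the rest). The function name is mine, and the input is assumed to be Hermitian with unit trace, which the shadow estimate is:

def closest_density_matrix(rho):
    # Frobenius-norm projection onto the set of density matrices.
    evals, evecs = np.linalg.eigh(rho)
    order = np.argsort(evals)[::-1]              # eigenvalues in descending order
    evals, evecs = evals[order], evecs[:, order]
    lam = np.zeros_like(evals)
    clipped = 0.0                                # weight removed so far
    for i in range(len(evals) - 1, -1, -1):
        if evals[i] + clipped / (i + 1) < 0:
            clipped += evals[i]                  # zero this eigenvalue, carry its weight
        else:
            lam[: i + 1] = evals[: i + 1] + clipped / (i + 1)
            break
    return (evecs * lam) @ evecs.conj().T

rho_proj = closest_density_matrix(shadow_state)
print(qml.math.vn_entropy(rho_proj, [0, 1, 2], base=2))

Since the projected matrix is a proper density matrix, it can be passed to qml.math.vn_entropy directly and the result compared against both estimates above.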