Using a graph's covariance matrix to sample from a GBS

Hey! I am trying to sample from a GBS using a graph's covariance matrix. However, I keep getting an error saying that it is not a classical covariance matrix. Here is my code:

import networkx as nx
import numpy as np
import thewalrus

# Adjacency matrix of a random Erdos-Renyi graph on 10 nodes
A = nx.adjacency_matrix(nx.gnp_random_graph(10, 0.5)).todense()
_, s, _ = np.linalg.svd(A, full_matrices=True)

# Rescale so the doubled adjacency matrix has singular values below 1
c = 1 / (np.max(s) + 1e-8)
Ab = c * np.block([[A, np.zeros(A.shape)], [np.zeros(A.shape), A]])

X = np.block(
    [
        [np.zeros(A.shape), np.identity(A.shape[0])],
        [np.identity(A.shape[0]), np.zeros(A.shape)],
    ]
)

I = np.identity(Ab.shape[0])
Q = np.linalg.inv(I - X @ Ab)
Q = Q.round(decimals=3)

cov = Q - I / 2
print(cov)
thewalrus.samples.hafnian_sample_classical_state(cov, samples=1)

I have tried all of the sampling algorithms and none of them work. Could you look into this?

Hey @MUSHKAN_SUREKA! Digging into the source code for thewalrus.samples.hafnian_sample_classical_state(), it calls thewalrus.quantum.is_classical_cov(), which returns False because of a call to thewalrus.quantum.is_valid_cov(). Looking into the various reasons why thewalrus.quantum.is_valid_cov() could return False, it appears that your matrix cov violates the uncertainty relation!

I copy-pasted the source code from thewalrus.quantum.is_valid_cov() to make sure that this is indeed the problem:

import networkx as nx
import thewalrus
import numpy as np

from thewalrus.quantum import is_classical_cov, is_valid_cov
from thewalrus.symplectic import sympmat

A = nx.adjacency_matrix(nx.gnp_random_graph(10, 0.5)).todense()
_, s, _ = np.linalg.svd(A, full_matrices=True)

c = 1 / (np.max(s) + 1e-8)
Ab = c * np.block([[A, np.zeros(A.shape)], [np.zeros(A.shape), A]])

X = np.block(
    [
        [np.zeros(A.shape), np.identity(A.shape[0])],
        [np.identity(A.shape[0]), np.zeros(A.shape)],
    ]
)

I = np.identity(Ab.shape[0])
Q = np.linalg.inv(I - X @ Ab)
Q = Q.round(decimals=3)

cov = Q - I / 2

print("Is symmetric:", np.allclose(cov, cov.T))
print("Is positive semi-definite:", np.diag(cov) > 0)

print("Is valid cov:", is_valid_cov(cov))
print(cov.shape)
print(np.allclose(cov, np.transpose(cov)))
print(cov.shape[0] % 2 == 0)

atol = 1e-8
hbar = 1
nmodes = cov.shape[0] // 2
vals = np.linalg.eigvalsh(cov + 0.5j * hbar * sympmat(nmodes))
vals[np.abs(vals) < atol] = 0.0
print(vals)
print(np.all(vals >= 0))

#print(cov)
thewalrus.samples.hafnian_sample_classical_state(cov, samples=1)

'''
Is symmetric: True
Diagonal entries are positive: [ True  True  True  True  True  True  True  True  True  True  True  True
  True  True  True  True  True  True  True  True]
Is valid cov: False
(20, 20)
True
True
[-1.78420612e-03 -8.48582083e-04 -2.43155261e-04 -3.08753084e-05
  7.21663655e-05  7.33224994e-05  3.90466990e-04  8.26118343e-04
  1.28677162e-03  1.68127135e-03  9.99807844e-01  1.02512792e+00
  1.05560478e+00  1.06577701e+00  1.10169522e+00  1.29953272e+00
  1.34864552e+00  1.52175701e+00  1.77470483e+00  4.94152944e+08]
False
'''

The source code that I’m referring to can be found here: thewalrus.quantum.gaussian_checks — The Walrus 0.21.0-dev documentation
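For reference, the specific check in the snippet above (the eigvalsh call with sympmat) is the usual bona fide / uncertainty condition on a Gaussian covariance matrix,

\sigma + \frac{i\hbar}{2}\Omega \succeq 0,

where \Omega is the symplectic form returned by thewalrus.symplectic.sympmat. The small negative eigenvalues in the printout show that this inequality is violated for your cov.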

Are you able to see where your code might be violating the uncertainty relation? Hope this helps!

Hey Isaac!

I actually coded the covariance matrix following Eq. (2) of the Xanadu paper (https://arxiv.org/pdf/1905.12646.pdf). I do not think there is an error in the code, since I have used it for other purposes. Could you please look into it?
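(Writing it out, what my code computes is

\tilde{A} = c\,(A \oplus A), \qquad X = \begin{pmatrix} 0 & \mathbb{I}_n \\ \mathbb{I}_n & 0 \end{pmatrix}, \qquad Q = \left(\mathbb{I}_{2n} - X\tilde{A}\right)^{-1}, \qquad \mathrm{cov} = Q - \mathbb{I}_{2n}/2,

with c = 1/(s_{\max} + 10^{-8}) chosen so that the singular values of \tilde{A} stay below 1.)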

Ah! I see the issue. Two things:

  1. It looks like the convention in Maria’s paper is to subtract \mathbb{I}/2 from the covariance matrix Q. In thewalrus, this is baked-in! So, we just need to set cov = Q rather than cov = Q - 0.5 * I.

  2. Covariance matrices need to be scaled by hbar. Typically, hbar = 1 is the convention. But, in our photonics libraries — mrmustard and thewalrus — the default value of hbar is set to 2 (usually hbar appears with a factor of 1/2, so hbar / 2 = 1 :grin:). So you also need to scale cov: cov = hbar*Q. Having said that, if you’re like me and prefer working with hbar = 1, then just make sure you specify that whenever you call a thewalrus function.

Your code works now :raised_hands: :raised_hands:

import networkx as nx
import thewalrus
import numpy as np

hbar = 1

A = nx.adjacency_matrix(nx.gnp_random_graph(10, 0.5)).todense()
_, s, _ = np.linalg.svd(A, full_matrices=True)

c = 1 / (np.max(s) + 1e-8)
Ab = c * np.block([[A, np.zeros(A.shape)], [np.zeros(A.shape), A]])

X = np.block(
    [
        [np.zeros(A.shape), np.identity(A.shape[0])],
        [np.identity(A.shape[0]), np.zeros(A.shape)],
    ]
)

I = np.identity(Ab.shape[0])
Q = np.linalg.inv(I - np.dot(X, Ab))

cov = hbar * Q
thewalrus.samples.hafnian_sample_classical_state(cov, samples=1, hbar=hbar)
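As a quick sanity check before sampling (a minimal sketch, assuming thewalrus.quantum.is_classical_cov accepts the same hbar keyword as the other checks in thewalrus.quantum), you can confirm that the rescaled matrix now passes the classicality test:

from thewalrus.quantum import is_classical_cov

# cov = hbar * Q (no I/2 subtraction) should now be a valid classical covariance matrix
print(is_classical_cov(cov, hbar=hbar))  # expected: True

And if you would rather stick with the library default of hbar = 2, the equivalent call is thewalrus.samples.hafnian_sample_classical_state(2 * Q, samples=1) with no hbar argument.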

I also removed Q.round(decimals=3) :slight_smile:. Not sure why that’s needed :thinking:.

Hope this helps!

I’m also wondering if this solution is what’s needed in one of your other posts!

Thanks a ton for this! Haha, I used the rounding because when I printed Q the values were too long to read.


Glad this helped! Does this help with the other post you made (linked above)? :thinking:

Sadly no, but I am glad this is working haha :confused:
