Using Takagi decomposition

I really wanted to delve deeper into why embedding a graph in GBS requires such a high mean photon number. So I calculated the squeezing parameters for embedding a graph in a GBS device and found something quite strange: the squeezing parameters look reasonable except for the first mode, which has a very high value. Here is the code.

import numpy as np
import networkx as nx
import strawberryfields as sf

A = nx.adjacency_matrix(gnp_random_graph(8, 0.5)).todense()   # adjacency matrix of a random 8-node graph
_, s, _ = np.linalg.svd(A, full_matrices=True)                # singular values of A

c = 1 / (np.max(s) + 1e-8)                       # rescaling constant: 1 / (largest singular value)

p, _ = sf.decompositions.takagi(A, rounding=2)   # Takagi values of A, used to estimate the squeezing
qw = [i * c for i in p]                          # rescaled Takagi values c * lambda_i
qwe = np.arctanh(qw)                             # squeezing parameters r_i = arctanh(c * lambda_i)
zx = np.sinh(qwe)                                # mean photon number per mode, n_i = sinh^2(r_i)
zx = np.square(zx)
zx = np.around(zx, 3)
print(qwe)   # print squeezing parameters
print(zx)    # print mean photon number per mode

Is there any explanation for this? It is just the one mode giving such a high value :confused:

Again, apologies for my continual queries. As an undergrad, I am trying to work things out myself, but I just want to trust you experts when it comes to these questions ❤️

Hey @MUSHKAN_SUREKA! Could you share the gnp_random_graph function you're using? I want to try replicating your result first :smiley:

Hello Isaac!

You can import it from the networkx package.
!pip install networkx
from networkx import gnp_random_graph

Hello Isaac,

I figured it out, thanks!

Ah okay nice! What was the explanation? :open_mouth:

Hey,

So basically the normalisation constant (c in my code) is 1/(the maximum of all singular values). Hence there is always one rescaled value that equals 1 (the maximum one). Now we know that tanh^-1(x) = 0.5*ln((1+x)/(1-x)), so if x is close to one, tanh^-1(x) becomes very large (it tends to infinity). Hence the mean photon number of that particular mode, and its squeezing parameter (in dB, it came to 90+), become unreasonable. However, the paper gives us the liberty to choose our own c, which can be anywhere between 0 and 1/(the maximum singular value). If you reduce c to around 0.2 or a bit less than that, it would work with our current state-of-the-art systems!
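For anyone following along, here is a minimal sketch of that comparison, assuming the same 8-node gnp_random_graph setup as above. For a symmetric adjacency matrix the Takagi values coincide with the singular values, so plain SVD is enough here, and the mean_photons helper and the c_small choice are just illustrative, not anything prescribed by the paper:

import numpy as np
import networkx as nx
from networkx import gnp_random_graph

A = nx.adjacency_matrix(gnp_random_graph(8, 0.5)).todense()
_, s, _ = np.linalg.svd(A, full_matrices=True)   # for symmetric A these equal the Takagi values

def mean_photons(c, s):
    r = np.arctanh(c * s)          # squeezing parameters r_i = arctanh(c * s_i)
    return np.sinh(r) ** 2         # mean photon number per mode, n_i = sinh^2(r_i)

c_max = 1 / (np.max(s) + 1e-8)        # largest rescaled value is ~1, so arctanh blows up for that mode
c_small = min(0.2, 0.9 / np.max(s))   # a smaller c, kept strictly inside (0, 1/s_max)

print(np.around(mean_photons(c_max, s), 3))     # one enormous entry, the rest modest
print(np.around(mean_photons(c_small, s), 3))   # every mode has a reasonable photon number

The min() guard is only there so that c stays below 1/s_max for whatever random graph happens to be drawn.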

Thank you, though, for all the help you guys do ❤️

Nice! Glad we could help :slight_smile: