About Graph similarity paper

Hello everyone,

The two questions I would like to ask about your paper, “Measuring the similarity of graphs with a Gaussian boson sampler”, are as follows:

  1. The probability of measuring a given photon-counting event (page 2, equation 3 of the paper) comes out greater than 1 when I do the calculation by hand in many cases. For example, with a 9-node graph and one photon detected in the first and second modes of the GBS, the Hafnian is 1 and n! is also 1, while det(Q) comes out less than 1, and dividing by something less than 1 gives a result greater than 1. However, when I used the Xanadu code for graph similarity, it did give a probability of less than 1. I am sure I am missing something; I hope you can clear that up.
  2. I do not understand how to construct a similarity measure or kernel that computes the similarity between two graphs G and G’. Is it a dot product between the two feature vectors? I hope you can elaborate on the use of linear and radial basis function kernels, since the Xanadu code documentation does not show how the similarity was measured in Python. Could you also elaborate on the term “similarity”? I am unsure what it means in this context.

For your convenience, I have attached a summary of the question in this message.
In the document, I mistakenly wrote n! inside the square root; it should be outside. I also wrote n_100 in the introduction section where it should be n_m. Apologies for these errors; the examples, however, use the right formula.
Looking forward to your reply.

n! is not under the square root.
Also, note that the A tilde matrix is rescaled so that it has max singular value of 1 (or slightly below 1).

Let us know if that fixes your issues!

Thank you for your help, but if the A tilde matrix has a max singular value of 1 (or slightly below), then det(Q) will always come out less than 1, since A tilde has entries of 1, and X and I also have only 0’s and 1’s as entries. Hence the probability comes out greater than 1.

Also, I understand that n! is outside the square root; that was a typo on my side. In any case, I am using a sample with one photon detected at each mode, so n! is effectively 1.

Now I’m confused too. The hafnian function from The Walrus evaluates to zero on that graph :face_with_monocle:

Hey!
So you are getting a hafnian of 0 because you might be taking the adjacency matrix of the whole graph. According to the paper, “Equation (3) does not depend on the Hafnian of the adjacency matrix A, but on a matrix A_n. A_n contains n_j duplicates of the jth row and column in A. If n_j = 0, the jth row/column in A does not appear in A_n.”
For your sample, this matrix has a hafnian of 1.
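As a minimal sketch (using The Walrus and a made-up 3-node graph, not the one from your document), this is how one can build A_n for a sample n and check its hafnian:

import numpy as np
from thewalrus import hafnian

# hypothetical example: a 3-node graph and the sample n = (1, 1, 0),
# i.e. one photon detected in each of the first two modes
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
n = [1, 1, 0]

# build A_n by repeating the j-th row/column n_j times
# (modes with n_j = 0 are dropped entirely)
keep = [j for j, nj in enumerate(n) for _ in range(nj)]
A_n = A[np.ix_(keep, keep)]

print(hafnian(A_n))  # 1.0 for this sample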


Thanks for this clarification @MUSHKAN_SUREKA!

Has your question been resolved? Or is it still standing?

Hello,

Unfortunately, it is still standing :confused:

Oh, I’ve asked another colleague to take a look at your question. It’s not an easy question though!

Haha yeah! It has been bugging me for a while and I really hope it gets resolved. :slight_smile:

Any idea when I can hear back from your colleague?

Hi @MUSHKAN_SUREKA! Happy to help here. I’ll get back to you shortly.


Let me take a shot at your question:

Probabilities of photon counting events

In the examples listed in the document, it seems that you’re missing the rescaling constant when calculating \tilde A. From the paper you cited:

the rescaling constant c is chosen so that 0 < c < 1/s_{max}, and s_{max} is the maximum singular value of A.

More precisely, in the code below the rescaling constant is chosen to be c = 1/(s_{max}+10^{-8}).

The following code calculates the prefactor 1/\sqrt{\det Q} accordingly for both examples in the attached file:

import numpy as np

# Example 1
# A = np.array([[0,1,1],
#               [1,0,1],
#               [1,1,0]])

# Example 2
A = np.array([[0,0,1,0,1],
              [0,0,0,1,1],
              [1,0,0,1,1],
              [0,1,1,0,0],
              [1,1,1,0,0]])


# calculate max singular value and rescaling constant
_, s, _ = np.linalg.svd(A, full_matrices=True)
c = 1 / ( np.max(s) + 1e-8 )
print(f"Max singular value: {np.max(s)}")
print(f"Rescaling constant value: {c:.2E}\n")

# calculate Q and its determinant
Ab = c * np.block([
    [A, np.zeros(A.shape)],
    [np.zeros(A.shape), A]
])

X = np.block([
    [np.zeros(A.shape), np.identity(A.shape[0])],
    [np.identity(A.shape[0]), np.zeros(A.shape)]
])

I = np.identity(Ab.shape[0])
Q = np.linalg.inv(I - X @ Ab)
detQ = np.linalg.det(Q)

print(f"det(Q) = {detQ:.2E}")
print(f"1/sqrt(det(Q)) = {1/np.sqrt(detQ):.2E}")

For example 1

Max singular value: 2.0
Rescaling constant value: 5.00E-01

det(Q) = 1.78E+08
1/sqrt(det(Q)) = 7.50E-05

and example 2

Max singular value: 2.481194304092015
Rescaling constant value: 4.03E-01

det(Q) = 4.94E+08
1/sqrt(det(Q)) = 4.50E-05

As seen in the results, the prefactors are below one. In particular, for example 2 the probability of observing the sample [1,1,1,1,0] is p = 4.5\times 10^{-5}, since the Hafnian equals one, as does \boldsymbol n!.

Similarity measures

There are many different notions of similarity and they depend on which property of the graph is exploited to compare one graph to another.

The idea of the paper is to use the properties of the samples from the GBS machine to assign to each graph a vector in \mathbb{R}^N. These vectors are called feature vectors because they encode the properties of each graph in a familiar space: one in which we can define a measure of “closeness” or “similarity”.

The idea is the following: in \mathbb{R}^N we say two vectors \boldsymbol f and \boldsymbol f' are dissimilar (orthogonal) if the dot product
\langle \boldsymbol f, \boldsymbol f'\rangle between them is zero, and similar (pointing in a similar direction) if the dot product is large. If the proposed encoding is capable of generating a vector \boldsymbol f in \mathbb{R}^N for each graph G then, by association, we can say that two graphs are similar if the vectors that represent them are close to each other. Since the dot product is a good measure of similarity, we can use it to compare different graphs when they are represented as vectors:

\kappa(G, G') = \langle \boldsymbol f, \boldsymbol f'\rangle.

The above means that we can express the similarity of two graphs as the inner product of two vectors — thanks to the feature map.

However, the kernel defined above is linear and might not be able to learn non-linear decision boundaries in classification tasks. In that case, one can define a different kernel, such as the radial basis function kernel \kappa_{rbf} mentioned in the paper.
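As a rough illustration (not the exact code used in the paper, and with made-up feature vectors f and fp standing in for the vectors built from GBS samples of the two graphs), both kernels can be evaluated with plain NumPy:

import numpy as np

# made-up feature vectors; in practice f and fp would be built from GBS
# samples of graphs G and G' (e.g. orbit or event probabilities)
f  = np.array([0.45, 0.30, 0.15])
fp = np.array([0.40, 0.35, 0.10])

# linear kernel: similarity as the dot product of the feature vectors
kappa_linear = np.dot(f, fp)

# radial basis function (RBF) kernel with an assumed bandwidth sigma
sigma = 1.0
kappa_rbf = np.exp(-np.linalg.norm(f - fp) ** 2 / (2 * sigma ** 2))

print(kappa_linear, kappa_rbf)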


Hope this helps to solve your question! :nerd_face:


Ah okay! Got it! This was really helpful. Thank you!


Hello! Sorry to disturb you again, but the paper gives a formula for the covariance matrix. I am trying to calculate the mean photon number of the output state of the GBS using the trace of the covariance matrix, but I am getting a huge number: 248119457.09197953

This is classically intractable and no GBS can accommodate that many photons. Could you please look into this?

-Thanks!

Hi @MUSHKAN_SUREKA,
Could you please share the code that you’re using? I would like to see if I can replicate your problem.

Thanks!

Sure! Here you go:

import numpy as np

A1 = np.array([[0,0,1,0,1],
               [0,0,0,1,1],
               [1,0,0,1,1],
               [0,1,1,0,0],
               [1,1,1,0,0]])

# calculate max singular value and rescaling constant
_, s, _ = np.linalg.svd(A1, full_matrices=True)
c = 1 / ( np.max(s) + 1e-8 )
print(f"Max singular value: {np.max(s)}")
print(f"Rescaling constant value: {c:.2E}\n")

# calculate Q and its determinant
Ab = c * np.block([
    [A1, np.zeros(A1.shape)],
    [np.zeros(A1.shape), A1]
])

X = np.block([
    [np.zeros(A1.shape), np.identity(A1.shape[0])],
    [np.identity(A1.shape[0]), np.zeros(A1.shape)]
])

I = np.identity(Ab.shape[0])
Q = np.linalg.inv(I - X @ Ab)

V = Q - (I / 2)  # covariance matrix

detQ = np.linalg.det(Q)

n = np.trace(V) / 2 - 0.5  # attempted mean photon number from the trace of V
# print(V)
# print(f"det(Q) = {detQ:.2E}")
# print(f"1/sqrt(det(Q)) = {1/np.sqrt(detQ):.2E}")
print(n)

Thanks @MUSHKAN_SUREKA. I’m guessing A and A1 are the same.

I can replicate your result but I don’t know if the math is right.

As Nathan mentioned on Slack it’s probably best to experiment with different values of the mean photon number.

Yep, A and A1 are the same, apologies for the confusion.
I see. Could you please ask one of your colleagues about the relation between the output covariance matrix and the mean photon number? I just want to be sure about the formula.

Thanks!

Hi @MUSHKAN_SUREKA, sure let me look around for an answer.

Hi @MUSHKAN_SUREKA, what exactly is the formula that you’re referring to?