Hi.

In the tutorial Graph Similarity it is stated “Each set of samples is generated by encoding the graph into a GBS device, and collecting photon click events.” How do I do that for default sets of graphs?

Thank you,

Best

Hi @_risto,

Thanks for the question.

Could you explain a bit more what you mean by “default sets of graphs”?

There are some pre-generated examples applicable to similarity for the Mutag dataset included within Strawberry Fields. Is this what you mean by “default” (i.e., the built-in ones), or do you have something else in mind?

Depending on what you’re after, it may be as simple as just using feature_vector_orbits, which works for an arbitrary input graph, or to first generate samples using sample and then convert those samples into a feature vector with feature_vector_orbits_sampling.

Thank you for your answer. Will first try what you suggested and will get back to you

I want to use the quantum computer you provide on the cloud to calculate similarities between graphs of my own choice (not MUTAG, but any graphs I choose).

1.) What is the difference between the two methods you mentioned?

2.) Which one should I use if I want to run the algorithm on the actual quantum device you provide, and can I use this method on my own graphs?

3.) Do I use https://strawberryfields.ai/photonics/demos/tutorial_X8_demos.html only after I have obtained feature vectors from https://strawberryfields.ai/photonics/apps/run_tutorial_similarity.html, or how do I combine those two tutorials?

Hey @_risto!

1.) What is the difference between the two methods you mentioned?

`feature_vector_orbits` calculates the orbit probabilities using a brute-force classical simulation of GBS, while `feature_vector_orbits_sampling` infers the probabilities from input samples (which can be generated on a simulator or on hardware). Check out the demo for more details.
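To make the distinction concrete, here is a plain-Python sketch (not the Strawberry Fields implementation itself, just an illustration of the idea) of what the sampling-based method computes: each sample is mapped to its orbit by sorting its nonzero photon counts, and the feature vector is the relative frequency of each orbit among the samples:

```python
from collections import Counter

def sample_to_orbit(sample):
    """Map a photon-count sample to its orbit: the nonzero counts
    sorted in descending order, e.g. [0, 1, 0, 2] -> (2, 1)."""
    return tuple(sorted((c for c in sample if c > 0), reverse=True))

def orbit_feature_vector(samples, orbits):
    """Estimate each orbit's probability as its relative frequency
    among the samples."""
    counts = Counter(sample_to_orbit(s) for s in samples)
    return [counts[tuple(o)] / len(samples) for o in orbits]

samples = [[1, 1, 0, 0], [2, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
print(orbit_feature_vector(samples, [[1, 1], [2]]))  # [0.5, 0.25]
```

The brute-force method instead computes these orbit probabilities exactly from the graph, without any samples.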

2.) Which one should I use if I want to run the algorithm on the actual quantum device you provide, and can I use this method on my own graphs?

3.) Do I use https://strawberryfields.ai/photonics/demos/tutorial_X8_demos.html only after I have obtained feature vectors from https://strawberryfields.ai/photonics/apps/run_tutorial_similarity.html, or how do I combine those two tutorials?

`feature_vector_orbits_sampling` should be the approach you choose for hardware compatibility. In this approach you should first generate your samples using GBS, and then pass them to `feature_vector_orbits_sampling`. Hence, you should follow the code in the X8 tutorial to generate samples, and then pass the result through `feature_vector_orbits_sampling`.

However, while the above approach can work in principle for arbitrary graphs of your choice, the main restriction comes from the current hardware. As you can see here, the range of graphs supported on the present X8 chips is quite limited, i.e., bipartite graphs with a special type of spectrum. If your own graphs satisfy these criteria, then you should be able to use the hardware. Otherwise, it may be best to use the simulator for now.


Hi @Tom_Bromley

What do you mean by spectrum? The values of B that it can take?

Just to double check if I understand right, we can achieve graph classification via 2 different methods of how we acquire feature vectors.

Hi @_risto,

What do you mean by spectrum?

The spectrum of a graph is given by the eigenvalues of its adjacency matrix. For a graph to be compatible with the X8 device, we require:

- The graph must be bipartite.
- The eigenvalues of its adjacency matrix must lie in the set {0, ±d} for some real value d (equivalently, its singular values are all either 0 or d).

You can check out two example graphs here.
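As an illustration, you can inspect a candidate graph's spectrum directly with NumPy. The complete bipartite graph K_{2,2} below is a minimal sketch; note that for any bipartite graph the nonzero eigenvalues come in ± pairs:

```python
import numpy as np

# Adjacency matrix of the complete bipartite graph K_{2,2},
# written in block form [[0, B], [B^T, 0]].
B = np.ones((2, 2))
Z = np.zeros((2, 2))
A = np.block([[Z, B], [B.T, Z]])

eigvals = np.linalg.eigvalsh(A)                  # ascending order
svals = np.linalg.svd(A, compute_uv=False)       # descending order
print(np.round(eigvals, 6))  # [-2.  0.  0.  2.]
print(np.round(svals, 6))    # [2. 2. 0. 0.]
```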

Just to double check if I understand right, we can achieve graph classification via 2 different methods of how we acquire feature vectors.

Right, you can calculate the same feature vector using two different methods. One method (`feature_vector_orbits_sampling`) is for when you have pre-generated samples (e.g., after running on hardware), while the other (`feature_vector_orbits`) is for when you want to brute-force the calculation using a simulator.

Hi @Tom_Bromley

Thank you for the clarification. I tried to run parts of https://strawberryfields.readthedocs.io/en/stable/_modules/strawberryfields/apps/similarity.html#feature_vector_orbits but get `SyntaxError: unexpected EOF while parsing`. Is this the right code to look at?

Hi @_risto,

If you’re getting `SyntaxError: unexpected EOF while parsing`, it usually means the Python snippet is incomplete. A typical cause is a copy/paste error where some text was missed (e.g., the `return` statement of a function was dropped).
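For illustration, compiling a truncated snippet reproduces this class of error (the exact message varies between Python versions; older interpreters say “unexpected EOF while parsing”, newer ones report the unclosed bracket):

```python
# A copy/paste that lost its ending: the expression is never closed.
snippet = "def f():\n    return (1 +"

try:
    compile(snippet, "<pasted>", "exec")
except SyntaxError as err:
    print("SyntaxError:", err.msg)
```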

Would you be able to provide a self-contained, minimal code example that reproduces the error? Thanks!

Does that include only complete bipartite graphs (as mentioned in https://strawberryfields.ai/photonics/demos/tutorial_X8.html#embedding-bipartite-graphs)?

Is there any way I can generate those types of graphs from other types of data (either graphs or images)?

And if using brute force to calculate feature vectors via `feature_vector_orbits`, can any type of graph be used in this case?

Hi @_risto!

Does that include only complete bipartite graphs (as mentioned in https://strawberryfields.ai/photonics/demos/tutorial_X8.html#embedding-bipartite-graphs)?

Complete bipartite graphs satisfy these criteria but are not the only instances. You can see in Fig. 8 of this paper that a graph with just pairwise connections also works. Nevertheless, the constraints are quite strong and there isn’t a lot of flexibility in terms of graph connectivity. You do have some flexibility in the edge weights, as can be seen from the adjacency matrices on page 21 of the paper. These can be generated using Eqs. (15) and (16) by picking a unitary U and squeezing r compatible with the device.

Is there any way I can generate those types of graphs from other types of data (either graphs or images)?

Given the discussion above, it’ll be a challenge to encode more general graphs into the present device in a way that will produce informative output data.

And if using brute force to calculate feature vectors via `feature_vector_orbits`, can any type of graph be used in this case?

Yes, in this case any type of graph can be used.