# Simultaneous Detections in Gaussian Boson Sampling simulation

When I draw samples from the Gaussian Boson Sampling (GBS) backend, I notice two patterns in the resulting threshold detector clicks.

First, the distribution of simultaneous counts follows a decaying pattern, with a sharp spike at 2 simultaneous clicks.

Second, given an arbitrary symmetric adjacency matrix (even for a complete graph of 64 nodes), there is an abrupt drop-off in simultaneous detections.

Code

``````python
from strawberryfields.apps import sample
import numpy as np
import matplotlib.pyplot as plt

M = 64
A = np.ones((M, M))  # adjacency matrix of a complete graph (all-ones)
n_samples = 50
n_mean = M / 2

num_modes_detected = []
samples = sample.sample(A, n_mean, n_samples, threshold=True)
for s in samples:
    num_modes_detected.append(list(s).count(1))  # clicks per sample

plt.figure()
plt.hist(num_modes_detected, edgecolor='black', bins=M)
plt.show()
``````

Hi @sar49 ,

Unfortunately I’m not understanding it well. Can you please rephrase it? Is the question why there’s an abrupt drop off in simultaneous detections? Or is it something else?

My two questions are

1. Why is there only an immense spike at 2 simultaneous clicks?
2. Why is there an abrupt drop-off at around 30 simultaneous detections, when more than double that number of detectors are available to click?

Hi @sar49!

My colleague Eli has kindly helped with the answer below:

Sampling photons from a Gaussian state will generally produce a decaying distribution in the number of detection events: the states are energy-limited, so higher photon numbers are exponentially suppressed. If the energy is low enough, I would expect the leading contributions to be no clicks, then 2 clicks, because all the input states have even parity and there is no loss, so photons always come in pairs.
You're also not taking that many samples (50? 100?), so it's hard to comment on specific peaks or drop-offs; there is probably a lot of statistical noise.
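To illustrate the even-parity point, here is a small standard-library sketch (with an assumed squeezing parameter `r = 0.5`, purely for illustration) of the exact photon-number distribution of a single-mode squeezed vacuum. Odd photon numbers have zero probability, and the even terms decay quickly:

```python
from math import comb, tanh, cosh

r = 0.5       # assumed squeezing parameter, for illustration only
cutoff = 60   # truncation of the photon-number ladder

probs = []
for n in range(cutoff):
    if n % 2:                 # odd photon numbers never occur in squeezed vacuum
        probs.append(0.0)
    else:
        m = n // 2
        # P(2m) = C(2m, m) * tanh(r)^(2m) / (4^m * cosh(r))
        probs.append(comb(2 * m, m) * tanh(r) ** (2 * m) / (4 ** m * cosh(r)))

print(probs[:6])  # vacuum dominates, then pairs, with a fast decay
```

The decaying, pairs-only shape of this distribution is the single-squeezer behaviour that the all-ones adjacency matrix reproduces.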

I hope this helps!

I think that my main issue is that in GBS experiments, such as the one performed by USTC in 2021, the simultaneous detector count histogram shows a bell-curve pattern. Other simulations show this bell curve as well, but Strawberry Fields does not.

Ah I think the issue is in what you’re comparing. Maybe this demo can give you more insights into this topic. Please let me know if it helps!

Thank you for this demo, but I am not sure it clarified much. My issue is that after taking many samples, the distribution of simultaneous clicks is not a bell curve; instead it is a decaying function, whereas the paper shows a clear bell curve.

Do you know how I would be able to achieve this?

Hi @sar49 ,

Here’s an answer from my colleague Rachel.

The click distribution will depend on the parameters of the GBS. You are simulating a GBS with parameters defined by the A matrix being the all-ones matrix. This corresponds to a GBS with squeezing only in the first input mode. You can see this by adding the following to your code:

``````python
from strawberryfields.ops import GraphEmbed

gbs = GraphEmbed(A, 0.5)  # second argument: mean photon number per mode
print(gbs.sq)  # the squeezing values in each mode
``````

It seems reasonable that one squeezer spread over many modes will follow approximately the same distribution as a single squeezer. Experiments typically have (equal) squeezing in all input modes, which makes the distribution more interesting. If you want to sample from a state with known parameters, you can either derive the A matrix from the Gaussian state's covariance matrix, or, perhaps more easily, use TheWalrus to sample from the covariance matrix and vector of means directly with the torontonian_sample_state function.
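As a minimal sketch of the covariance-matrix route, here is how you might build the covariance matrix of M identically squeezed input modes and hand it to TheWalrus (M = 8, r = 0.6, and hbar = 2 are assumed toy values; the sampling call is guarded since the exact TheWalrus signature may vary by version):

```python
import numpy as np

M, r, hbar = 8, 0.6, 2  # assumed toy parameters, xxpp ordering, hbar = 2
# Covariance matrix of M identical single-mode squeezed vacua:
# x quadratures squeezed by e^(-2r), p quadratures anti-squeezed by e^(2r).
cov = (hbar / 2) * np.diag(np.concatenate([np.exp(-2 * r) * np.ones(M),
                                           np.exp(2 * r) * np.ones(M)]))

try:
    from thewalrus.samples import torontonian_sample_state
    samples = torontonian_sample_state(cov, 100)  # 100 threshold samples
    clicks = samples.sum(axis=1)                  # simultaneous clicks per sample
except ImportError:
    pass  # TheWalrus not installed; cov alone illustrates the construction
```

With equal squeezing in every mode, the click histogram built from `clicks` should look much closer to the bell-curve shape reported in the experiments.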

I hope this helps clarify why you get this difference!