Discrepancy between state.fock_prob() and the probability calculated using MeasureFock() when simulating Scattershot Boson Sampling

Hello! I am trying to simulate a 4-mode SBS using the following code:

# Imports needed to run this snippet
import strawberryfields as sf
from strawberryfields.ops import S2gate, LossChannel, Interferometer, MeasureFock

# Setting up the experiment - 4 mode SBS

r = 0.3 # squeezing parameter for S2 gate

scattershot_bs = sf.Program(8)
with scattershot_bs.context as q:
    S2gate(r) | (q[0], q[4])
    S2gate(r) | (q[1], q[5])
    S2gate(r) | (q[2], q[6])
    S2gate(r) | (q[3], q[7])

    # MeasureFock()     |   q[0]
    # MeasureFock()     |   q[1]
    # MeasureFock()     |   q[2]
    # MeasureFock()     |   q[3]

    # introducing loss in the channels
    LossChannel(0.08) |   q[4]
    LossChannel(0.82) |   q[5]
    LossChannel(0.55) |   q[6]
    LossChannel(0.24) |   q[7]

    # U is the 4x4 interferometer unitary (constructed in the reply below)
    Interferometer(U) | (q[4], q[5], q[6], q[7])

    # introducing loss in the channels
    LossChannel(0.46) |   q[4]
    LossChannel(0.65) |   q[5]
    LossChannel(0.41) |   q[6]
    LossChannel(0.37) |   q[7]

    # MeasureFock()     |   q[4]
    # MeasureFock()     |   q[5]
    # MeasureFock()     |   q[6]
    # MeasureFock()     |   q[7]

# Initializing the engine
eng = sf.Engine(backend="gaussian")  

# Executing the program with the engine
# (uncomment the MeasureFock lines and pass shots=100000 to collect PNR samples)
results = eng.run(scattershot_bs)

Now if I want the probability of, let's say, [1,0,0,0,1,0,0,0], I use results.state.fock_prob([1,0,0,0,1,0,0,0]) and get a probability of 0.00013140826379994975.

Now if I want to calculate the same probability from PNR counts, I use MeasureFock() (as commented out in the code above). I generated 100,000 samples, and the counts for [1,0,0,0,1,0,0,0] were 111, which gives a probability of 0.00111.
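For reference, the counting itself is straightforward. Here is a minimal sketch of how I compute the estimate (with a small toy array standing in for the real results.samples from the run with MeasureFock() on all modes): the probability is just the fraction of sampled patterns matching the target.

```python
import numpy as np

# Toy data standing in for results.samples (shape: shots x modes)
samples = np.array([
    [1, 0, 0, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 1, 0, 0],
    [1, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
])

target = np.array([1, 0, 0, 0, 1, 0, 0, 0])
counts = np.sum(np.all(samples == target, axis=1))  # rows matching the target
prob_estimate = counts / samples.shape[0]
print(counts, prob_estimate)  # 2 0.5
```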

I cannot understand why there is this difference. Is there some internal normalization that state.fock_prob() is doing? It will be really helpful to get some insight.
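To show this is not just shot noise, here is a quick binomial sanity check using the numbers above (my own arithmetic, not part of the original run): with p ≈ 1.3e-4 and 100,000 shots, I would expect about 13 counts, so 111 counts is dozens of standard deviations away.

```python
import numpy as np

p_exact = 0.00013140826379994975  # from state.fock_prob()
shots = 100_000
observed = 111                    # counts for [1,0,0,0,1,0,0,0]

expected = shots * p_exact                        # expected counts, ~13.1
sigma = np.sqrt(shots * p_exact * (1 - p_exact))  # binomial std, ~3.6
z = (observed - expected) / sigma                 # far outside normal fluctuation
print(expected, sigma, z)
```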

Thank you!

Hi @Cheeranjiv_Pandey ,

I used the unitary from our demo on Boson sampling and the permanent, and I get the same probability with both methods.

My guess is that the cutoff dimension you chose is too low for your unitary, so you may be losing information when using MeasureFock. This is just a guess, though, since I can't replicate your issue. Note that increasing the cutoff dimension will require more computational resources (time and memory) for the simulation.

It may also be something in how you're calculating the probability with the second method. I used results.state.fock_prob([1,0,0,0,1,0,0,0]).

Below is the code to manually generate the unitary in case you want to test it. I hope it helps!

import numpy as np
from numpy.linalg import multi_dot
from scipy.linalg import block_diag

# Phase applied to each of the 4 modes
Uphase = np.diag([np.exp(0.5719*1j), np.exp(-1.9782*1j), np.exp(2.0603*1j), np.exp(0.0644*1j)])

# (theta, phi) parameters for the 8 beamsplitters
BSargs = [
    (0.7804, 0.8578),
    (0.06406, 0.5165),
    (0.473, 0.1176),
    (0.563, 0.1517),
    (0.1323, 0.9946),
    (0.311, 0.3231),
    (0.4348, 0.0798),
    (0.4368, 0.6157)
]

# Transmission and reflection amplitudes for each beamsplitter
t_r_amplitudes = [(np.cos(q), np.exp(p*1j)*np.sin(q)) for q, p in BSargs]

# 2x2 unitary for each beamsplitter
BSunitaries = [np.array([[t, -np.conj(r)], [r, t]]) for t, r in t_r_amplitudes]

# Arrange the beamsplitters into 5 layers acting on the 4 modes
UBS1 = block_diag(*BSunitaries[0:2])
UBS2 = block_diag([[1]], BSunitaries[2], [[1]])
UBS3 = block_diag(*BSunitaries[3:5])
UBS4 = block_diag([[1]], BSunitaries[5], [[1]])
UBS5 = block_diag(*BSunitaries[6:8])

# Full 4x4 interferometer unitary
U = multi_dot([UBS5, UBS4, UBS3, UBS2, UBS1, Uphase])
print(np.round(U, 4))
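As a quick sanity check on this construction (my own addition, not from the demo): each 2x2 beamsplitter block built this way is automatically unitary, since |t|^2 + |r|^2 = 1 when t = cos(theta) and r = exp(i*phi)*sin(theta), and the full U is then unitary as a product of unitaries.

```python
import numpy as np

# Each block [[t, -conj(r)], [r, t]] with t = cos(theta), r = exp(1j*phi)*sin(theta)
# satisfies |t|^2 + |r|^2 = 1 and is therefore unitary.
theta, phi = 0.7804, 0.8578  # first (theta, phi) pair from BSargs above
t, r = np.cos(theta), np.exp(1j*phi)*np.sin(theta)
B = np.array([[t, -np.conj(r)], [r, t]])

print(np.allclose(B.conj().T @ B, np.eye(2)))  # True
```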