I am using the X8 to compare the Fock probabilities \vert {}_{0145}\langle 0,1,0,1\vert U_{\text{BS}}({\pi\over 4},\varphi)_{01}U_{\text{BS}}({\pi\over 4},\varphi)_{45}\vert \text{TMSS}_{r=1}\rangle_{04}\vert \text{TMSS}_{r=1}\rangle_{15}\vert^{2} and \vert {}_{0145}\langle 0,1,0,1\vert \vert \text{TMSS}_{r=1}\rangle_{04}\vert \text{TMSS}_{r=1}\rangle_{15}\vert^{2} on the interval \varphi \in [-{\pi\over 2},{\pi\over 2}].

To compare these probabilities, I form the respective empirical estimates Q(\varphi) and P and calculate \vert 1-{Q(\varphi)\over P}\vert. The analytical version of this function and the numerical simulation (MeasureFock with the TensorFlow backend and cutoff_dim=10) are shown below. (I can only attach one image per post, so take my word that the photon-counting simulation matches the analytical result {1\over 2}-{1\over 2}\cos 2\varphi.)
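For reference, the analytical curve quoted above can be written as a few lines of NumPy (the function name here is my own, not from my actual script):

```python
import numpy as np

def analytic_error(phi):
    """Analytical |1 - Q(phi)/P| = 1/2 - 1/2 cos(2 phi)."""
    return 0.5 - 0.5 * np.cos(2 * phi)

# the function vanishes at phi = 0 and reaches 1 at phi = +/- pi/2
phis = np.linspace(-np.pi / 2, np.pi / 2, 101)
curve = analytic_error(phis)
```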

On the X8, however, the measured values of this function do not match the expectation (especially near \pm {\pi\over 2} and 0), see below (I've simply reflected the [-{\pi\over 2},0] data about 0).

In both cases, I estimate the probability of \vert 0,1,0,1\rangle_{0145} as the fraction of shots whose photon counts agree with 0,1,0,1 on modes 0, 1, 4, 5 (regardless of the counts on the other modes).
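Concretely, the estimation step looks like this (`samples` has shape `(shots, 8)`, as in `result.samples`; the helper name is mine):

```python
import numpy as np

def pattern_fraction(samples, modes=(0, 1, 4, 5), pattern=(0, 1, 0, 1)):
    # Fraction of shots whose photon counts on `modes` equal `pattern`,
    # i.e. the empirical estimate of the marginal Fock probability.
    s = np.asarray(samples)
    return np.mean(np.all(s[:, list(modes)] == np.array(pattern), axis=1))
```

The same helper is applied to both runs, e.g. `Q_est = pattern_fraction(result.samples)`.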

Code for Q(\varphi):

```python
import numpy as np
import strawberryfields as sf

def run_Q(phi, n_samples):
    prog = sf.Program(8)
    with prog.context as q:
        sf.ops.S2gate(1.0) | (q[0], q[4])
        sf.ops.S2gate(1.0) | (q[1], q[5])
        sf.ops.BSgate(np.pi / 4, phi) | (q[0], q[1])
        sf.ops.BSgate(np.pi / 4, phi) | (q[4], q[5])
        sf.ops.MeasureFock() | q

    eng = sf.RemoteEngine("X8")
    return eng.run(prog, shots=n_samples, disable_port_permutation=True)
```

Code for P:

```python
import strawberryfields as sf

def run_P(n_samples):
    prog = sf.Program(8)
    with prog.context as q:
        sf.ops.S2gate(1.0) | (q[0], q[4])
        sf.ops.S2gate(1.0) | (q[1], q[5])
        sf.ops.MeasureFock() | q

    eng = sf.RemoteEngine("X8")
    return eng.run(prog, shots=n_samples, disable_port_permutation=True)
```

The fact that the function is about half of what it should be near \pm {\pi\over 2} makes me suspect I've coded something wrong. Is there a way to get the X8 calculation to match the analytical result more closely?