Hello, I am experimenting with the Fock Backend in StrawberryFields. I am looking to compare the results from a very simple circuit with analytical Fock state probabilities that I have calculated by hand. This is the code snippet:

```
import strawberryfields as sf
from strawberryfields.ops import *
cutoff = 2 ### Increase this
eng = sf.Engine('fock', backend_options={'cutoff_dim': cutoff})
n_modes = 1
circ = sf.Program(n_modes)
with circ.context as q:
    Dgate(0.5) | q[0]
state = eng.run(circ).state
print(state.all_fock_probs())
```

I have a few questions about this. Here I start with a low Fock state cutoff of 2. My assumption is that I begin with a truncated Hilbert space, to which I then apply operations. However, the probabilities I get back are not normalised. Furthermore, when I increase the Fock state cutoff (to, say, 8), the probabilities of the 0 and 1 Fock states do not change, and the total probability approaches 1.
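For reference, D(0.5) applied to the vacuum produces a coherent state, so the analytic probabilities I calculated by hand are just Poissonian, P(n) = e^{-|α|²} |α|^{2n} / n!. A quick NumPy check (independent of Strawberry Fields, using the same α = 0.5) reproduces the behaviour I describe:

```python
import numpy as np
from math import factorial

alpha = 0.5  # displacement amplitude, matching Dgate(0.5)

# D(alpha)|0> is a coherent state: P(n) = exp(-|alpha|^2) |alpha|^(2n) / n!
def fock_probs(cutoff):
    return np.array([np.exp(-abs(alpha)**2) * abs(alpha)**(2 * n) / factorial(n)
                     for n in range(cutoff)])

print(fock_probs(2))        # P(0), P(1): the same values at any cutoff
print(fock_probs(2).sum())  # < 1: the weight above the cutoff is simply missing
print(fock_probs(8).sum())  # approaches 1 as the cutoff grows
```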

My question is: how is this calculation done? My assumption was that the expansion of the Dgate in terms of creation and annihilation operators would be truncated at something comparable to the Fock state cutoff. Since I would then be acting on a truncated Hilbert space, I would expect normalised probabilities, or at least probabilities that change as I increase the Fock cutoff. It would seem instead that the full expansion of the Dgate (taking the Taylor series of the exponential in the documentation, sf.ops.Dgate — Strawberry Fields 0.23.0 documentation) to a very high order is calculated, so that corrections from higher orders in the expansion affect the probabilities of the lower Fock states?

If the exponential is expanded to a very high order in a and a^\dagger, what effect does the Fock state cutoff have on the simulation?

Also, is it possible to return the coefficients of the Fock states, for example a state vector?

Hopefully this makes sense, and perhaps I am missing something.

Thank you for your help!

Hi @kevinkawchak, thank you for your response; however, I do not think this answers my question. In the example you have given, the cutoff_dim does not change. The original question is about how the calculation of the Fock probabilities is carried out, even in a very simple case away from a machine-learning setting. It would seem that the full expansion (or at least some high-order truncation) of each gate into creation and annihilation operators (a^\dagger and a) is performed, and then only a subset of the marginal Fock probabilities is returned. Could you please elaborate on this?

Hey @simon.williams! I was able to verify what you’re seeing. Just checking in with a colleague here to give you a good answer. Will get back to you soon!

Hi @simon.williams ,

Great question! The cutoff truncates the Fock space. With a low cutoff dimension your state representation is incomplete, so the probabilities will not sum to one: you are leaving out the weight that lives above the cutoff. As you increase the cutoff you include more of that weight, which is why the total probability approaches one.
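To make this concrete, here is a small sketch (plain NumPy rather than the simulator, using the analytic Poisson number statistics of a displaced vacuum with the same α = 0.5 as your snippet) showing the captured probability as a function of the cutoff:

```python
import numpy as np
from math import factorial

alpha = 0.5  # same displacement amplitude as in your snippet

for cutoff in (2, 4, 8):
    # Total probability captured by a Fock space truncated at `cutoff`
    total = sum(np.exp(-abs(alpha)**2) * abs(alpha)**(2 * n) / factorial(n)
                for n in range(cutoff))
    print(cutoff, total)  # grows toward 1 as the cutoff increases
```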

This demo which compares the bosonic backend and the Fock backend shows the effects of having a cutoff that is too low.

The graphs in MrMustard are also great for seeing this. Try reducing and increasing the cutoff, as shown in section 8, and you will see how you start losing information.

Let me know if this answers your question!

To reinforce @CatalinaAlbornoz’s answer, here’s what another colleague wrote to me:

*The calculation of operators in the Fock basis in strawberryfields is done recursively, using methods from thewalrus.fock_gradients (see the strawberryfields source code here). The recursive technique is introduced in this paper.*

*Let’s take a step back. One way to compute an infinite-dimensional operator like the displacement gate up to a cutoff is, as the question implies, to write the raising and lowering operators of the quantum harmonic oscillator up to a cutoff, then take a matrix exponential of a sum of those operators according to the definition of the displacement operator. An issue with this method, however, is that taking the matrix exponential of truncated infinite-dimensional operators is not the same as taking the matrix exponential of the infinite-dimensional operators and then truncating. This leads to incorrect entries in the final result, especially close to the cutoff.*
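To illustrate that paragraph with a toy sketch of my own (plain NumPy/SciPy with illustrative values α = 0.5 and cutoff 4, not the strawberryfields/thewalrus code): exponentiating the truncated generator gives a perfectly unitary matrix, but its first column disagrees with the exact Fock-basis matrix elements ⟨n|D(α)|0⟩ = e^{−|α|²/2} αⁿ/√(n!):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

alpha, N = 0.5, 4  # illustrative displacement amplitude and Fock cutoff

# Ladder operators truncated to an N-dimensional Fock space
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
ad = a.conj().T                             # creation operator

# Truncate first, then exponentiate: unitary by construction, but inexact entries
D_trunc = expm(alpha * ad - np.conj(alpha) * a)

# Exact infinite-dimensional matrix elements <n|D(alpha)|0>, truncated afterwards
exact_col0 = np.array([np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(factorial(n))
                       for n in range(N)])

print(np.linalg.norm(D_trunc[:, 0]))               # exactly 1: unitary
print(np.linalg.norm(exact_col0))                  # < 1: weight lost above cutoff
print(np.max(np.abs(D_trunc[:, 0] - exact_col0)))  # nonzero: the entries disagree
```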

*By contrast, the method in this paper, and the one implemented in strawberryfields/thewalrus, shows that one can express the Fock-basis matrix elements of the displacement operator (and other Gaussian operators) **exactly** using recursive methods. This means that you do not even need to take a matrix exponential: you can compute each element of the operator from previous elements. One consequence of this is that the operators up to the cutoff are technically not unitary (since they are only truly unitary at infinite cutoff), and density matrices may not be fully normalised if the cutoff is not high enough to capture all the probabilities; however, the elements of the matrix are exact. This means one can still compute probabilities or elements of a density matrix exactly without needing to go to a larger cutoff if you do not need to.*
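As a rough illustration of what such a recursion can look like (my own toy NumPy version for real α, not the actual thewalrus code): from the identity a D(α) = D(α)(a + α) one gets √(m+1) ⟨m+1|D|n⟩ = α ⟨m|D|n⟩ + √n ⟨m|D|n−1⟩, seeded by the closed form for the first row ⟨0|D(α)|n⟩ = e^{−|α|²/2} (−α)ⁿ/√(n!):

```python
import numpy as np
from math import factorial

alpha, N = 0.5, 4  # illustrative real displacement amplitude and Fock cutoff

D = np.zeros((N, N))
# Seed: first row <0|D(alpha)|n> = exp(-|alpha|^2/2) (-alpha)^n / sqrt(n!)
for n in range(N):
    D[0, n] = np.exp(-abs(alpha)**2 / 2) * (-alpha)**n / np.sqrt(factorial(n))

# Recursion from a D(alpha) = D(alpha) (a + alpha):
#   sqrt(m+1) <m+1|D|n> = alpha <m|D|n> + sqrt(n) <m|D|n-1>
for m in range(N - 1):
    for n in range(N):
        prev = np.sqrt(n) * D[m, n - 1] if n > 0 else 0.0
        D[m + 1, n] = (alpha * D[m, n] + prev) / np.sqrt(m + 1)

# First column matches the closed form <m|D|0> = exp(-|alpha|^2/2) alpha^m / sqrt(m!)
closed = np.array([np.exp(-abs(alpha)**2 / 2) * alpha**m / np.sqrt(factorial(m))
                   for m in range(N)])
print(np.max(np.abs(D[:, 0] - closed)))  # ~0: the entries are exact, no expm needed
print(np.linalg.norm(D[:, 0]))           # below 1: the truncated operator is not unitary
```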

*One drawback of this method is that for very large cutoffs, numerical instability can be introduced due to accumulated error in the recursive method. This has been discussed in this paper, and some alternatives are proposed.*

I hope this helps!

Hi @isaacdevlugt and @CatalinaAlbornoz

Thank you both for your detailed responses. I have had some time to go over the resources you provided and I understand the approach taken now.

Thank you for your help, and I will be in touch on the forum if I have any further questions.

Cheers,

Simon


Awesome! Glad we could help