Then the gradient-based method will not work.
If I set the parameter phi_nonlinear to 0, the optimization stops when loss=0.25. Why does this happen?
Without the Kerr operations, the circuit is purely Gaussian, and therefore unable to fully produce the non-Gaussian target state! This is probably why the cost function cannot be minimized below 0.25.
As @josh mentioned, the Kerr operations are required for the circuit not to be purely Gaussian. There needs to be some kind of non-linear component that makes the transformation from the input states to the target states possible. Setting phi_nonlinear to 0 removes this non-linearity, and thus the network cannot train.
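To see why a zero parameter removes the non-linearity: in the truncated Fock basis the Kerr gate is diagonal, K(κ) = exp(iκ n̂²), so κ = 0 collapses it to the identity and the layer contributes nothing non-Gaussian. A minimal NumPy sketch (the names `kerr_unitary`, `kappa`, and the cutoff of 6 are just illustrative choices, not from the thread):

```python
import numpy as np

cutoff = 6  # Fock-space truncation dimension (illustrative)
n = np.arange(cutoff)  # photon-number eigenvalues 0..cutoff-1

def kerr_unitary(kappa):
    # Kerr gate in the truncated Fock basis: diag(exp(i * kappa * n^2))
    return np.diag(np.exp(1j * kappa * n**2))

# With kappa = 0 the gate reduces to the identity, so the circuit
# layer is purely Gaussian and can never reach a non-Gaussian target.
assert np.allclose(kerr_unitary(0.0), np.eye(cutoff))

# Any nonzero kappa gives a genuinely non-trivial (non-Gaussian) phase profile.
assert not np.allclose(kerr_unitary(0.1), np.eye(cutoff))
```

This matches the behaviour you observed: with phi_nonlinear = 0 every Kerr gate acts as the identity, so the trainable circuit is restricted to Gaussian transformations regardless of the other parameters.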
I'm not actually sure why it stops working when the Kerr operations are removed completely. It seems like Nlopt isn't able to optimize the circuit at all, and it simply gets stuck somehow.