Hi there,

I am observing a systematic increase in my loss function (square loss) when the parameter shift is big (1.0 or 0.1) but not for smaller parameter shifts. Is there any physical/numerical reason for this?

Here is a minimal working example, where the only parameterized gate is a displacement gate and the target is the approximation of a single function value f(0.5)=1.0

```
import pennylane as qml
from pennylane import numpy as np
dev = qml.device("strawberryfields.fock", wires=1, cutoff_dim=10, shots=100)
@qml.qnode(dev)
def quantum_neural_net(parameter, x=None):
# Encode input x into quantum state
qml.Displacement(x, 0.0, wires=0)
qml.Displacement(parameter, 0.0, wires=0)
return qml.expval(qml.X(0))
lr = 0.001
input = 0.5
goal = 1.0
# starting value of displacement parameter
parameter = -0.1
costs = []
steps = 500
for it in range(steps):
# feed forward with the parameter value and calculate loss
output = quantum_neural_net(parameter, x=input)
loss=(goal - output) ** 2
costs.append(loss)
# feedforward at shifts of the parameter and calculate partial derivative
output_plus = output = quantum_neural_net(parameter+s, x=input)
output_minus = output = quantum_neural_net(parameter-s, x=input)
output_gradient = 1./(2.*s)*(output_plus-output_minus)
# calculate gradient of loss with respect to parameter using chain rule
gradient = -2*(goal-output)*output_gradient
# update parameter with simple gradient descent
parameter -= gradient*lr
```

This code leads to the following figures for different values of s (loss as a function of steps):

Does anyone have an idea of why a big s, leads to this error? Any help would be greatly appreciated!

Kind regards,

Martin Knudsen