I’m running a machine learning algorithm in which, as part of the loss function calculation, I call `qml.jacobian(circuit)(xnorm, params)`. It works fine when running on `qml.device("default.qubit", wires=nqubits)`, but if I change it to `qml.device("lightning.qubit", wires=nqubits)` I receive NaN values for some elements of the output.

I use the `@qml.qnode` decorator for the circuit:

```
@qml.qnode(dev)
def circuit(coll, params):
    qml.StatePrep(coll, wires=range(nqubits))
    dictparams = dict(zip(qc.parameters, params))
    ansatz(params=dictparams, wires=range(nqubits))
    prob = qml.probs(wires=range(nqubits))
    return prob
```

and the loss function (up to the point of the error) is as follows, where `pnp` is `from pennylane import numpy as pnp`:

```
def loss_fn(params, circuit=circuit):
    prob = circuit(xnorm, params)
    u = pnp.sqrt(prob)
    du2dx_jac, duda = qml.jacobian(circuit)(xnorm, params)
    ...
```
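As context for where NaNs can enter a calculation like `pnp.sqrt(prob)`: the derivative of `sqrt(p)` is `1 / (2 * sqrt(p))`, which is infinite at `p == 0`, and an infinite factor multiplied by a zero upstream gradient produces NaN. This is only a plain-NumPy sketch of that mechanism, not the actual PennyLane backward pass:

```python
import numpy as np

# u = sqrt(p) has du/dp = 1 / (2 * sqrt(p)).
# If a probability is exactly 0, this factor is infinite:
p = np.array([0.0, 0.25, 1.0])
with np.errstate(divide="ignore", invalid="ignore"):
    grad_factor = 1.0 / (2.0 * np.sqrt(p))  # first entry is inf
    # An upstream gradient of 0 multiplied by inf yields nan:
    backprop = 0.0 * grad_factor            # first entry is nan
```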

At runtime I receive these warnings, which may help:

```
/envs/cuda-quantum/lib/python3.10/site-packages/autograd/numpy/numpy_vjps.py:85: RuntimeWarning: divide by zero encountered in divide
defvjp(anp.arcsin, lambda ans, x : lambda g: g / anp.sqrt(1 - x**2))
/envs/cuda-quantum/lib/python3.10/site-packages/pennylane/numpy/tensor.py:155: RuntimeWarning: invalid value encountered in multiply
res = super().__array_ufunc__(ufunc, method, *args, **kwargs)
/envs/cuda-quantum/lib/python3.10/site-packages/pennylane/numpy/tensor.py:155: RuntimeWarning: invalid value encountered in at
res = super().__array_ufunc__(ufunc, method, *args, **kwargs)
```
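The first warning points at autograd's derivative of `arcsin`, `d/dx arcsin(x) = 1 / sqrt(1 - x**2)`, which diverges at `x = ±1`. One plausible (but unconfirmed) reading is that some intermediate value hits exactly ±1 on `lightning.qubit`, so the VJP divides by zero. A minimal NumPy illustration of that divergence:

```python
import numpy as np

# Derivative of arcsin: 1 / sqrt(1 - x**2), infinite at |x| == 1.
x = np.array([0.0, 0.5, 1.0])
with np.errstate(divide="ignore"):
    darcsin = 1.0 / np.sqrt(1.0 - x**2)  # last entry is inf
```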

and the `du2dx_jac` I get is:

```
[[ nan nan -0.00908817 ... -0.02307965 -0.00254096
-0.00768597]
[ nan nan -0.00607349 ... -0.0350922 -0.0102256
-0.00871634]
[ nan nan -0.00342172 ... 0.02416978 -0.02432349
-0.01001875]
...
[ nan nan 0.01431378 ... 0.00331552 0.05455701
-0.02419741]
[ nan nan -0.00539168 ... 0.04763316 0.09883949
0.03659902]
[ nan nan -0.00086362 ... -0.10641249 -0.05992528
0.16751464]]
```

This behavior doesn’t happen when using the `default.qubit` device.

How can I overcome this? I want to use the Lightning devices so that I can easily run on a GPU later.