Optimiser returning NaN

Hi,

I’m trying my hand at the QHack VQE problems, specifically vqe_200.
But the optimisers I’ve tried all keep returning NaN, due to some division by zero.
The cost function and initial parameters evaluate to valid numbers, but the optimiser’s output does not.
Does anyone have any guesses what’s going on?

Code snippet:

for n in range(5):
    params = np.random.uniform(low=-100, high=100, size=num_qubits)
    print("params_before:", params)
    print("cost_fn(params):", cost_fn(params))
    params = opt.step(cost_fn, params)
    print("params_after:", params)
    print("cost_fn(params):", cost_fn(params))
    print("-----------")

Console output:

>> python vqe_200_template.py < 1.in

params_before: [ 35.72144511 -18.02651368]
cost_fn(params): 5.586708427318174
<mydir>\.conda\envs\pennylane\lib\site-packages\autograd\numpy\numpy_vjps.py:85: RuntimeWarning: divide by zero encountered in double_scalars
  defvjp(anp.arcsin, lambda ans, x : lambda g: g / anp.sqrt(1 - x**2))
params_after: [35.72316766         nan]
cost_fn(params): nan
-----------
params_before: [-66.81364233   2.97083332]
cost_fn(params): -0.031097652212978718
params_after: [-66.81370653          nan]
cost_fn(params): nan
-----------
params_before: [-15.49557062 -99.76920866]
cost_fn(params): 10.650998162365017
params_after: [-15.49674676          nan]
cost_fn(params): nan
-----------
params_before: [74.82232481 86.71923288]
cost_fn(params): 2.5956264476938404
<mydir>\.conda\envs\pennylane\lib\site-packages\autograd\numpy\numpy_vjps.py:85: RuntimeWarning: invalid value encountered in double_scalars
  defvjp(anp.arcsin, lambda ans, x : lambda g: g / anp.sqrt(1 - x**2))
params_after: [74.82323742         nan]
cost_fn(params): nan
-----------
params_before: [ 60.48168882 -21.76188975]
cost_fn(params): 3.74874088006935
params_after: [60.48246261         nan]
cost_fn(params): nan
-----------
nan

Hi @BillyLjm,

Based on the warning messages in your output, the division by zero is coming from the g / sqrt(1 - x**2) term, i.e. the derivative of arcsin(x). You should verify that the argument x is not reaching 1 (or -1), as that would lead to a division by zero in the gradient, and hence NaNs.
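
Here is a minimal sketch in plain autograd (not your competition code) showing how the gradient blows up at the boundary even though the forward pass is fine:

import autograd.numpy as anp
from autograd import grad

# d/dx arcsin(x) = 1 / sqrt(1 - x**2), which autograd evaluates
# as g / anp.sqrt(1 - x**2) in the backward pass.
darcsin = grad(anp.arcsin)

print(anp.arcsin(1.0))       # pi/2, the forward pass is fine
print(darcsin(0.5))          # ~1.1547, finite
print(darcsin(1.0))          # "divide by zero" warning, returns inf
print(darcsin(1.0 + 1e-12))  # "invalid value" warning (sqrt of a negative), returns nan

Once an inf or nan like this enters the chain rule, the whole gradient, and hence the updated parameters, turn to nan, which matches both warnings in your output.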

I looked through my math functions again.
Turns out it came from doing np.arcsin(1) in the ansatz,
which became g / sqrt(1 - 1**2), i.e. a division by zero, once autograd took the gradient.
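
In case anyone hits the same thing: printing the gradient before the step (a quick sketch, using the same cost_fn and params as in my snippet above) made it obvious which entry was going bad:

import pennylane as qml

# same cost_fn and params as in the snippet above
print(qml.grad(cost_fn)(params))
# -> something like [0.0123, nan]; the nan entry traced back to the arcsin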

Thanks @nathan for pointing me in the right direction! :smiley:

Yes, the derivative of arcsin(x) is 1/sqrt(1 - x**2), so the error likely occurred because autograd was taking the gradient of arcsin at x = 1.
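
If your ansatz really needs arcsin near the boundary, one common workaround (a sketch, not part of the template) is to clip the argument slightly inside (-1, 1) so the derivative stays finite:

import autograd.numpy as anp

EPS = 1e-9  # hypothetical margin, pick whatever suits your precision

def safe_arcsin(x):
    # Clamping x into the open interval (-1, 1) keeps the
    # derivative 1 / sqrt(1 - x**2) away from the division by zero.
    return anp.arcsin(anp.clip(x, -1.0 + EPS, 1.0 - EPS))

The gradient of the clipped function is simply zero at the boundary instead of infinite, which is usually the better behaviour for an optimiser step.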