Hi @feynmannlu,
I can see two things happening here. First, you’re not returning the optimized parameters, but the result after just one optimization step. Second, you’re telling your optimizer to train both the observable and the parameters. If we only want to train the parameters, you need to wrap the cost function in a new function whose only argument is the parameters, and pass that to the optimizer instead.
I hope this helps you!
This may be a bit difficult if you aren’t familiar with lambda functions, so I’ll give you a hint.
params = opt.step(lambda p: cost_function(observable, p), params)
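To make the full picture concrete, here is a minimal sketch of the whole loop. The function name optimize, the default stepsize, and the iteration count are illustrative, and cost_function(observable, params) is assumed to match the signature from your exercise:

```python
import pennylane as qml

def optimize(cost_function, observable, params, stepsize=0.1, max_iterations=100):
    # Hypothetical helper; stepsize and max_iterations are placeholder values.
    opt = qml.GradientDescentOptimizer(stepsize=stepsize)
    for _ in range(max_iterations):
        # The lambda fixes `observable`, so the optimizer differentiates with
        # respect to `params` alone and updates only those.
        params = opt.step(lambda p: cost_function(observable, p), params)
    # Return the parameters after the full loop, not after a single step.
    return params
```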
I have noticed that I got the error “The cost function should be close to -1.” in two cases: when I made a mistake, and when I used non-optimal parameters. If anyone has the same issue, first try qml.GradientDescentOptimizer with the learning_rate and max_iterations values from 1.3.a to verify that everything is correct, and play around with different optimizers and parameters afterwards.
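For example, a sketch of that check, assuming the cost_function, observable, and initial params from the exercise are already defined (the placeholder values below stand in for the actual learning_rate and max_iterations from 1.3.a, which I’m not reproducing here):

```python
import pennylane as qml

learning_rate = 0.1    # placeholder: use the value from 1.3.a
max_iterations = 100   # placeholder: use the value from 1.3.a

opt = qml.GradientDescentOptimizer(stepsize=learning_rate)
for _ in range(max_iterations):
    params = opt.step(lambda p: cost_function(observable, p), params)

# Sanity check: with correct code and enough iterations, this should
# print a value close to -1.
print(cost_function(observable, params))
```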