One challenge with the definition of `cost1` is that its gradient needs to support automatic differentiation, and the function mixes classical processing (using `sklearn`) with quantum computation (a QNode differentiated with `qml.jacobian`). As `sklearn` does not support automatic differentiation, this might not be feasible, unfortunately. For further points, see this relevant discussion on using `sklearn` in a cost function. As suggested there, using a supported PennyLane interface (Torch/TF), or using layers from the `qml.qnn` package together with its built-in optimizers and functions, could be a feasible approach.
Printing `cost1`: one way to do this is to evaluate the cost function right before/after the update of the weights.
Hope this gives an idea for moving forward!
For future reference, could you help out with a couple of small adjustments when sharing code?

- Adjusting the formatting of the code (e.g., enclosing strings with `"` and `'` instead of `“`, and double-checking that the imports are correct and the code snippet can be executed independently)
- Placing code into highlighted code blocks so that the code renders and the indentation is correct
Overall this helps with quickly recreating the case that is shared and also helps other users easily get up to speed with parts of the solution.
Thank you so much!