Ah, apologies for not being clear. I mean if I created a cost function that itself depends on a loss, something that could look like:
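Roughly along these lines (a made-up minimal example just to illustrate the idea, where the cost uses the gradient of a QNode):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

def cost(params):
    # the cost itself contains the gradient of the QNode
    grad = qml.grad(circuit, argnum=0)(params)
    return np.sum(grad ** 2)

params = np.array([0.1, 0.2], requires_grad=True)

# this is the part I am not sure is supported:
# differentiating the cost, which requires second derivatives of the QNode
qml.grad(cost)(params)
```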
Ah, I see!
Currently, differentiating loss functions that depend on the gradient is not supported in PennyLane. This is because second derivatives of QNodes are not yet supported; however, this is something we are working on adding: https://github.com/PennyLaneAI/pennylane/pull/961
There is one exception, however: if you use `diff_method="backprop"`, then higher derivatives and these types of loss functions will work (see the sketch after the list below). The following simulators support backpropagation:

- `default.qubit.tf` (must be used with `interface="tf"`)
- `default.qubit.autograd` (must be used with `interface="autograd"`)
- `strawberryfields.tf` (must be used with `interface="tf"`)
Note that in this case, `autograd` refers to the NumPy-based Autograd library that is available via `from pennylane import numpy as np`, not to PyTorch's autograd! Apologies for the confusion.
At the moment, there is no PyTorch-compatible simulator that supports backprop, since this requires complex number support which is not yet fully available in PyTorch.