Using PyTorch Gradients

You might need to elaborate here, but if you simply mean computing the gradient of a hybrid classical-quantum cost function, PennyLane supports both Autograd (the default) and TensorFlow.

Ah, apologies for not being clear. I mean the case where I create a cost function that itself depends on a gradient, something like:

import torch

def cost(circuit_out, circuit_in):
    # torch.autograd.grad returns a tuple of gradients, one per input
    (grad,) = torch.autograd.grad(outputs=circuit_out, inputs=circuit_in)
    return torch.sum(grad - circuit_in)
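
For reference, differentiating *through* such a gradient in plain PyTorch requires passing `create_graph=True` to `torch.autograd.grad`, so that the gradient itself remains part of the autograd graph. A runnable toy sketch (a quadratic stands in for the quantum circuit; the names here are illustrative, not PennyLane API):

```python
import torch

def cost(circuit_out, circuit_in):
    # create_graph=True keeps the returned gradient differentiable,
    # so the cost built from it can itself be backpropagated
    (grad,) = torch.autograd.grad(
        outputs=circuit_out, inputs=circuit_in, create_graph=True
    )
    return torch.sum(grad - circuit_in)

x = torch.tensor([0.5, -1.0], requires_grad=True)
out = torch.sum(x ** 2)   # toy stand-in for a circuit output
loss = cost(out, x)       # grad of out w.r.t. x is 2x, so loss = sum(x)
loss.backward()           # x.grad now holds d(loss)/dx
```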

You would then treat this cost "normally", as you have in your examples. I ask as a follow-up to a comment made here:

Yes, that constraint comes from Autograd, which is the default interface in PennyLane

I just wanted to be sure I interpreted this correctly, and that this is still the case.

Can you try the following code snippet, and let me know if it works for you?

Yes, that works now! Thank you very much!