Mushahid asked me to make this post since he couldn't interact with the discussion forum on his computer. His question is as follows:

Hello, I am trying to understand how gradients are calculated for different gates when an interface is specified. When I use PyTorch as the interface and when I don't specify an interface, I get different results, despite having a compute_matrix defined with qml.math, grad_method = "A", parameter_frequencies = [(1,)], and an adjoint method. I read somewhere in the PennyLane documentation that I would need a grad_method and parameter_frequencies. Is there any reason why I might get different results when I specify PyTorch as the interface compared to when I don't specify an interface at all?
Here is my code:

I think I know the answer to your problem, but I just want to make sure I can replicate the behaviour you're seeing, to be sure there aren't any bugs. Can you include everything I need to run the above example? I think I'm just missing some of your imports (I'm not sure where stack_last is coming from, for example).

I'm not sure if this was intentional or not, but comparing/equating the results of .backward() and qml.grad() isn't apples to apples: backward() doesn't return the gradient, it populates the .grad field of the variable you want to differentiate. You can verify this by printing phi.grad after you call backward():

With that sorted, now we can compare what torch.Tensor.grad and qml.grad output. PyTorch is using backprop, whereas qml.grad will call upon the “best” differentiation method, where the hierarchy of “best” methods is:

backprop

parameter-shift

finite-diff

In your case, “best” is “parameter-shift”:

print(circuit1.diff_method)
print(circuit1.best_method_str(dev, circuit1.interface))
'''
best
parameter-shift
'''

My best guess at any discrepancy in the numeric values of the two gradients you're calculating is simply numerical precision. Let me know if this helps!

Also, when I calculate the gradients I get tensor([2.9802e-08]) with Torch and [0.] without Torch. Based on these results, it seems like it's simply numerical precision.

Is there any specific way we have to import or use classes from another file when writing unit tests? When I import GPI2 while writing and running a unit test, I get the following error: TypeError: Gradient only defined for scalar-output functions. Output had shape: (1,). This does not occur when I define the class inside the unit test function. What could be causing this?