Hi @quantopia!

As described in the first tutorial you sent:

> The function grad() itself returns a function, representing the derivative of the QNode with respect to the argument specified in argnum. In this case, the function circuit takes one argument (params), so we specify argnum=0. Because the argument has two elements, the returned gradient is two-dimensional. We can then evaluate this gradient function at any point in the parameter space.

Hence, to get the gradient, you need to implement the following:

```
import pennylane as qml
from pennylane import numpy as np

# qml.grad uses the autograd interface, so use the plain "default.qubit" device
dev1 = qml.device("default.qubit", wires=1)

@qml.qnode(dev1)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

dcircuit = qml.grad(circuit, argnum=0)
print(dcircuit(np.array([0.54, 0.12], requires_grad=True)))
```

As for calculating gradients using TensorFlow: the TensorFlow-interfacing QNode acts like any other TensorFlow function, so the standard method of computing gradients in eager mode (a tf.GradientTape) applies directly.

Therefore, you can implement the following:

```
import pennylane as qml
import tensorflow as tf

dev1 = qml.device("default.qubit.tf", wires=1)

@qml.qnode(dev1, interface="tf")
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

params = tf.Variable([0.54, 0.12], dtype=tf.float64)
with tf.GradientTape() as tape:
    # use the circuit to calculate the loss value
    loss = circuit(params)

params_grad = tape.gradient(loss, params)
print(params_grad)
```

Here the cost (loss) is defined directly as the output of the QNode, but any differentiable post-processing of the circuit output works just as well.

The same goes for the optimization step: both the PennyLane built-in and the TensorFlow interfaces provide their own optimizers and usage patterns. You can find more on the TensorFlow interface here.