Basic rotation with TensorFlow

Hi Xanadu team!

I have been following the basic tutorial and am struggling to plug it into TensorFlow. Could you please provide feedback on my code below?

import pennylane as qml
import numpy as np

dev1 = qml.device("default.qubit.tf", wires=1)

@qml.qnode(dev1, interface="tf")
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

def cost(params):
    return np.sqrt(np.abs(circuit(params) - 1))

params = [0.54, 0.12]
dcost = qml.grad(params)

Also, here is the link I was following:

https://pennylane.readthedocs.io/en/stable/introduction/interfaces.html

Hello @quantopia! Thank you for using the Discussion Forum.

Could you please provide the error that you encountered? It will be easier to debug and provide assistance this way.

P.S. Please make sure TensorFlow >= 2.0 is installed (pip install "tensorflow>=2.0"), if you haven't already :slight_smile:

Hi Togan, I did not see an error, but dcost is not what I expected. When I print dcost, it shows:

<pennylane._grad.grad at 0x7fa8c0b69710>

I’d like to calculate the gradient of the cost function, so shouldn’t this be a number?

P.S. Thanks, yes, I have TensorFlow installed.

Hi @quantopia!

As described in the basic tutorial you mentioned:

The function grad() itself returns a function, representing the derivative of the QNode with respect to the argument specified in argnum. In this case, the function circuit takes one argument (params), so we specify argnum=0. Because the argument has two elements, the returned gradient is two-dimensional. We can then evaluate this gradient function at any point in the parameter space.

Hence, to get the gradient, you need to implement the following:

dev1 = qml.device("default.qubit.tf", wires=1)

@qml.qnode(dev1)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

# qml.grad returns a function; evaluate it at a point in parameter space
dcircuit = qml.grad(circuit, argnum=0)
print(dcircuit([0.54, 0.12]))

As for calculating gradients using TensorFlow: a TensorFlow-interfaced QNode acts like any other TensorFlow function, so you can use the standard eager-mode approach, tf.GradientTape, to compute gradients.

Therefore, you can implement the following:

import tensorflow as tf

dev1 = qml.device("default.qubit.tf", wires=1)

@qml.qnode(dev1, interface="tf")
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

params = tf.Variable([0.54, 0.12])

with tf.GradientTape() as tape:
    # Use the circuit to calculate the loss value
    loss = circuit(params)

params_grad = tape.gradient(loss, params)

print(params_grad)

Here, the cost is defined directly as the output of the QNode (the circuit) itself.

The same goes for the optimization step: both PennyLane's built-in optimizers and TensorFlow's own optimizers can be used, each with its own workflow. You can find more on the TensorFlow interface here.
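
For example, a minimal sketch of an optimization loop with a TensorFlow optimizer could look like the following (assuming the circuit and params defined in the snippet above; tf.keras.optimizers.SGD, the learning rate, and the number of steps are just illustrative choices):

# Illustrative sketch: any TensorFlow optimizer can be used here
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

for step in range(100):
    with tf.GradientTape() as tape:
        # Here the cost is simply the QNode output
        loss = circuit(params)
    grads = tape.gradient(loss, [params])
    opt.apply_gradients(zip(grads, [params]))

print(params)  # optimized parameters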

Thanks Togan, the code you shared helps, but I cannot get it to work with my cost function:

dev1 = qml.device("default.qubit.tf", wires=1)

@qml.qnode(dev1, interface="tf")
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

params = tf.Variable([0.54, 0.12])

def cost(params):
    return np.sqrt(np.abs(circuit(params) - 1))

with tf.GradientTape() as tape:
    # Use the circuit to calculate the loss value
    loss = cost(params)

params_grad = tape.gradient(loss, params)

print(params_grad)

gives:
AttributeError: 'numpy.float64' object has no attribute '_id'

Once I can get this working, I'll be able to train my model.

Hi @quantopia! It looks like NumPy functions are being used in your cost (np.sqrt and np.abs). To keep the output differentiable, every operation applied to differentiable tensors inside the cost function must use TensorFlow functions.

You may try the following:

dev1 = qml.device("default.qubit.tf", wires=1)

@qml.qnode(dev1, interface="tf")
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

params = tf.Variable([0.54, 0.12])


def cost(params):
    return tf.math.sqrt(tf.math.abs(circuit(params) - 1))

with tf.GradientTape() as tape:
    # Use the circuit to calculate the loss value
    loss = cost(params)

params_grad = tape.gradient(loss, params)

print(params_grad)

Please let us know if the error continues :slight_smile:

Thanks Togan, this has helped me a lot. I'm looking forward to using PennyLane more!


@quantopia, glad you have solved the issue! If you have any more questions, don't hesitate to create a new forum post.