Using the state vector directly

I'm asking about the state vector: could it be made differentiable, and if so, how would I do that? @nathan

Sorry, I replied to the wrong topic. If you want to try hacking in a differentiable state vector, you'll have to inspect the code of https://github.com/XanaduAI/pennylane/blob/master/pennylane/beta/plugins/default_tensor_tf.py

Unfortunately, I can’t really provide much more specific guidance beyond what I’ve already written above (about having to register custom gradients of the _state function with TensorFlow).
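For reference, the general TensorFlow mechanism for this is `tf.custom_gradient`. Here is a minimal sketch, unrelated to PennyLane (`cube` is just a toy function standing in for whatever forward pass you want to register a gradient for):

```python
import tensorflow as tf

@tf.custom_gradient
def cube(x):
    y = x ** 3

    def grad(dy):
        # Supply the analytic derivative 3x^2 ourselves, instead of
        # letting TF differentiate the forward pass automatically.
        return dy * 3.0 * x ** 2

    return y, grad

# Use a constant + tape.watch: passing a tf.Variable directly into a
# custom-gradient function requires extra handling of the `variables` kwarg.
x = tf.constant(2.0)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = cube(x)

g = tape.gradient(y, x)
print(g)  # tf.Tensor(12.0, shape=(), dtype=float32)
```

Registering a gradient for `_state` would follow the same pattern, just with the simulator's forward pass in place of `cube`.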

Hi @kareem_essafty, the master version of PennyLane on GitHub has a new feature called the PassthruQNode. Using this QNode with the default.tensor.tf device should do what you want. For example, consider the following:

import tensorflow as tf
import pennylane as qml
from pennylane.qnodes import PassthruQNode

dev = qml.device('default.tensor.tf', wires=2)

def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RX(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

# A PassthruQNode keeps everything as TF tensors end-to-end,
# so no gradient information is lost between PennyLane and TensorFlow
qnode = PassthruQNode(circuit, dev)
params = tf.Variable([0.3, 0.1])

with tf.GradientTape() as tape:
    tape.watch(params)
    qnode(params)
    # _state is the device's (private) TF tensor holding the state vector
    state = dev._state

grad = tape.gradient(state, params)

print("State:", state)
print("Gradient:", grad)

This gives the output:

State: tf.Tensor(
[[ 0.98753537+0.j          0.        -0.04941796j]
 [-0.00746879+0.j          0.        -0.14925138j]], shape=(2, 2), dtype=complex128)
Gradient: tf.Tensor([-0.09933467 -0.09933467], shape=(2,), dtype=float32)
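To see why this works, the same idea can be sketched in plain TensorFlow, with no PennyLane at all (assumption: the gate matrices and the `kron2` helper below are hand-rolled stand-ins, not the actual default.tensor.tf implementation). It builds the state of the circuit above and differentiates a loss defined directly on the state:

```python
import tensorflow as tf

IMAG = tf.constant(1j, dtype=tf.complex128)

def rx(theta):
    # Single-qubit RX(theta) rotation matrix, built from TF ops so that
    # gradients flow from the complex matrix back to the real parameter.
    c = tf.cast(tf.cos(theta / 2), tf.complex128)
    s = tf.cast(tf.sin(theta / 2), tf.complex128)
    return tf.stack([tf.stack([c, -IMAG * s]),
                     tf.stack([-IMAG * s, c])])

def kron2(a, b):
    # Kronecker product of two 2x2 matrices -> 4x4
    return tf.reshape(a[:, None, :, None] * b[None, :, None, :], (4, 4))

CNOT = tf.constant([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=tf.complex128)

ket00 = tf.constant([[1], [0], [0], [0]], dtype=tf.complex128)
params = tf.Variable([0.3, 0.1], dtype=tf.float64)

with tf.GradientTape() as tape:
    # state = CNOT (RX(p0) x RX(p1)) |00>, same circuit as above
    state = CNOT @ kron2(rx(params[0]), rx(params[1])) @ ket00
    # any real-valued function of the state works as a loss,
    # e.g. the probability of measuring |00>
    loss = tf.abs(state[0, 0]) ** 2

grad = tape.gradient(loss, params)
print("Loss:", loss)
print("Gradient:", grad)
```

Because every step is an ordinary TF op on tensors, the tape can backpropagate through the complex state; the PassthruQNode achieves the same thing by never leaving TensorFlow.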

Thanks @josh! The part about explicitly declaring it as a PassthruQNode was the missing piece of my earlier suggestion.

You saved me, thank you very much!
I had been using Qiskit directly and computing everything from scratch.

Hi @kareem_essafty,

Glad we could help. Just for clarity, do you mean that you had previously been using qiskit directly and computing everything by hand, but now you can automate that process using the suggestions above?

Hi,
Yeah, I used Qiskit within Keras layers and also specified the gradient functions myself, just to access the state vector and differentiate the loss with respect to it.
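For anyone landing here later, that approach can be sketched roughly as follows (assumptions: `fake_expectation` is a toy stand-in returning ⟨Z⟩ = cos θ in place of a real Qiskit simulation, and `CircuitLayer` is a made-up name, not a Qiskit or Keras API):

```python
import tensorflow as tf

@tf.custom_gradient
def fake_expectation(theta):
    # Stand-in for calling out to a Qiskit simulator; TF cannot
    # differentiate through an external simulator, so the analytic
    # gradient d cos(theta)/d theta = -sin(theta) is supplied by hand.
    def grad(dy):
        return -dy * tf.sin(theta)
    return tf.cos(theta), grad

class CircuitLayer(tf.keras.layers.Layer):
    def build(self, input_shape):
        self.theta = self.add_weight(
            name="theta", shape=(),
            initializer=tf.keras.initializers.Constant(0.5))

    def call(self, inputs):
        # Read the variable into a plain tensor before the custom-gradient
        # call, so the custom grad_fn does not need the `variables` kwarg.
        theta = tf.convert_to_tensor(self.theta)
        # scale the classical inputs by the "circuit" output
        return inputs * fake_expectation(theta)

layer = CircuitLayer()
x = tf.ones((1, 3))
with tf.GradientTape() as tape:
    out = tf.reduce_sum(layer(x))

g = tape.gradient(out, layer.trainable_variables)
print(out, g)
```

With the PassthruQNode above, this hand-written gradient plumbing becomes unnecessary, since the whole simulation already lives inside TensorFlow.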