Attribute error - gradient descent on PQCs

I'm doing some basic gradient descent with parameterised quantum circuits and have been getting the following error:

AttributeError: 'tensor' object has no attribute '_id'

Any ideas why this might be happening? I have pasted my code below for reference:


import pennylane as qml 
from pennylane import numpy as np
import tensorflow as tf

n_wires = 4 
n_layers = 1

dev = qml.device('default.qubit', wires=n_wires)
dev1 = qml.device('default.qubit', wires=n_wires)

@qml.qnode(dev, interface="tf")
def phi_circ(weights): #generates the random vector phi
    
    for i in range(4):
        
        qml.RX(weights[i], wires=i)
        qml.RY(weights[4+i], wires=i)
        qml.RZ(weights[8+i], wires=i)
    
    return [qml.expval(qml.Identity(wires=i)) for i in range(n_wires)]


@qml.template
def layer(weights, wires):
    
    for i in range(4):   
        qml.RX(weights[i], wires=i)
        
    for i in range(4):    
        qml.RZ(weights[4+i], wires=i)
    
    qml.CZ(wires=[0,1])
    qml.CZ(wires=[0,2])
    qml.CZ(wires=[0,3])
    qml.CZ(wires=[1,2])
    qml.CZ(wires=[1,3])
    qml.CZ(wires=[2,3])
    

@qml.qnode(dev1, interface="tf")    
def psi_circ(thetas): #generates the parameterised vector psi
    
    for i in range(n_layers):
        layer(weights=thetas, wires=range(n_wires))
    
    return [qml.expval(qml.Identity(wires=i)) for i in range(n_wires)]


random = tf.constant(np.random.uniform(0, 2*np.pi, 12))
phi_circ(random)
phi = tf.dtypes.cast(dev.state, tf.complex128)

init_thetas = np.random.uniform(0, 2*np.pi, 8)
thetas = tf.Variable(init_thetas)

def eplison(thetas):
    
    psi_circ(thetas)
    psi = tf.dtypes.cast(dev1.state, tf.complex128)

    e = tf.dtypes.cast(np.vdot(psi - phi, psi - phi), tf.complex128)

    return np.real(e)

opt = tf.keras.optimizers.SGD(0.4)

cost = lambda: eplison(thetas)

for step in range(50):
    opt.minimize(cost, thetas)


Hi Zohim! It looks like some NumPy functions are being used in your cost function. Within the cost, every operation on differentiable tensors must be done in TensorFlow, so that the output remains differentiable.
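To see why, here is a minimal plain-TensorFlow sketch (nothing PennyLane-specific, just a made-up quadratic loss): calling a NumPy function on a tensor pulls the value out of the computation as a plain array, so the gradient tape records nothing.

```python
import numpy as np
import tensorflow as tf

x = tf.Variable(2.0)

# TensorFlow ops are recorded on the tape, so the gradient flows
with tf.GradientTape() as tape:
    loss_tf = tf.math.square(x)
grad_tf = tape.gradient(loss_tf, x)  # d(x^2)/dx = 2*x = 4.0

# A NumPy op converts the tensor to a plain array, leaving the tape
with tf.GradientTape() as tape:
    loss_np = np.square(x.numpy())
grad_np = tape.gradient(tf.constant(loss_np), x)  # None: nothing was recorded

print(grad_tf, grad_np)
```

This is exactly what happens when `np.vdot` and `np.real` appear inside the cost: the optimizer then has no gradient to follow.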

The following cost function works for me:

def eplison(thetas):
    psi_circ(thetas)
    psi = dev1.state
    e = tf.tensordot(tf.math.conj(psi - phi), psi - phi, axes=1)
    return tf.math.real(e)

Note that the np.vdot has been replaced with tf.math.conj and tf.tensordot, while np.real has been replaced with tf.math.real.
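As a standalone sanity check (with two made-up complex vectors standing in for `psi - phi`), the TensorFlow pair reproduces `np.vdot` exactly: conjugate the first argument, then contract.

```python
import numpy as np
import tensorflow as tf

# Arbitrary complex vectors for the comparison
v = tf.constant([1.0 + 2.0j, 3.0 - 1.0j], dtype=tf.complex128)
w = tf.constant([0.5 + 0.5j, 2.0 + 1.0j], dtype=tf.complex128)

tf_result = tf.tensordot(tf.math.conj(v), w, axes=1)  # sum(conj(v) * w)
np_result = np.vdot(v.numpy(), w.numpy())             # same contraction

print(np.allclose(tf_result.numpy(), np_result))  # → True
```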

Finally, you’ll need to change the device to one that uses TensorFlow internally if you want the accessed state dev.state to be differentiable. You can use the default.qubit.tf device:

dev1 = qml.device('default.qubit.tf', wires=n_wires)

After making the above changes to the cost function, and changing the device, the script now works for me :slightly_smiling_face:

Thank you for clarifying this. I was under the assumption that importing the wrapped version of numpy from PennyLane created tensors with requires_grad=True and hence you could use the numpy commands?

> I was under the assumption that importing the wrapped version of numpy from PennyLane created tensors with requires_grad=True and hence you could use the numpy commands?

That’s correct, but only when you are using NumPy/Autograd QNodes (interface="autograd") and the NumPy/Autograd optimizers built into PennyLane.

If you are instead using the TensorFlow interface and TensorFlow optimizers, you will need to use TensorFlow everywhere, rather than NumPy. In fact, when using TensorFlow, you can simply import standard NumPy :slightly_smiling_face: (import numpy as np).