Hi,

I’m trying to run a QAOA optimization on the `default.qubit.tf` device. I have the following QAOA implementation:

```
dev = qml.device('default.qubit.tf', wires = len(G.nodes), shots = 5000)

@qml.qnode(dev, interface = "tf", diff_method = "backprop")
def circuit(params, **kwargs):
    params = params.numpy()
    for i in range(len(G.nodes)):
        qml.Hadamard(wires = i)
    for j in range(p):
        U_C(params[0][j])
        qaoa.mixer_layer(params[1][j], mixer_h)
    return qml.probs(wires = list(range(len(G.nodes))))
```

where `U_C` implements the cost Hamiltonian unitaries and `qaoa.mixer_layer` the mixer Hamiltonian ones. The parameters `params` are initially defined as

```
params = tf.Variable(0.01*np.random.rand(2, p))
```
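Incidentally, when I print a slice of the variable it already seems to come back as a plain tensor, which made me wonder whether the `.numpy()` call is needed at all. This is just a standalone check (with `p` hard-coded, no graph involved):

```
import numpy as np
import tensorflow as tf

p = 4  # hard-coded here just for the check
params = tf.Variable(0.01 * np.random.rand(2, p))

# Indexing the tf.Variable returns an eager tf.Tensor rather than a
# tf.Variable, so slices like params[0][j] are already tensors.
gamma_0 = params[0][0]
print(isinstance(gamma_0, tf.Tensor))  # True
```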

so that’s why I redefine them inside the circuit: it lets me feed them to the gates as `numpy` values (I don’t know if this is appropriate). My cost function is

```
def cost_function(params):
    result = circuit(params).numpy()[0]
    counts = {}
    for i in range(len(result)):
        counts[f"{i:010b}"] = result[i]
    E = 0
    for bitstring in counts.keys():
        x = string_to_list(bitstring)
        energy = MaxCut_cost(G, x)
        E += -energy * counts[bitstring]
    return tf.convert_to_tensor(np.array([E]))
```
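As an aside, since the per-bitstring energies don’t depend on `params`, I guess the loop could also be written as a single dot product with TF ops, so the result never leaves TensorFlow. A sketch with stand-in random energies and uniform probabilities in place of my `MaxCut_cost` values and circuit output:

```
import numpy as np
import tensorflow as tf

n = 10                                    # stand-in for len(G.nodes)
energies = np.random.rand(2 ** n)         # stand-in for the MaxCut_cost values
probs = tf.constant(np.full(2 ** n, 1.0 / 2 ** n))  # stand-in for circuit(params)

# Expectation of -energy in one TF op; no .numpy() round trip,
# so the result stays a tensor connected to its inputs.
E = -tf.tensordot(tf.constant(energies), probs, axes=1)
```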

which does the following:

- Takes the probability outcomes coming from the circuit and turns them into `numpy` variables.
- Stores the obtained probabilities in a dictionary keyed by bitstring.
- For each bitstring, computes its energy with `MaxCut_cost`, which implements the MaxCut cost function for the graph `G` I’ve defined.
- Finally, returns the energy as a `tensor` variable, of the same type as the parameters I’m passing in.

First of all, I’m not very familiar with `tensorflow`, which is (probably) why my implementation is terrible. Secondly, I tried to do the optimization the same way I usually do with the `default.qubit` device, i.e., I define an optimizer `opt = Optimizer(lr = 0.1)` and then compute the new parameters as `params = opt.step(cost_function, params)`, but it gives me the following error:

`TypeError: Can't differentiate w.r.t. type <class 'tensorflow.python.ops.resource_variable_ops.ResourceVariable'>`
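From the TensorFlow tutorials, I get the impression the optimization should instead be a `tf.GradientTape` loop with a Keras optimizer. Here is the pattern I mean, with a dummy quadratic cost standing in for my real circuit-based `cost_function`:

```
import tensorflow as tf

params = tf.Variable([[0.5, -0.3], [0.2, 0.4]])
opt = tf.keras.optimizers.SGD(learning_rate = 0.1)

def dummy_cost(params):
    # Stand-in for the circuit-based cost function.
    return tf.reduce_sum(params ** 2)

for step in range(200):
    # Record the forward pass so TF can backpropagate through it.
    with tf.GradientTape() as tape:
        cost = dummy_cost(params)
    grads = tape.gradient(cost, [params])
    opt.apply_gradients(zip(grads, [params]))
```

I’m not sure whether this is the right way to combine it with the QNode, though.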

I would really appreciate your help. The reason I’m trying this `tensorflow` approach is that I want to optimize for large values of `p`, i.e., the number of layer repetitions in QAOA, and the documentation specifies that this device is the appropriate one for working with a large number of parameters.

If you have further questions about my implementation, I’ll be happy to provide more details. Thanks in advance!

Cheers,

Javier.