I’m trying to make a QGAN with a quantum generator and a classical NN as the discriminator but I’m having trouble fetching the gradients of the gen_weights in a trainable circuit.

However, fetching the gradients of the discriminator's trainable variables works. I've tested each loss function and both work as well (I'm using tf.keras.losses.BinaryCrossentropy). My hunch is that the generator loss isn't being differentiated beyond the classical NN, but I'm not sure why. This is the structure of each loss:

- disc_loss: generate an array of data from gen_circuit(gen_weights) --> feed both the generated data and the real data into the discriminator NN --> sum the cross-entropies between the discriminator's outputs for real and fake data and their ideal labels
- gen_loss: generate an array of data from gen_circuit(gen_weights) --> feed the generated data into the discriminator NN --> compute the cross-entropy between the discriminator's output for the fake data and an all-ones tensor

tf can fetch the gradients of each loss with respect to the classical NN weights, but it can't reach back further in the case of gen_loss to compute the gradients of gen_weights. How can I fetch these gradients as well? Here is the relevant code sample, which returns `None` when fetching the generator gradients but returns the expected tensor when fetching the discriminator's gradients:

Thank you!

```
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(fake_output):
    """Compute generator loss: fake outputs scored against all-ones labels."""
    return cross_entropy(tf.ones_like(fake_output), fake_output)

def discriminator_loss(real_output, fake_output):
    """Compute discriminator loss."""
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    return real_loss + fake_loss

generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)

def train_step(equity_data, gen_weights):
    """Run a train step on the provided batch."""
    with tf.GradientTape() as disc_tape, tf.GradientTape() as gen_tape:
        generated_prices = [equity_data[0], gen_circuit(equity_data[0], gen_weights)]
        # Reshape equity arrays to feed into the discriminator
        gen_prices_in_one_arr = reshape_to_one_axis(generated_prices)
        real_prices_in_one_arr = reshape_to_one_axis(equity_data)
        # Get discriminator outputs
        real_output = discriminator(real_prices_in_one_arr)
        fake_output = discriminator(gen_prices_in_one_arr)
        # Compute losses
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    gradients_of_generator = gen_tape.gradient(gen_loss, gen_weights)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    print(gradients_of_generator)  # prints None
    # generator_optimizer.apply_gradients(
    #     zip(gradients_of_generator, gen_weights))
    # discriminator_optimizer.apply_gradients(
    #     zip(gradients_of_discriminator, discriminator.trainable_variables))
    return gen_loss, disc_loss
```
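For reference, here is a minimal sketch (with placeholder TF ops standing in for my actual gen_circuit) that reproduces the behavior I'm seeing: the tape returns a gradient tensor when every operation stays inside TensorFlow, but returns `None` once the computation leaves the TF graph (e.g. via `.numpy()`), which I suspect is what happens inside my circuit:

```python
import tensorflow as tf

# Placeholder weights; tf.sin stands in for a differentiable circuit op.
weights = tf.Variable([0.1, 0.2, 0.3])

# Case 1: every op stays in TensorFlow -> gradient is a tensor.
with tf.GradientTape() as tape:
    out = tf.reduce_sum(tf.sin(weights))
print(tape.gradient(out, weights))  # a tensor, not None

# Case 2: the computation leaves the TF graph mid-way -> gradient is None.
with tf.GradientTape() as tape:
    out = tf.reduce_sum(tf.sin(weights)).numpy()  # exits the graph
    out = tf.constant(out)                        # re-entering doesn't help
print(tape.gradient(out, weights))  # None: the tape lost the path
```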