Count the number of function evaluations of a TF optimizer

I am applying SGD as below, and the step count comes out as 1. I don't want to specify the number of steps up front; I need to check how many steps the optimizer takes to optimize the function.

theta = tf.Variable(theta, dtype=tf.float64)
opt = tf.keras.optimizers.SGD(learning_rate=0.001)

with tf.GradientTape() as tape:
    loss = tf.abs(circuit(theta) - 0.5)**2

gradients = tape.gradient(loss, [theta])
opt.apply_gradients(zip(gradients, [theta]))
print("steps", opt.iterations.numpy())

Hi @Amandeep,

Thanks for creating a new issue. If I understand correctly, you want to determine how many steps it took for the optimizer to converge. The code example above should give you exactly what you need, except that you'll need a loop enclosing the final three lines. (Note that I'm unable to reproduce the specific behaviour you describe because your code example is not complete.)

The common way to do this would be to either:

  1. choose a large number of steps (more than you think the optimizer will need) and iterate in a for loop (see the sketch after this list), or
  2. use a while loop.
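
For option 1, a minimal sketch could look like this. It assumes `circuit`, `theta`, and `opt` are already defined as in your snippet, and `max_steps` is just an arbitrary cap I've picked for illustration:

max_steps = 10000  # deliberately larger than the optimizer should need
for _ in range(max_steps):
    with tf.GradientTape() as tape:
        loss = tf.abs(circuit(theta) - 0.5)**2
    gradients = tape.gradient(loss, [theta])
    opt.apply_gradients(zip(gradients, [theta]))
    # a break based on a convergence criterion goes here (see below)

print("steps", opt.iterations.numpy())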

The key ingredient your code still needs is a user-defined convergence criterion: you specify a condition under which you would say training is done, and then add a simple break statement if this condition holds true at some point during training.

For example, here is simple pseudocode that checks whether the loss function has crossed a threshold and halts accordingly:

converged = False
threshold = 0.001

while not converged:
    # take an optimization step using `apply_gradients` or another method

    # convergence check
    current_value = ...  # you'll have to evaluate the loss again here
    if current_value <= threshold:
        break

print("steps", opt.iterations.numpy())
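
Putting it all together with your SGD example, a complete runnable sketch could look like the following. Since your full code wasn't included, I've used a simple `tf.cos` stand-in for `circuit` and an arbitrary starting value for `theta`; swap in your actual circuit, and tune `threshold` and the learning rate to your problem:

import tensorflow as tf

# Placeholder for your circuit -- replace with your actual function
def circuit(theta):
    return tf.cos(theta)

theta = tf.Variable(0.1, dtype=tf.float64)
opt = tf.keras.optimizers.SGD(learning_rate=0.001)

threshold = 0.001
converged = False

while not converged:
    with tf.GradientTape() as tape:
        loss = tf.abs(circuit(theta) - 0.5)**2
    gradients = tape.gradient(loss, [theta])
    opt.apply_gradients(zip(gradients, [theta]))

    # convergence check using the loss from this step's forward pass
    converged = loss.numpy() <= threshold

print("steps", opt.iterations.numpy())

Since `opt.iterations` is incremented on every `apply_gradients` call, the final print gives you exactly the number of optimization steps taken before the criterion was met.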