Optimizer is not updating Parameters

Hey,

I am having issues with the RMSPropOptimizer.step() method, which is not updating my parameters.
I am trying to run this notebook on my local machine:

Tutorial Embedding Generalization

This is the cost function:

def cost(weights, A=None, B=None):

    # Overlaps within each class and between the two classes
    aa = overlaps(weights, X1=A, X2=A)
    bb = overlaps(weights, X1=B, X2=B)
    ab = overlaps(weights, X1=A, X2=B)

    # Hilbert-Schmidt distance between the two embedded classes
    d_hs = -2 * ab + (aa + bb)
    print("cost: ", 1 - 0.5 * d_hs)
    return 1 - 0.5 * d_hs

And this is how the optimizer is called:

optimizer = qml.RMSPropOptimizer(stepsize=0.01)
batch_size = 5
pars = init_pars

cost_list = []
for i in range(400):

    # Sample a batch of training inputs from each class
    selectA = np.random.choice(range(len(A)), size=(batch_size,), replace=True)
    selectB = np.random.choice(range(len(B)), size=(batch_size,), replace=True)
    A_batch = [A[s] for s in selectA]
    B_batch = [B[s] for s in selectB]

    # Walk one optimization step
    pars = optimizer.step(lambda w: cost(w, A=A_batch, B=B_batch), pars)
    print(pars)

pars is initialized like this:

# generate initial parameters for the quantum component, such that
# the resulting number of trainable quantum parameters is equal to
# the product of the elements that make up the 'size' attribute
# (4 * 3 = 12).
init_pars_quantum = np.random.normal(loc=0, scale=0.1, size=(4, 3))

# generate initial parameters for the classical component, such that
# the resulting number of trainable classical parameters is equal to
# the product of the elements that make up the 'size' attribute.
init_pars_classical = np.random.normal(loc=0, scale=0.1, size=(2, 4))

pars = [init_pars_classical, init_pars_quantum]

When printing out the pars variable after every optimization step, it just stays the same, even though the cost function evaluates to a different loss value in every step. I also receive the following warning:

/home/user/.local/lib/python3.11/site-packages/pennylane/_grad.py:110: UserWarning: Attempted to differentiate a function with no trainable parameters. If this is unintended, please add trainable parameters via the 'requires_grad' attribute or 'argnum' keyword.

I am using PennyLane 0.33.1.

Thanks for your help!

Hello @jo87casi ! Welcome to the forum!

As you can see from the warning message, the optimizer is having trouble because it does not recognize the parameters as trainable, and therefore it does not update them. In that case, at the step where you define pars, have you tried setting requires_grad = True when you call the np.random functions?
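
For example, assuming np here is PennyLane's wrapped NumPy (from pennylane import numpy as np, as in the tutorial), something along these lines should mark the arrays as trainable (just a sketch):

from pennylane import numpy as np  # PennyLane's autograd-wrapped NumPy

# Mark both parameter arrays as trainable when creating them
init_pars_quantum = np.random.normal(loc=0, scale=0.1, size=(4, 3), requires_grad=True)
init_pars_classical = np.random.normal(loc=0, scale=0.1, size=(2, 4), requires_grad=True)

pars = [init_pars_classical, init_pars_quantum]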

Let me know if it helps! :slight_smile:

Edit: If I may suggest, I think it would also be easier if you did not define the cost function as an anonymous function at first. It will make your code easier to read and debug! :wink:
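
For instance (just a sketch, reusing the names from your snippet), you could replace the lambda with a small named function defined inside the loop:

# Inside the training loop, after sampling A_batch and B_batch
def batch_cost(w):
    return cost(w, A=A_batch, B=B_batch)

pars = optimizer.step(batch_cost, pars)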

Hey @ludmilaaasb, thanks a lot for your answer.
Yes, I tried adding the requires_grad=True argument, but it didn't help. It also seems to be True by default. Of course I could write a wrapper around the cost function, but that doesn't make debugging any easier for me. Or do you know how else I can get rid of the anonymous function?

Here you can see the output if I print the pars variable in every step.

I can also see that the cost function evaluates to a different value in every step, but the parameters to be optimized do not change. Whatever I pass in as w is also returned unchanged by the optimizer.

For further information, you can also find this tutorial under the community demos on the PennyLane website (third most recent demo), listed there as “Generalization of Quantum Metric Learning Classifiers”.

Thanks a lot for your help.

I solved my problem. Since I'm using a nested list, I thought I might need to flatten the parameters before passing them to the optimizer and then reshape them back after the optimization step.
I used the following steps, and the training seems to work now:

# Helper function to reshape the flat parameter vector back into the nested
# structure (uses the shapes of the original pars for the split and reshape)
def reshape_pars(flat_pars):
    split_index = pars[0].size  # index at which to split the flat parameters
    new_pars_classical = flat_pars[:split_index].reshape(pars[0].shape)
    new_pars_quantum = flat_pars[split_index:].reshape(pars[1].shape)
    return [new_pars_classical, new_pars_quantum]

# Flatten the parameters before passing them to the optimizer
flat_pars = np.concatenate([pars[0].flatten(), pars[1].flatten()])

# Inside the optimization loop: take the step on the flat parameter vector ...
new_flat_pars = optimizer.step(lambda w: cost(reshape_pars(w), A=A_batch, B=B_batch), flat_pars)

# ... and reshape it back to the original nested structure afterwards
pars = reshape_pars(new_flat_pars)
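
For completeness, this is roughly how the pieces fit together in my training loop (a sketch based on the snippets above; the key point is that flat_pars has to be carried over between iterations so each step continues from the updated values):

flat_pars = np.concatenate([pars[0].flatten(), pars[1].flatten()])

for i in range(400):
    # Sample a batch of training inputs from each class
    selectA = np.random.choice(range(len(A)), size=(batch_size,), replace=True)
    selectB = np.random.choice(range(len(B)), size=(batch_size,), replace=True)
    A_batch = [A[s] for s in selectA]
    B_batch = [B[s] for s in selectB]

    # Optimize the flat parameter vector and carry it over to the next iteration
    flat_pars = optimizer.step(lambda w: cost(reshape_pars(w), A=A_batch, B=B_batch), flat_pars)

# Recover the nested structure once training is done
pars = reshape_pars(flat_pars)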

Thank you for sharing the solution here, @jo87casi!
Please let us know if you run into any further issues.