Selective freeze of optimized params

Many ‘standard’ optimizers allow for selectively freezing parameters. This is useful in the R&D phase, when it is not yet clear which degrees of freedom matter, because it allows for experimentation without changing the code too much. I tried to do it with optimizer = NesterovMomentumOptimizer(stepsize=0.4) for the case of 5 parameters:

params = np.array([0.3, -0.3, 0.1, 0.2, -0.2], requires_grad=True)
fixPar = [0, 1, 4] # Define the frozen parameters' indices

The solution I found (with help from ChatGPT) is a bit clumsy: the gradient is computed for all 5 params, but then I manually mask out the update for the frozen params.
Q1: is this a valid solution for PennyLane?
Q2: Is there a more elegant solution, where I just pass fixPar to the optimizer?
This is the full code:

import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import NesterovMomentumOptimizer

# Define the device and quantum circuit
dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.RZ(params[2], wires=2)
    qml.RX(params[3], wires=0)
    qml.RY(params[4], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.CNOT(wires=[1, 2])
    return qml.expval(qml.PauliZ(0))

def cost(params):
    return circuit(params)

# Initialize parameters
params = np.array([0.3, -0.3, 0.1, 0.2, -0.2], requires_grad=True)

# Define the frozen parameters' indices
fixPar = [0, 1, 4]

# Create a mask to freeze specific parameters based on fixPar
param_mask = np.ones_like(params)
param_mask[fixPar] = 0

# Save the original values of the frozen parameters
original_values = params.copy()

# Draw and print the circuit
print("Initial circuit:")
print(qml.draw(circuit)(params))

# Initialize the optimizer
optimizer = NesterovMomentumOptimizer(stepsize=0.4)

# Number of optimization steps
steps = 100

for i in range(steps):
    # Compute the gradient
    grad = qml.grad(cost)(params)
    
    # Update only the optimizable parameters based on the gradient
    update_step = np.zeros_like(params)
    update_step[param_mask == 1] = optimizer.stepsize * grad[param_mask == 1]
    
    # Update parameters
    params -= update_step
    
    # Calculate current cost for monitoring
    current_cost = cost(params)
    if (i + 1) % 10 == 0:
        print(f"Step {i+1}: cost = {current_cost}")

print(f"Optimized parameters: {params}")

Hey @Jan_Balewski,

Interesting! Another way to tackle this would be to do some pre-processing yourself and separate the differentiable parameters from the non-differentiable parameters into two different arguments to your circuit and cost function. That way it’s very explicit to PennyLane. That, or you can manually do the differentiation yourself w.r.t. each parameter, skipping those parameters whose index belongs to fixPar (this is similar to what you’re doing already).
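
Just to make the first suggestion concrete, here is a minimal sketch of what that split could look like for your 5-angle circuit. The grouping (original indices 2 and 3 trainable, indices 0, 1 and 4 frozen, i.e. your fixPar) and the argument names are only an illustration:

import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import NesterovMomentumOptimizer

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(train_params, fixed_params):
    # same gates as in your post, but the angles are split into two arguments:
    # train_params holds original indices 2 and 3, fixed_params holds 0, 1 and 4
    qml.RX(fixed_params[0], wires=0)
    qml.RY(fixed_params[1], wires=1)
    qml.RZ(train_params[0], wires=2)
    qml.RX(train_params[1], wires=0)
    qml.RY(fixed_params[2], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.CNOT(wires=[1, 2])
    return qml.expval(qml.PauliZ(0))

def cost(train_params, fixed_params):
    return circuit(train_params, fixed_params)

train_params = np.array([0.1, 0.2], requires_grad=True)
fixed_params = np.array([0.3, -0.3, -0.2], requires_grad=False)

opt = NesterovMomentumOptimizer(stepsize=0.4)
for i in range(100):
    # only the requires_grad=True argument is updated by the optimizer;
    # the requires_grad=False argument is returned unchanged
    train_params, fixed_params = opt.step(cost, train_params, fixed_params)

Since the frozen values live in an argument with requires_grad=False, the optimizer never touches them and no masking is needed.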

Does the approach you have in your post work for you?

When a project is mature I’d do exactly that: separate the differentiable parameters from the non-differentiable parameters into two different argument tuples. But when I do R&D, have lots of conflicting ideas, and the circuit is not a simple pattern (not an Ansatz), it takes a lot of code changes to check those ideas.
Let’s stick to my question Q1: is this scheme of blocking params proposed by ChatGPT sound? I only checked that the code does not crash, but this particular circuit is nonsense (the parameters are redundant), so I can’t tell whether the blocking of fixPar works as intended. That is why I asked here. If you have an equivalent alternative solution (end-to-end example), where I only change the values in the fixPar list, kindly point me to it.

The approach that ChatGPT provided seems to be okay. If I print the parameters out, some are changing, and some aren’t:

for i in range(steps):
    # Compute the gradient
    grad = qml.grad(cost)(params)
    
    # Update only the optimizable parameters based on the gradient
    update_step = np.zeros_like(params)
    update_step[param_mask == 1] = optimizer.stepsize * grad[param_mask == 1]
    
    # Update parameters
    params -= update_step
    
    print(params)
[ 0.3        -0.3         0.1         0.39177022 -0.2       ]
[ 0.3       -0.3        0.1        0.6469308 -0.2      ]
[ 0.3        -0.3         0.1         0.97158135 -0.2       ]
[ 0.3        -0.3         0.1         1.35380862 -0.2       ]
[ 0.3       -0.3        0.1        1.7524312 -0.2      ]
[ 0.3        -0.3         0.1         2.10692672 -0.2       ]
[ 0.3        -0.3         0.1         2.37506244 -0.2       ]
[ 0.3        -0.3         0.1         2.55497844 -0.2       ]
[ 0.3        -0.3         0.1         2.66806092 -0.2       ]
[ 0.3        -0.3         0.1         2.73712576 -0.2       ]
[ 0.3        -0.3         0.1         2.77883655 -0.2       ]
[ 0.3        -0.3         0.1         2.80392252 -0.2       ]
[ 0.3        -0.3         0.1         2.81898701 -0.2       ]
[ 0.3       -0.3        0.1        2.8280285 -0.2      ]
[ 0.3        -0.3         0.1         2.83345399 -0.2       ]
[ 0.3        -0.3         0.1         2.83670942 -0.2       ]
[ 0.3        -0.3         0.1         2.83866271 -0.2       ]

You can just print out the parameters and make sure that it’s working :slight_smile:
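
One extra note on your Q2: the manual loop in your post updates the parameters with optimizer.stepsize * grad, so it effectively runs plain gradient descent and never uses the Nesterov momentum rule (optimizer.step is never called). If you would rather keep the optimizer’s own update rule and only maintain the fixPar list, a possible variant is to pass a masked gradient function to opt.step through its grad_fn keyword, which the gradient-descent-based optimizers accept. Here is a sketch (masked_grad is just an illustrative name); the frozen entries then never accumulate any update:

import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import NesterovMomentumOptimizer

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.RZ(params[2], wires=2)
    qml.RX(params[3], wires=0)
    qml.RY(params[4], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.CNOT(wires=[1, 2])
    return qml.expval(qml.PauliZ(0))

def cost(params):
    return circuit(params)

params = np.array([0.3, -0.3, 0.1, 0.2, -0.2], requires_grad=True)
fixPar = [0, 1, 4]

# mask with 0 for frozen entries and 1 for trainable entries
param_mask = np.ones_like(params)
param_mask[fixPar] = 0

def masked_grad(params):
    # zero out the gradient of the frozen parameters so the optimizer
    # never accumulates an update (or momentum) for them
    return qml.grad(cost)(params) * param_mask

opt = NesterovMomentumOptimizer(stepsize=0.4)
for i in range(100):
    # opt.step applies its usual Nesterov update, but sees the masked gradient
    params = opt.step(cost, params, grad_fn=masked_grad)

print(params)  # the fixPar entries should still be 0.3, -0.3 and -0.2

With this variant the only thing you change between experiments is the fixPar list; the rest of the script stays as it is.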