Many ‘standard’ optimizers allow selective freezing of parameters. This is useful in the R&D phase, when it is not yet clear which degrees of freedom matter, and it allows experimentation without changing the code too much. I tried to do it with `optimizer = NesterovMomentumOptimizer(stepsize=0.4)`

for the case of 5 parameters:

```
params = np.array([0.3, -0.3, 0.1, 0.2, -0.2], requires_grad=True)
fixPar = [0, 1, 4] # Define the frozen parameters' indices
```

The solution I found (with help from ChatGPT) is a bit clumsy: the gradient is computed for all 5 parameters, but then I manually mask the update for the frozen ones.

Q1: is this a valid solution for PennyLane?

Q2: Is there a more elegant solution, where I just pass fixPar to optimizer?

This is the full code:

```
import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import NesterovMomentumOptimizer

# Define the device and quantum circuit
dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.RZ(params[2], wires=2)
    qml.RX(params[3], wires=0)
    qml.RY(params[4], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.CNOT(wires=[1, 2])
    return qml.expval(qml.PauliZ(0))

def cost(params):
    return circuit(params)

# Initialize parameters
params = np.array([0.3, -0.3, 0.1, 0.2, -0.2], requires_grad=True)

# Define the frozen parameters' indices
fixPar = [0, 1, 4]

# Create a mask to freeze specific parameters based on fixPar
param_mask = np.ones_like(params)
param_mask[fixPar] = 0

# Save the original values of the frozen parameters
original_values = params.copy()

# Draw and print the circuit
print("Initial circuit:")
print(qml.draw(circuit)(params))

# Initialize the optimizer
optimizer = NesterovMomentumOptimizer(stepsize=0.4)

# Number of optimization steps
steps = 100

for i in range(steps):
    # Compute the gradient for all parameters
    grad = qml.grad(cost)(params)

    # Update only the trainable parameters; frozen entries get a zero step
    update_step = np.zeros_like(params)
    update_step[param_mask == 1] = optimizer.stepsize * grad[param_mask == 1]

    # Plain gradient-descent update (note: this bypasses the optimizer's momentum)
    params -= update_step

    # Calculate current cost for monitoring
    current_cost = cost(params)
    if (i + 1) % 10 == 0:
        print(f"Step {i+1}: cost = {current_cost}")

print(f"Optimized parameters: {params}")
```