V.3.3: Error: Grad only applies to real scalar-output functions

I recently ran into an issue with codebook exercise V.3.3. When I attempt to solve it, the evaluator raises the error "Grad only applies to real scalar-output functions". I believe my code is correct (see below).

import pennylane as qml
from pennylane import numpy as np

numbers = [1, 3, 4, 7, 13]
total_sum = sum(numbers)
# Number of qubits needed (one qubit per number)
n_wires = len(numbers)
# Define the mixer Hamiltonian
mixer_h = qml.Hamiltonian([1.0] * n_wires, [qml.PauliX(i) for i in range(n_wires)])
dev = qml.device('default.qubit', wires=n_wires)
# Hyperparameters for the optimizer
p = 20
params = np.ones((p, 2), requires_grad=True) * 0.5
step_size = 0.001
max_steps = 150

def build_cost_number_partition(numbers):
    """Function to build the cost Hamiltonian of a number partition problem.

    Args:
        numbers (list): A list with the numbers we want to divide in two groups with equal sums.
        
    Returns:
        (qml.Hamiltonian): The cost Hamiltonian of the number partition problem
    """      
    ##################
    # YOUR CODE HERE #
    ##################
    coeffs = [ni * nj for ni in numbers for nj in numbers]
    components = [qml.Z(i) @ qml.Z(j) for i in range(len(numbers)) for j in range(len(numbers))]
    return qml.Hamiltonian(coeffs, components)
    
def optimizer_number_partition(params, p, step_size, max_steps, cost_h):
    """Optimizer to adjust the parameters.

    Args:
        params (np.array): An array with the trainable parameters of the QAOA ansatz.
        p (int): Number of layers of the QAOA ansatz.
        step_size (float): Learning rate of the gradient descent optimizer
        max_steps (int): Number of iterations of the optimizer.
        cost_h (qml.Hamiltonian): The cost Hamiltonian.

    Returns:
        (np.array): An array with the optimized parameters of the QAOA ansatz.
    """      
    optimizer = qml.AdamOptimizer(step_size)
    for _ in range(max_steps):
        # cost_function is provided by the codebook exercise
        params, _, _ = optimizer.step(cost_function, params, p, cost_h)
    return params

def QAOA_number_partition(params, p, step_size, max_steps, cost_h):
    """QAOA Algorithm to solve the number partition problem.

    Args:
        params (np.array): An array with the trainable parameters of the QAOA ansatz.
        p (int): Number of layers of the QAOA ansatz.
        step_size (float): Learning rate of the gradient descent optimizer
        max_steps (int): Number of iterations of the optimizer.
        cost_h (qml.Hamiltonian): The cost Hamiltonian.

    Returns:
        (np.tensor): A tensor with the final probabilities of measuring the quantum states.
        (np.array): The optimized parameters of the QAOA.
    """  
    ##################
    # YOUR CODE HERE #
    ##################
    param = optimizer_number_partition(params, p, step_size, max_steps, cost_h)
    # probability_circuit is provided by the codebook exercise
    probs = probability_circuit(param, p, cost_h)
    return probs, param

The issue appears to be the `requires_grad=True` argument to `np.ones`: the error vanishes when it is removed, though the result is then incorrect. I'm not sure whether something is wrong with my code, or whether something needs to change in the optimizer to fix the issue.
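For reference, the correct answer can be brute-forced in plain Python, which is how I know the output above is wrong. This is just a sanity check, not part of the exercise:

```python
from itertools import combinations

numbers = [1, 3, 4, 7, 13]
total = sum(numbers)

# A valid partition exists only if the total is even; each half must sum
# to total // 2. Collect every proper subset that hits that target.
solutions = []
if total % 2 == 0:
    for r in range(1, len(numbers)):
        for subset in combinations(numbers, r):
            if sum(subset) == total // 2:
                solutions.append(subset)

print(solutions)  # each subset and its complement form a valid partition
```

For these numbers the total is 28, and the two complementary solutions are {1, 13} and {3, 4, 7}, so the QAOA probabilities should peak on the corresponding bitstrings.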

Hi!
Thank you for this question, it really made me think.
For now, I have a quick fix, but I want to look into the reasons for this error with the team next week.
Your code is indeed almost right. I checked your Hamiltonian (cost_h) against ours and they match, except for one thing: your code produces terms of the form Z(i) @ Z(i), whereas ours uses Identity(0) instead. For some reason, using Z(i) @ Z(i) rather than Identity(0) gives rise to complex values, which is where the error comes from: our evaluator cannot discard the imaginary part (the way Python does if you run the code locally on your machine).
Take this into account when you are creating your Hamiltonian and it should work.

Let me know if this was helpful.