QNN - no trainable params

I have a QNN script working with a third-party optimiser, but when I try to use PennyLane's built-in optimisers it fails for various reasons.

I have created a small script to try and debug the problem. Please see the code below, which fails for a different reason to begin with.

Thanks for your help.

import pennylane as qml
from pennylane import numpy as np
from sklearn.metrics import mean_squared_error, accuracy_score

n_qubits = 2
n_samples = 5
n_params = 2
n_features = 2  # features per sample
epochs = 5

np.random.seed(11)

dev = qml.device("default.qubit", wires=n_qubits)


@qml.qnode(dev)
def circuit(data: np.ndarray, params: np.ndarray) -> float:
    # Encode the features, apply the trainable rotations, then entangle.
    qml.RX(data[0], wires=0)
    qml.RY(data[1], wires=1)

    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)

    qml.CZ(wires=(0, 1))

    return qml.expval(qml.PauliZ(1))

cap = dev.capabilities()
cap["supports_broadcasting"]  # check whether the device supports parameter broadcasting

def qnn(data, params):
    return circuit(data, params)


def cost_acc(params, data, y):
    exp_vals = qnn(data, params)
    cost = mean_squared_error(y, exp_vals)
    yhat = 2 * (exp_vals >= 0) - 1  # map expectation values to +/-1 labels
    return cost, accuracy_score(y, yhat, normalize=True)

x = np.random.rand(n_features, n_samples)
y = np.random.choice([-1, 1], size=n_samples)
weights = np.random.rand(n_params, requires_grad=True)

optimizer = qml.GradientDescentOptimizer(stepsize=0.1)

for k in range(epochs):
    optimizer.step(cost_acc, params=weights, data=x, y=y)

/local/zchandani/pennylane/qnns/lib/python3.8/site-packages/pennylane/_grad.py:107: UserWarning: Attempted to differentiate a function with no trainable parameters. If this is unintended, please add trainable parameters via the 'requires_grad' attribute or 'argnum' keyword.
  warnings.warn(

I modified the code a little and added a lambda so that the weights are passed positionally (my understanding is that keyword arguments are never treated as trainable):


cost = lambda params: cost_acc(params, data=x, y=y)

optimizer = qml.GradientDescentOptimizer(stepsize=0.1)

for k in range(epochs):

    a, b = optimizer.step_and_cost(cost, weights)


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[5], line 7
      3 optimizer = qml.GradientDescentOptimizer(stepsize=0.1)
      5 for k in range(epochs):
----> 7     a, b = optimizer.step_and_cost(cost, weights)

File /local/zchandani/pennylane/qnns/lib/python3.8/site-packages/pennylane/optimize/gradient_descent.py:59, in GradientDescentOptimizer.step_and_cost(self, objective_fn, grad_fn, *args, **kwargs)
     39 def step_and_cost(self, objective_fn, *args, grad_fn=None, **kwargs):
     40     """Update trainable arguments with one step of the optimizer and return the corresponding
     41     objective function value prior to the step.
     42 
   (...)
     56         If single arg is provided, list [array] is replaced by array.
     57     """
---> 59     g, forward = self.compute_grad(objective_fn, args, kwargs, grad_fn=grad_fn)
     60     new_args = self.apply_grad(g, args)
     62     if forward is None:

File /local/zchandani/pennylane/qnns/lib/python3.8/site-packages/pennylane/optimize/gradient_descent.py:117, in GradientDescentOptimizer.compute_grad(objective_fn, args, kwargs, grad_fn)
     99 r"""Compute gradient of the objective function at the given point and return it along with
    100 the objective function forward pass (if available).
    101 
   (...)
    114     will not be evaluted and instead ``None`` will be returned.
...
    464         )
    465 else:
    466     candidate = op_batch_size

ValueError: The batch sizes of the quantum script operations do not match, they include 5 and 1.

The issue now is that the optimizer does not broadcast the weights so that they apply across all of the training samples in x.
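
For reference, a quick check of the shapes involved (printed from the variables defined above):

print(x.shape)        # (2, 5) -> each encoding gate receives a batch of 5 angles
print(weights.shape)  # (2,)   -> the weights carry no batch dimension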

Any ideas how to fix this? I have looked through the demos on your website but couldn't find one that matches this workflow.

Hey @Zohim_Chandani1! Welcome back to the forum :smile:.

The mean_squared_error function (source code here) in sklearn does call numpy, but it's not PennyLane's wrapped version of numpy, so PennyLane doesn't know how to differentiate through it :sweat_smile:.

Also, your cost function is returning two different quantities. Which one do you want to differentiate? Your code works for me if I change your cost function to the following:

def cost_acc(params, data, y):
    exp_vals = qnn(data, params)
    cost = np.square(np.subtract(y, exp_vals)).mean() # equivalent to mean_squared_error in sklearn.metrics
    return cost
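
If you still want to track the accuracy, one option (a sketch based on your original yhat line, using a hypothetical accuracy helper) is to compute it outside the differentiated cost:

def accuracy(params, data, y):
    # Threshold the expectation values into +/-1 labels and compare to y.
    exp_vals = qnn(data, params)
    yhat = 2 * (exp_vals >= 0) - 1
    return np.mean(yhat == y)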

Oops! Also make sure to mark your data as non-trainable and to capture the updated weights returned by the optimizer:

x = np.random.rand(n_features, n_samples, requires_grad=False)
y = np.random.choice([-1, 1], size=n_samples, requires_grad=False)
weights = np.random.rand(n_params, requires_grad=True)

for k in range(epochs):
    weights, _, _ = optimizer.step(cost_acc, weights, x, y)
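
For completeness, here's a minimal sketch of the full corrected loop with step_and_cost so you can monitor the cost as you train (the unpacking and print format are just my choices):

optimizer = qml.GradientDescentOptimizer(stepsize=0.1)

for k in range(epochs):
    # step_and_cost returns the updated arguments plus the cost before the step;
    # only weights changes, since x and y are marked requires_grad=False.
    (weights, _, _), cost = optimizer.step_and_cost(cost_acc, weights, x, y)
    print(f"epoch {k}: cost = {cost:.4f}")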