I am trying to explore different optimisation methods with variational classifiers. I have built mine in the style shown here, where you have an optimisation step similar to:
for batch_index in tqdm(X_batches):
    X_batch = X_train[batch_index]  # grab out batches
    y_batch = y_train[batch_index]
    batch_cost = lambda v: cost(v, X_batch, y_batch)
    theta = pennylane_opt.step(batch_cost, theta)
This, until now, has worked great! However, I have tried switching out the optimiser for QNGOptimizer and now get the error:
ValueError: The objective function must either be encoded as a single QNode or a VQECost object for the natural gradient to be automatically computed. Otherwise, metric_tensor_fn must be explicitly provided to the optimizer.
Though the message is very precise, I'm not sure how to fix my code to solve the problem. This isn't a VQE problem, so it would seem I can't create a VQECost object and get a metric_tensor method that way.
Therefore, do I need to restructure my classifier to use only one QNode? The example here uses three, though (if the number of QNodes equals the number of wires), while my code only has two, so I'm not sure how to proceed this way either.
Or, have I misunderstood and I cannot use QNGOptimizer for classification problems?
If anyone has any advice on how to use QNGOptimizer with a variational classifier that would be appreciated, thanks!
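For reference, my understanding of what QNG does under the hood (and why it needs the metric tensor) is a preconditioned update, roughly theta ← theta − eta * g⁺ ∇L, where g is the Fubini–Study metric tensor. Here is a plain NumPy sketch of that update on a toy quadratic cost, just to check my mental model; the function names are mine, not PennyLane's:

```python
import numpy as np

def qng_step(theta, grad_fn, metric_fn, stepsize=0.01):
    """One natural-gradient step: theta <- theta - eta * pinv(g) @ grad."""
    g = metric_fn(theta)    # metric tensor (Fubini-Study / Fisher-like)
    grad = grad_fn(theta)   # ordinary gradient of the cost
    # pseudo-inverse in case the metric tensor is singular
    return theta - stepsize * np.linalg.pinv(g) @ grad

# toy quadratic cost 0.5 * theta^T A theta, with A standing in for the metric
A = np.array([[2.0, 0.0],
              [0.0, 0.5]])
grad_fn = lambda t: A @ t
metric_fn = lambda t: A

theta = np.array([1.0, -1.0])
for _ in range(100):
    theta = qng_step(theta, grad_fn, metric_fn, stepsize=0.1)
# theta converges towards the minimum at the origin
```

If that mental model is right, then presumably what I need is to hand QNGOptimizer a metric_tensor_fn that plays the role of metric_fn above for my circuit, but I don't see how to build one for a cost that wraps the QNode and adds a bias.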
Edit:
Added some code for clarity:
# quantum circuit
@qml.qnode(dev)
def circuit(weights, x=None):
    AngleEmbedding(x, wires=range(n_qubits))
    StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

# variational quantum classifier
def variational_classifier(theta, x=None):
    weights = theta[0]
    bias = theta[1]
    return circuit(weights, x=x) + bias

def cost(theta, X, expectations):
    e_predicted = np.array([variational_classifier(theta, x=x) for x in X])
    loss = np.mean((e_predicted - expectations)**2)
    return loss