Hi PennyLane community!
I recently implemented a circuit for a Quantum Autoencoder.
Using a classical optimizer like Adam, everything works great, but I would like to use Quantum Natural Gradient Descent, and I am running into some problems trying to implement that.
My circuit is a QNode, which takes the trainable parameters and some input data as arguments and returns an expectation value:
```python
@qml.qnode(dev1)
def circuit(params, data):
    # data encoding and parametrized circuit ...
    return qml.expval(qml.PauliZ(TOTAL_QBITS - 1))
```
To train the model I defined a cost function for a single data sample:
```python
def cost(params, single_sample):
    return (1 - circuit(params, single_sample)) ** 2
```
Now I would like to iterate over all training samples and optimize the cost with the QNGOptimizer. All the examples I found had the data for the QNode hard-coded into the circuit, but of course I want to optimize over a large dataset, and I can't seem to make the QNGOptimizer work with a QNode that takes two arguments.
```python
opt = qml.QNGOptimizer(learning_rate)
for it in range(epochs):
    for j, sample in enumerate(x_train):
        metric_fn = qml.metric_tensor(circuit, approx="block-diag")
        params, _ = opt.step(cost, params, sample, metric_tensor_fn=metric_fn)
    loss = cost(params, sample)
    print(f"Epoch: {it} | Loss: {loss} |")
```
I checked the output of metric_fn for some arbitrary weights and a data sample: it returns a tuple of two metric tensors, one for the weights and one for the input data. This tuple can't be used by opt.step for the optimization.
Any suggestions on how I can fix this?
A second question: is it possible to extend this training to batches?
Thanks!
Greetings
Tom