I was inspired by the discussion here to try implementing quantum natural gradient (QNG) "by hand" on my own circuit, but I ran into some problems.
I want to pass non-differentiable arguments to the main circuit. Other comments I have read suggest this should be done with a keyword argument such as `input_state=None`, which is what I have done here. However, I am having difficulty calling `metric_tensor()` correctly in order to compute the natural gradient. The current version of the code complains that `input_state` is an unexpected keyword argument, and leaving it out doesn't work either.
```python
import pennylane as qml
from pennylane.templates import AmplitudeEmbedding
from pennylane import numpy as np

n_qubits = 2
segments = 4
weights = [np.random.uniform(-np.pi, np.pi) for _ in range((segments + 1) * 5)]
input_state = np.random.rand(5, 2**n_qubits)
targets = np.random.rand(5)

dev = qml.device('default.qubit', wires=n_qubits, shots=5000, analytic=False)

@qml.qnode(dev)
def circuit(weights, input_state=None):
    '''Variational circuit'''
    AmplitudeEmbedding(input_state, wires=[j for j in range(n_qubits)], normalize=True)
    for i in range(segments):
        qml.RZ(weights[0 + 5*i], wires=1)
        qml.CNOT(wires=[0, 1])
        qml.RY(weights[1 + 5*i], wires=0)
        qml.RY(weights[2 + 5*i], wires=1)
        qml.RZ(weights[3 + 5*i], wires=0)
        qml.RZ(weights[4 + 5*i], wires=1)
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

def cost(weights, training_pairs, targets):
    '''Cost function'''
    outputs = np.array([circuit(weights, input_state=pair) for pair in training_pairs])
    loss = np.mean((outputs - targets)**2)
    return loss

quantum_grad = qml.grad(circuit)

def cost_ng(weights, training_pairs, targets):
    '''Natural gradient of the cost function'''
    qnatgrad = np.empty((len(weights),))
    for idx, pair in enumerate(training_pairs):
        outputs = np.array([circuit(weights, input_state=pair) for pair in training_pairs])
        # compute gradient for each input pair with respect to `weights`
        qgrad = quantum_grad(weights, input_state=pair)
        # compute the metric tensor for each input pair with respect to `weights`
        g = circuit.metric_tensor([weights], input_state=pair)[:len(qgrad), :len(qgrad)]
        # compute pseudo-inverse of metric tensor by solving linear algebra problem
        qnatgrad[idx] = np.linalg.solve(g, qgrad)
    # Take the tensordot between the natural gradient and the loss
    loss_ng = np.tensordot(outputs - targets, qnatgrad, axes=1) / len(training_pairs)
    return loss_ng
```
I also don't know whether the final couple of lines of `cost_ng` are working correctly. The sizes and shapes are tripping me up, and it's hard to test them without first getting past the `metric_tensor` error.
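For reference, here is the shape bookkeeping I *think* I'm aiming for, sketched in plain NumPy with random arrays standing in for the QNode outputs, the gradient, and the metric tensor (those stand-ins are my own assumptions, not what PennyLane returns). If this is right, `qnatgrad` should probably be 2-D (one natural-gradient vector per training pair) rather than the 1-D array I allocate above, and the `tensordot` then contracts over the training-pair axis:

```python
import numpy as np

n_weights = 25   # (segments + 1) * 5 in my circuit
n_pairs = 5      # number of training pairs

rng = np.random.default_rng(0)

# dummy stand-ins for the quantities the QNode would produce
outputs = rng.random(n_pairs)   # circuit output per training pair
targets = rng.random(n_pairs)

# one natural-gradient vector per pair, so qnatgrad is 2-D
qnatgrad = np.empty((n_pairs, n_weights))
for idx in range(n_pairs):
    qgrad = rng.random(n_weights)  # stand-in for quantum_grad(...)
    # stand-in for the metric tensor: well-conditioned so solve() succeeds
    g = np.eye(n_weights) + 0.01 * rng.random((n_weights, n_weights))
    # solve g @ x = qgrad rather than forming an explicit (pseudo-)inverse
    qnatgrad[idx] = np.linalg.solve(g, qgrad)

# contract over the training-pair axis: the result has one entry per weight
loss_ng = np.tensordot(outputs - targets, qnatgrad, axes=1) / n_pairs
assert loss_ng.shape == (n_weights,)
```

At least the shapes work out this way; whether this is the correct contraction for the natural gradient of the MSE cost is part of what I'm unsure about.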
Any help is appreciated.