Hi.
Recently, I implemented quantum variational learning with two different optimizers: a gradient-based one (Adam) and a gradient-free one (Nelder-Mead).
Here is some sample code.
It is not VQE but a regression task; I believe the implementation can be extended to VQE.
First, here is the gradient-based optimization with Adam.
import pennylane as qml
from pennylane import numpy as np
from matplotlib import pyplot as plt
num_of_data = 256
X = np.random.uniform(high=2 * np.pi, size=(num_of_data,1))
Y = np.sin(X[:,0])
######## parameters #############
n_qubits = 2  # number of qubits
n_layers = 2  # number of variational layers
dev = qml.device("default.qubit", wires=n_qubits, shots=None) # define a device
#dev = qml.device("lightning.qubit", wires=n_qubits, shots=None) # define a device
# Note: lightning.qubit is faster, but "pip install PennyLane-Lightning" is required.
# Initial circuit parameters
var_init = np.random.uniform(high=2 * np.pi, size=(n_layers, n_qubits, 3))
# Definition of the QNode
@qml.qnode(dev, diff_method='adjoint')
#@qml.qnode(dev, diff_method='adjoint', mutable=False)
# Note: Setting mutable=False speeds things up, but the circuit structure must then be fixed.
# Data encoding and variational ansatz
def quantum_neural_net(var, x):
    qml.templates.AngleEmbedding(x, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(var, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))
def square_loss(desired, predictions):
    loss = 0
    for l, p in zip(desired, predictions):
        loss = loss + (l - p) ** 2
    loss = loss / len(desired)
    return loss
def cost(var, features, desired):
    preds = [quantum_neural_net(var, x) for x in features]
    return square_loss(desired, preds)
opt = qml.AdamOptimizer(0.1)
import time
hist_cost = []
var = var_init
for it in range(10):
    t1 = time.time()
    var, _cost = opt.step_and_cost(lambda v: cost(v, X, Y), var)
    t2 = time.time()
    elapsed_time = t2 - t1
    print("Iter:" + str(it) + ", cost=" + str(_cost.numpy()))
    print(f"Time:{elapsed_time}")
    hist_cost.append(_cost)
plt.plot(10*np.log10(hist_cost),'o-')
Y_pred = [quantum_neural_net(var, x) for x in X]
plt.plot(X[:,0],Y_pred,'o')
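As an optional addition (not part of my original script), evaluating the trained circuit on an evenly spaced grid makes the fit easier to compare against sin(x):
# Optional addition: evaluate the trained circuit on an evenly spaced grid
X_grid = np.linspace(0, 2 * np.pi, 50).reshape(-1, 1)
Y_grid = [quantum_neural_net(var, x) for x in X_grid]
plt.figure()
plt.plot(X_grid[:, 0], np.sin(X_grid[:, 0]), label="target sin(x)")
plt.plot(X_grid[:, 0], Y_grid, 'o-', label="QNN prediction")
plt.legend()
plt.show()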
Second, here is the gradient-free optimization with Nelder-Mead.
The code is almost the same as the gradient-based one.
I just reshaped the parameters, since scipy.optimize.minimize only accepts a 1-D array.
# Initial circuit parameters
var_init = np.random.uniform(high=2 * np.pi, size=(n_layers*n_qubits*3)) # one-dimensional array
@qml.qnode(dev, diff_method='adjoint')
def quantum_neural_net(var, x):
    var_3d_array = np.reshape(var, (n_layers, n_qubits, 3))
    qml.templates.AngleEmbedding(x, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(var_3d_array, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))
from scipy.optimize import minimize
hist_cost = []
var = var_init
count = 0
def cbf(Xi):
    global count
    global hist_cost
    count += 1
    cost_now = cost(Xi, X, Y)
    hist_cost.append(cost_now)
    print('iter = ' + str(count) + ' | cost = ' + str(cost_now))
t1 = time.time()
result = minimize(fun=cost, x0=var_init, args=(X, Y), method='Nelder-Mead', callback=cbf, options={"maxiter": 200})
t2 = time.time()
elapsed_time = t2 - t1
print(f"Time:{elapsed_time}")
hist_cost.append(result.fun)  # record the final cost reported by the optimizer
plt.plot(10*np.log10(hist_cost),'o-')
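By the way, since scipy.optimize.minimize exposes several gradient-free methods behind the same interface, trying another one only needs a different method string; for example (COBYLA is just an arbitrary pick here, not something I benchmarked):
# Same cost and initial parameters, different gradient-free method
result_cobyla = minimize(fun=cost, x0=var_init, args=(X, Y), method='COBYLA', options={"maxiter": 200})
print('COBYLA final cost = ' + str(result_cobyla.fun))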
In my experience, as the number of parameters increases, the gradient-based method becomes faster than the gradient-free one, and it reaches a solution in fewer iterations.
One likely reason is that adjoint differentiation returns the gradients of all parameters at roughly the cost of a couple of extra simulation passes, whereas Nelder-Mead has to update a simplex whose number of vertices grows with the parameter count.
For VQE, the Quantum Natural Gradient would probably be effective for speeding things up.
However, I haven’t tried it yet…
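Just to sketch what that might look like, here is a minimal, untested example with qml.QNGOptimizer, assuming a placeholder two-qubit Hamiltonian that I made up purely for illustration:
# Minimal VQE-style sketch with the Quantum Natural Gradient optimizer (untested).
# The Hamiltonian below is a placeholder chosen only for illustration.
H = qml.Hamiltonian([1.0, 0.5], [qml.PauliZ(0) @ qml.PauliZ(1), qml.PauliX(0)])
@qml.qnode(dev)
def vqe_cost(var):
    qml.templates.StronglyEntanglingLayers(var, wires=range(n_qubits))
    return qml.expval(H)
shape = qml.templates.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
var_vqe = np.random.uniform(high=2 * np.pi, size=shape, requires_grad=True)
opt_qng = qml.QNGOptimizer(stepsize=0.05, lam=0.001)  # lam regularizes the metric tensor
for it in range(20):
    var_vqe, energy = opt_qng.step_and_cost(vqe_cost, var_vqe)
    print("Iter:" + str(it) + ", energy=" + str(energy))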