 # Questions about Ansatz and Optimization in VQE

Hi,

I was trying to calculate the ground-state energy of an arbitrary molecule, and I have tried applying UCCSD as the ansatz and the AdagradOptimizer for the optimization.

However, I want to use different ansätze for comparison. Do you support UCCS and UCCD? I checked the PennyLane package and found SingleExcitationUnitary and DoubleExcitationUnitary, but I don't know how to apply them.

Hi @ChrisW918, and welcome to the forum! We have various built-in optimizers, but you can also use any SciPy optimizer with the default autograd NumPy interface.

If you choose a different interface, like TensorFlow, you can use any of that interface's optimizers.

It doesn't look like we have UCCS or UCCD in our template repository yet, but we're always looking for new templates to add. You can either create a feature-request issue on the PennyLane repo or try adding it yourself.

Hi.

Here is some sample code. It does regression rather than VQE, but I believe the implementation can be extended to VQE.

```python
import pennylane as qml
from pennylane import numpy as np
from matplotlib import pyplot as plt
import time

num_of_data = 256
X = np.random.uniform(high=2 * np.pi, size=(num_of_data, 1))
Y = np.sin(X[:, 0])

########  parameters #############
n_qubits = 2  # number of qubits
n_layers = 2  # number of layers

# Definition of a device
dev = qml.device("default.qubit", wires=n_qubits, shots=None)
# dev = qml.device("lightning.qubit", wires=n_qubits, shots=None)
# Note: lightning.qubit is faster, but "pip install PennyLane-Lightning" is required.
# Note: setting the mutable option to False gives a speed-up, but the circuit
# structure must then be fixed.

# Initial circuit parameters
var_init = np.random.uniform(high=2 * np.pi, size=(n_layers, n_qubits, 3))

# Data encoding and variational ansatz
@qml.qnode(dev)
def quantum_neural_net(var, x):
    qml.templates.AngleEmbedding(x, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(var, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def square_loss(desired, predictions):
    loss = 0
    for l, p in zip(desired, predictions):
        loss = loss + (l - p) ** 2
    loss = loss / len(desired)
    return loss

def cost(var, features, desired):
    preds = [quantum_neural_net(var, x) for x in features]
    return square_loss(desired, preds)

opt = qml.AdagradOptimizer(stepsize=0.1)

hist_cost = []
var = var_init
for it in range(10):
    t1 = time.time()
    var, _cost = opt.step_and_cost(lambda v: cost(v, X, Y), var)
    t2 = time.time()
    elapsed_time = t2 - t1
    print("Iter: " + str(it) + ", cost = " + str(_cost))
    print(f"Time: {elapsed_time}")
    hist_cost.append(_cost)

plt.plot(10 * np.log10(hist_cost), 'o-')
Y_pred = [quantum_neural_net(var, x) for x in X]
plt.plot(X[:, 0], Y_pred, 'o')
```

The following code is almost the same as the gradient-based one. I just reshaped the parameters, since scipy.optimize.minimize only accepts a 1-d array.

```python
from scipy.optimize import minimize
import time

# Initial circuit parameters
var_init = np.random.uniform(high=2 * np.pi, size=(n_layers * n_qubits * 3))  # one-dimensional array

@qml.qnode(dev)
def quantum_neural_net(var, x):
    var_3d_array = np.reshape(var, (n_layers, n_qubits, 3))
    qml.templates.AngleEmbedding(x, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(var_3d_array, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

hist_cost = []

count = 0
def cbf(Xi):
    # Callback: record the cost after each iteration
    global count
    count += 1
    cost_now = cost(Xi, X, Y)
    hist_cost.append(cost_now)
    print('iter = ' + str(count) + ' | cost = ' + str(cost_now))

t1 = time.time()
result = minimize(fun=cost, x0=var_init, args=(X, Y), method='Nelder-Mead',
                  callback=cbf, options={"maxiter": 200})
t2 = time.time()
elapsed_time = t2 - t1
print(f"Time: {elapsed_time}")
hist_cost.append(result.fun)

plt.plot(10 * np.log10(hist_cost), 'o-')
```

In my experience, as the number of parameters increases, gradient-based methods become faster than gradient-free ones: they find the solution in fewer iterations.

For VQE, the Quantum Natural Gradient could be effective for speeding things up.
However, I haven't tried it yet…

Thanks for sharing @Kuma-quant!