How to design a variational circuit and feature map for quantum-classical learning

Dear All,

I am trying to adapt the results presented in "Quantum Circuit Learning" and "Quantum Extremal Learning" to find the maximum of

f(x) = -sin(10x) + 3 cos(18x) - 8 (x - 1/2)^2 + 5/4

I tried to do this with the following code, but it seems that the expressivity of the feature map and/or the variational layer is not enough to capture the target function.

#Import relevant dependencies

import pennylane as qml
from pennylane import numpy as np
import matplotlib.pyplot as plt

#define the function to fit:

def target_function(x):
    """function to be fitted by the QML model
    Arguments
    ---------
    x: float, the input variable.
    Returns
    -------
    y=f(x): float, the value of the target function at the input point x.
    
    """
    return -np.sin(10*x)+3*np.cos(18*x)-8*(x-1/2)**2+5/4

#define the number of qubits and layers of the variational QC.

n_wires=6
n_layers=4

#define the quantum device
dev = qml.device("default.qubit", wires=n_wires)

#define the feature_map
def feature_map(x):
    """function creating the quantum circuit layer for the feature map
    here we use the feature map suggested in the tips with phi=arcsin(x),
    scaled by the wire index so that each qubit encodes a different frequency.
    Arguments
    ---------
    x: float, input value to be encoded in the feature map.
    """
    # one RX rotation per wire, each wire getting its own multiple of arcsin(x)
    qml.broadcast(qml.RX, wires=range(n_wires), pattern='single',
                  parameters=np.arcsin(x) * np.arange(1, n_wires + 1))

#Create the QCL circuit. Here we use strongly entangling layers as the variational
#block and measure the magnetization <Z> on qubit 0.
@qml.qnode(dev)
def circuit(x, weights):
    """function creating the complete quantum circuit for QML
    and measuring the output
    Arguments
    ---------
    x: float, input value.
    weights: array of shape (n_layers, n_wires, 3), the angles for all
        rotations in the variational layers.
    Returns
    -------
    y: float, expectation value of the chosen observable O (here <Z> on qubit 0).
    """
    feature_map(x)
    
    qml.StronglyEntanglingLayers(weights, wires=range(n_wires))
    
    return qml.expval(qml.PauliZ(0))
#Define a variational model wrapping the QCL circuit. The plan is to later add a bias
#and a multiplicative coefficient to its output (see the question below).
def variational_model(params, x):
    weights = params
    return circuit(x, weights)
#Define the loss function:
def square_loss(labels, predictions):
    loss = 0
    for l, p in zip(labels, predictions):
        loss = loss + (l - p) ** 2

    loss = loss / len(labels)
    return loss
#Define the cost function:

def cost(params, inputs, labels):
    predictions = [variational_model(params,x) for x in inputs]
    return square_loss(labels, predictions)
    
#Generate training and test sets: 
xs=np.linspace(0,1,100)
x_train=xs[0::2]
x_test=xs[1::2]
y_train=np.vectorize(target_function)(x_train)
y_test=np.vectorize(target_function)(x_test)
                  
#Get the desired shape of the weights 
shape= qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_wires)
print(shape)
                  
#initialize the variational parameters with the shape expected by the template

var_init = np.random.uniform(high=2 * np.pi, size=shape, requires_grad=True)
opt = qml.AdamOptimizer(stepsize=0.05)
batch_size = 10
num_train=50
# train the variational classifier
var = var_init
cost_history=[]
for it in range(50):

    # Update the weights by one optimizer step
    # (optionally draw a random mini-batch instead of the full training set)
    # batch_index = np.random.randint(0, num_train, (batch_size,))
    # x_train_batch = x_train[batch_index]
    # y_train_batch = y_train[batch_index]
    var = opt.step(lambda v: cost(v, x_train, y_train), var)

    # Compute predictions on the training set
    predictions_train = [variational_model(var, x) for x in x_train]

    # Compute the loss on the training set and record it
    loss_train = square_loss(y_train, predictions_train)
    cost_history.append(loss_train)
plt.plot(cost_history)
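
For completeness, the held-out test points defined above can then be used to check the fit once training has finished, for example:

# compare the trained model to the target on the held-out points
predictions_test = [variational_model(var, x) for x in x_test]
loss_test = square_loss(y_test, predictions_test)
print(f"final train loss: {cost_history[-1]:.4f}, test loss: {loss_test:.4f}")

plt.figure()
plt.plot(x_test, y_test, label="target")
plt.plot(x_test, predictions_test, label="model")
plt.legend()
plt.show()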

Also, I would like to add a multiplicative parameter ‘a’ and a bias ‘bias’ to my variational model. To that end, I tried to modify the variational_model function, but I do not know how to pass the arguments so that the gradient optimizer will recognize a and bias as parameters that have to be optimized. I tried to specify them as

numpy.array(float,requires_grad=True)

but it did not work.
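
Concretely, the model I have in mind is y(x) = a * <Z(x)> + bias, with a and bias trained together with the circuit weights. A minimal sketch of what I was trying (the names a and bias and the way I pass them to the optimizer are my own guesses, which may be exactly where it goes wrong):

# extra classical parameters, declared as trainable PennyLane-numpy scalars
a = np.array(1.0, requires_grad=True)
bias = np.array(0.0, requires_grad=True)

def variational_model(weights, a, bias, x):
    """rescale and shift the circuit output: y = a * <Z> + bias"""
    return a * circuit(x, weights) + bias

def cost(weights, a, bias, inputs, labels):
    predictions = [variational_model(weights, a, bias, x) for x in inputs]
    return square_loss(labels, predictions)

# all positional arguments with requires_grad=True should be updated together
var, a, bias = opt.step(lambda w, aa, bb: cost(w, aa, bb, x_train, y_train),
                        var, a, bias)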

If you have any advice or suggestions, they are more than welcome.
Thank you for your time and attention.

Hello @DrViPer1995,

Some guides on function fitting, expressivity, and variational layers might help here.


Hey @DrViPer1995! Welcome to the forum :rocket:

… it seems that the expressivity of the feature map and/or variational layer is not enough to capture the target function.

This is a question we get quite often, and unfortunately the answer isn't very satisfying, but it is what it is :sweat_smile:. Many things go into making a machine learning model work well for a specific task: the choice of optimizer, the hyperparameters (learning rate, step size, batch size, etc.), the cost function, and of course the model architecture itself. So it's extremely tough to say why your model isn't performing as well as you feel it should, and it's a tedious task to tweak all of those things until you find the right combination.
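
That said, one concrete lever for expressivity is to repeat the data encoding between variational blocks (data re-uploading): each repetition enlarges the set of frequencies the model can represent, which matters for a target with fast oscillations like yours. A rough sketch of that idea using the feature_map from your post (an illustration, not a tuned solution):

@qml.qnode(dev)
def reuploading_circuit(x, weights):
    """alternate encoding and variational blocks instead of encoding once"""
    for layer_weights in weights:  # weights has shape (n_layers, n_wires, 3)
        feature_map(x)  # re-encode the input before every variational block
        qml.StronglyEntanglingLayers(layer_weights[np.newaxis], wires=range(n_wires))
    return qml.expval(qml.PauliZ(0))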

Also, I would like to add a multiplicative parameter ‘a’ and bias ‘bias’ to my variational model

I’m not sure that I understand. Could you explain what you mean with some math?

Hey @DrViPer1995, just wanted to check in and see if you’ve fixed your issue. If / when it is fixed, we have a new PennyLane survey. Let us know your thoughts about PennyLane so that we can keep bringing you amazing features :sparkles:.