Quantum Machine Learning in Feature Hilbert Spaces

I am trying to implement the variational quantum classifier (the explicit approach, Figs. 4 and 5) described in the paper using PennyLane. The circuit architecture is given in the supplementary information.

I have coded everything and it looks okay to me, but it doesn't work: I get "UserWarning: Output seems independent of input".

Could there be an error in how I encode the inputs and the circuit parameters?
@Maria_Schuld

Code snippet:

import sklearn.datasets
import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import AdamOptimizer

# Continuous-variable device: 2 qumodes with a Fock-space cutoff of 3
dev_fock = qml.device("strawberryfields.fock", wires=2, cutoff_dim=3)

def layer(W):
    # One layer of the variational circuit: an entangling beamsplitter followed by
    # displacement, quadratic-phase and cubic-phase gates on each mode
    qml.Beamsplitter(W[0, 0], W[1, 0], wires=[0, 1])
    qml.Displacement(1.0, W[0, 1], wires=0)
    qml.Displacement(1.0, W[1, 1], wires=1)
    qml.QuadraticPhase(W[0, 2], wires=0)
    qml.QuadraticPhase(W[1, 2], wires=1)
    qml.CubicPhase(W[0, 3], wires=0)
    qml.CubicPhase(W[1, 3], wires=1)

@qml.qnode(dev_fock)
def circuit_p0(weights, x):
    # Encode the input into the phases of squeezing gates, then apply the variational layers
    qml.templates.embeddings.SqueezingEmbedding(x, wires=[0, 1], method='phase', c=1.5)
    for W in weights:
        layer(W)
    # Probability of measuring the Fock state |2, 0>
    return qml.expval(qml.FockStateProjector(np.array([2, 0]), wires=[0, 1]))

@qml.qnode(dev_fock)
def circuit_p1(weights, x):
    qml.templates.embeddings.SqueezingEmbedding(x, wires=[0, 1], method='phase', c=1.5)
    for W in weights:
        layer(W)
    # Probability of measuring the Fock state |0, 2>
    return qml.expval(qml.FockStateProjector(np.array([0, 2]), wires=[0, 1]))

def variational_classifier(var, x):
    # Assign label -1 or +1 depending on which projector has the higher (biased) probability
    weights = var[0]
    bias = var[1]
    p0 = circuit_p0(weights, x) + bias
    p1 = circuit_p1(weights, x) + bias
    prob_y0 = p0 / (p0 + p1)
    prob_y1 = p1 / (p0 + p1)
    if prob_y0 > prob_y1:
        ans = -1
    else:
        ans = 1
    return ans

def square_loss(labels, predictions):
    loss = 0
    for l, p in zip(labels, predictions):
        loss = loss + (l - p) ** 2
    loss = loss / len(labels)
    return loss

def accuracy(labels, predictions):
    acc = 0
    for l, p in zip(labels, predictions):
        if abs(l - p) < 1e-5:
            acc = acc + 1
    acc = acc / len(labels)
    return acc

def cost(var, X, Y):
    predictions = [variational_classifier(var, x=x) for x in X]
    return square_loss(Y, predictions)

X, Y = sklearn.datasets.make_moons(n_samples=200, shuffle=True, noise=0.1, random_state=None)
Y = Y * 2 - np.ones(len(Y))  # shift label from {0, 1} to {-1, 1}

np.random.seed(0)
num_qubits = 2   # two qumodes
num_layers = 4
var_init = (np.random.randn(num_layers, num_qubits, 4), 0.0)  # (layer weights, bias)

batch_size = 5
opt = AdamOptimizer(0.01, beta1=0.9, beta2=0.99)
var = var_init
sqloss = np.zeros(50)
for it in range(50):
    batch_index = np.random.randint(0, len(X), (batch_size,))
    X_batch = X[batch_index]
    Y_batch = Y[batch_index]
    var = opt.step(lambda v: cost(v, X_batch, Y_batch), var)
    predictions = [variational_classifier(var, x) for x in X]
    acc = accuracy(Y, predictions)
    sqloss[it] = square_loss(Y, predictions)
    print("Iter: {:5d} | Cost: {:0.7f} | Accuracy: {:0.7f}".format(it + 1, cost(var, X, Y), acc))

Dear Vineesha,

Welcome to the forum and thanks for your question!

When I run your code I get a different error from yours.

I wonder if you could please shorten your code to a minimum working example that only contains the lines necessary to reproduce the error? That would make it a lot easier for me to try and help 🙂

Dear Maria Schuld,

Thank you for your reply.
I was able to fix the errors.
However, the training is not very good for 50 training examples generated using X, Y = sklearn.datasets.make_circles(n_samples=50, shuffle=True, noise=0.1, random_state=None).

I use the Adam optimizer with learning rate 0.005, a square-loss cost function, and batch size 5, roughly as in the sketch below.
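
For reference, a minimal sketch of that setup (it reuses the cost function, num_layers and num_qubits from my first post, with the values described above):

# Sketch of the training setup described above: make_circles data with 50
# samples, Adam with learning rate 0.005, square-loss cost, batch size 5.
X, Y = sklearn.datasets.make_circles(n_samples=50, shuffle=True, noise=0.1, random_state=None)
Y = Y * 2 - np.ones(len(Y))  # shift labels from {0, 1} to {-1, 1}

var = (np.random.randn(num_layers, num_qubits, 4), 0.0)
opt = AdamOptimizer(0.005, beta1=0.9, beta2=0.99)
batch_size = 5

for it in range(50):
    batch_index = np.random.randint(0, len(X), (batch_size,))
    var = opt.step(lambda v: cost(v, X[batch_index], Y[batch_index]), var)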

How can I improve my training performance?

That is the central research question when doing machine learning 🙂

You will have to experiment with different optimization settings and models, and if none of them work you might want to think about theoretical limitations that prevent your model from training. It's hard to give you a general answer here…
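
As a very rough starting point, you could scan a few settings and compare the final cost, for example (just a sketch with arbitrary values, reusing the cost function, data and batch_size from your posts):

# Sketch of a small hyperparameter scan (arbitrary values): compare a few
# learning rates and layer counts by the cost after a fixed number of steps.
for lr in [0.002, 0.01, 0.05]:
    for layers in [2, 4, 6]:
        var = (0.1 * np.random.randn(layers, num_qubits, 4), 0.0)
        opt = AdamOptimizer(lr, beta1=0.9, beta2=0.99)
        for it in range(30):
            batch_index = np.random.randint(0, len(X), (batch_size,))
            var = opt.step(lambda v: cost(v, X[batch_index], Y[batch_index]), var)
        print("lr = {} | layers = {} | cost = {:0.4f}".format(lr, layers, cost(var, X, Y)))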