Constant Loss and Accuracy in QCNN

Hello! I am trying to build a QCNN in PennyLane to classify quantum states. However, I am having some problems during training: I have never really worked with ML before, so I would like some advice. Specifically, during the training phase my loss and accuracy stay constant.
I have already checked that my quantum circuit does depend on the parameters (a sketch of that check is after the training output below).
I am sharing the relevant training code with you. I am not sure the data-generation and qcnn code would be helpful here, so I am not posting them.

import pennylane as qml
from pennylane import numpy as np  # PennyLane's wrapped NumPy, so arrays can carry requires_grad
from tqdm import tqdm

x_train = np.array(x_train, requires_grad=False)
y_train = np.array(y_train, requires_grad=False)

# Initialize the parameters
np.random.seed(42)
params = np.random.rand(40, requires_grad=True)
iterations = 50

# Cost function
def cost(parameters, x, y):
    # Threshold each circuit output to ±1, then take the mean squared error
    predictions = [np.sign(qcnn(parameters, xi)) for xi in x]
    return np.mean((predictions - y) ** 2)

# Accuracy: fraction of predictions matching the labels
def accuracy(labels, predictions):
    correct = 0
    for l, p in zip(labels, predictions):
        if abs(l - p) < 1e-5:
            correct = correct + 1
    return correct / len(labels)

# Parameters optimization
opt = qml.NesterovMomentumOptimizer(0.5)

for epoch in tqdm(range(iterations)):
    params, _, _ = opt.step(cost, params, x_train, y_train)

    if epoch % 10 == 0:
        # Compute accuracy
        predictions = [np.sign(qcnn(params, xi)) for xi in x_train]
        acc = accuracy(y_train, predictions)
        # Compute loss
        loss = cost(params, x_train, y_train)
        print(f"Epoch {epoch}, Loss: {loss}, Accuracy: {acc}")

The code runs without any errors, but it prints:

  2%|▏         | 1/50 [00:02<02:03,  2.53s/it]
Epoch 0, Loss: 2.6666666666666665, Accuracy: 0.3333333333333333
 22%|██▏       | 11/50 [00:10<00:41,  1.06s/it]
Epoch 10, Loss: 2.6666666666666665, Accuracy: 0.3333333333333333
 42%|████▏     | 21/50 [00:18<00:27,  1.04it/s]
Epoch 20, Loss: 2.6666666666666665, Accuracy: 0.3333333333333333
 62%|██████▏   | 31/50 [00:26<00:19,  1.00s/it]
Epoch 30, Loss: 2.6666666666666665, Accuracy: 0.3333333333333333
 82%|████████▏ | 41/50 [00:34<00:09,  1.02s/it]
Epoch 40, Loss: 2.6666666666666665, Accuracy: 0.3333333333333333
100%|██████████| 50/50 [00:41<00:00,  1.21it/s]
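
For completeness, this is roughly how I checked that the circuit depends on the parameters (just a sketch; it prints the gradient of the raw circuit output at one training point):

grad_fn = qml.grad(qcnn, argnum=0)
print(grad_fn(params, x_train[0]))  # the entries are not all zero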

Thank you.

I think I have solved it by defining the cost function as indicated HERE, that is:

def square_loss(labels, predictions):
    loss = 0
    for l, p in zip(labels, predictions):
        loss = loss + (l - p) ** 2
    loss = loss / len(labels)
    return loss

def accuracy(labels, predictions):
    correct = 0
    for l, p in zip(labels, predictions):
        if abs(l - p) < 1e-5:
            correct = correct + 1
    return correct / len(labels)

def variational_classifier(weights, x):
    return qcnn(weights, x) 

def cost(weights, X, Y):
    predictions = [variational_classifier(weights, x) for x in X]
    return square_loss(Y, predictions)
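
The rest of the training loop is unchanged; the only difference is that np.sign now appears only when computing the accuracy, never inside the cost that gets differentiated (a sketch):

for epoch in range(iterations):
    params, _, _ = opt.step(cost, params, x_train, y_train)
    predictions = [np.sign(variational_classifier(params, x)) for x in x_train]
    print(f"Epoch {epoch}, Loss: {cost(params, x_train, y_train)}, Accuracy: {accuracy(y_train, predictions)}")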

Can someone explain to me why the method I used before didn’t work? My guess is that np.sign is piecewise constant, so the gradient of the cost is zero almost everywhere, but I would like to confirm that.

Hey @kBoltzmann, welcome to the forum!

Can you show me what qcnn is?

Hello! qcnn is a Quantum Convolutional Neural Network, inspired by a couple of articles I have read (1, and 2: "Quantum phase detection generalization from marginal quantum neural network models").
As I said in a reply, I no longer have the problem of a constant accuracy and cost function. However, my model is still not training very well: the cost function stays too large and the accuracy doesn't go above 90%. I am not sure whether it's because of the dataset or something else.
Here is the code: I hope it’s clear.

dev = qml.device("default.qubit", wires=11)  # assuming default.qubit; it supports the mid-circuit measurements below

@qml.qnode(dev)
def qcnn(theta, x):
    n_qubits = 11
    
    # Encoding (x must be a normalized vector of length 2**n_qubits)
    qml.AmplitudeEmbedding(features=x, wires=range(n_qubits))

    # QCNN
    theta_counter = 0 # counts the number of parameters used

    # Free Rotations
    for i in range(n_qubits):
        qml.RX(theta[theta_counter], wires=i)
        theta_counter += 1

    # First Convolutional Layer
    for i in range(n_qubits//2): 
        qml.IsingYY(theta[theta_counter], wires=[2*i, 2*i+1]) 
        theta_counter += 1
    for i in range(n_qubits - n_qubits//2 - 1):
        qml.IsingYY(theta[theta_counter], wires=[2*i+1, 2*i+2])       
        theta_counter += 1

    # Free Rotations
    for i in range(n_qubits):
        qml.RX(theta[theta_counter], wires=i)
        theta_counter += 1
    
    # First Pooling Layer (CY + measurement + RZ);
    # all conditional rotations in this layer share one parameter
    for i in range(n_qubits//2):
        qml.CY(wires=[2*i+1, 2*i])
        m = qml.measure(2*i+1)
        qml.cond(m, qml.RZ)(theta[theta_counter], 2*i)
    theta_counter += 1
        
    # Second Convolutional Layer
    for i in range(n_qubits//4+1):
        qml.IsingYY(theta[theta_counter], wires=[4*i, 4*i+2])
        theta_counter += 1
    for i in range(n_qubits//2 - n_qubits//4 - 1):
        qml.IsingYY(theta[theta_counter], wires=[4*i+2, 4*i+4]) 
        theta_counter += 1    
    for i in range(n_qubits//2):
        qml.RX(theta[theta_counter], wires=2*i) # Free Rotations
        theta_counter += 1
    
    # Second Pooling Layer (CY + measurement + RZ); again one shared parameter
    for i in range(n_qubits//4+1):
        qml.CY(wires=[4*i+2, 4*i])
        m = qml.measure(4*i+2)
        qml.cond(m, qml.RZ)(theta[theta_counter], 4*i)
    theta_counter += 1
   
    # Fully connected layer (3 qubits are left)
    j = theta_counter
    qml.IsingYY(theta[j], wires=[0, 4])
    qml.IsingXX(theta[j+1], wires=[0, 4])
    qml.IsingZZ(theta[j+2], wires=[0, 4])
    qml.IsingYY(theta[j+3], wires=[4, 8])
    qml.IsingXX(theta[j+4], wires=[4, 8])
    qml.IsingZZ(theta[j+5], wires=[4, 8])
    qml.IsingYY(theta[j+6], wires=[8, 0])
    qml.IsingXX(theta[j+7], wires=[8, 0])
    qml.IsingZZ(theta[j+8], wires=[8, 0])
    theta_counter += 9
    #print('Number of parameters: %d' % theta_counter)
    # In total one needs: 53 parameters for 11 qubits

    return qml.expval(qml.PauliZ(0))
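
For reference, it can be called like this (a quick smoke test; the input is a random normalized 2**11-dimensional vector, since AmplitudeEmbedding expects normalized amplitudes):

x_demo = np.random.rand(2**11, requires_grad=False)
x_demo = x_demo / np.linalg.norm(x_demo)  # normalize for amplitude encoding
theta_demo = np.random.rand(53, requires_grad=True)  # 53 parameters, as counted above
print(qcnn(theta_demo, x_demo))  # a single expectation value in [-1, 1]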

Thanks! Tough to say why your method before wasn’t working. If you post a complete working example that reproduces the cost not changing, then we can dig into that 🙂

> However, my model is still not training very well: the cost function stays too large and the accuracy doesn't go above 90%

Tough to say what’s causing this too 😅! It could be a multitude of things, including your circuit ansatz, choice of optimizer, learning rate / step size, etc. I’d maybe try a few different optimizers before you start tinkering with your circuit ansatz; something like the sketch below.
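
For example (just a sketch, reusing the cost, x_train, and y_train from your posts; the step sizes are arbitrary starting points, and 53 matches your circuit's parameter count):

# Compare a few optimizers on the same cost function
optimizers = {
    "nesterov": qml.NesterovMomentumOptimizer(stepsize=0.5),
    "adam": qml.AdamOptimizer(stepsize=0.01),
    "adagrad": qml.AdagradOptimizer(stepsize=0.1),
}

for name, opt in optimizers.items():
    trial_params = np.random.rand(53, requires_grad=True)
    for _ in range(50):
        trial_params, _, _ = opt.step(cost, trial_params, x_train, y_train)
    print(name, cost(trial_params, x_train, y_train))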

Hope that helps!

I have tried NesterovMomentumOptimizer and GradientDescentOptimizer. Moreover, I’ve tried many learning rates, but apparently the only one that works (and only barely) is 0.5. I have also tried using both the square loss and the binary cross-entropy as the cost function. Since my dataset has dimension 40, I am using a batch size of 5 (I also tried 10, but it doesn’t work any better); my batching loop is sketched below. In the end I managed to reach an accuracy of 0.975 on the dataset of 40, but the cost function remains high (square loss ≈ 0.2), even after 100 epochs or more.
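
Roughly, the mini-batch loop looks like this (a sketch; variable names as in my first post):

batch_size = 5

for epoch in range(100):
    # Draw a random mini-batch from the training set at each step
    batch_index = np.random.randint(0, len(x_train), (batch_size,))
    x_batch = x_train[batch_index]
    y_batch = y_train[batch_index]
    params, _, _ = opt.step(cost, params, x_batch, y_batch)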

100 epochs isn’t all that much; you can try running it for longer and see what happens. Also, data points with a dimension of 40 are quite large! Maybe your network should be bigger… not sure. Tough to say! Keep trying and tinkering with things 🙂
