TypeError: QNode must include an argument with name inputs for inputting data

Hello,

Version 1 does not throw this error:

# Define the Quanvolutional Neural Network
class QuanvolutionalNeuralNetwork(nn.Module):
    def __init__(self, n_qubits, n_layers, circuit, dev_train, gpu_device, patch_size, img_size_single, num_classes):
        super().__init__()
        self.n_qubits=n_qubits
        self.patch_size=patch_size
        self.img_size_single=img_size_single
        self.n_layers=n_layers
        self.device=gpu_device 
        self.circuit=circuit 
        self.num_classes=num_classes
        self.dev_train=dev_train
        
        
        self.fc1 = nn.Linear(self.n_qubits, self.num_classes)
        self.q_params = nn.Parameter(torch.Tensor(self.n_qubits, self.n_qubits))
        self.lr1 = nn.LeakyReLU(0.1)
        nn.init.xavier_uniform_(self.q_params)
        self.pqc = qml.QNode(circuit, self.dev_train, interface='torch')
        Q_Plot(self.pqc,self.n_qubits,self.n_layers)
...

In version two, I just added the TorchLayer line self.ql1 = qml.qnn.TorchLayer(self.pqc, weight_shapes) (together with the weight_shapes definition), so the code is now:

# Define the Quanvolutional Neural Network
class QuanvolutionalNeuralNetwork(nn.Module):
    def __init__(self, n_qubits, n_layers, circuit, dev_train, gpu_device, patch_size, img_size_single, num_classes):
        super().__init__()
        self.n_qubits=n_qubits
        self.patch_size=patch_size
        self.img_size_single=img_size_single
        self.n_layers=n_layers
        self.device=gpu_device 
        self.circuit=circuit 
        self.num_classes=num_classes
        self.dev_train=dev_train
        
        
        self.fc1 = nn.Linear(self.n_qubits, self.num_classes)
        self.q_params = nn.Parameter(torch.Tensor(self.n_qubits, self.n_qubits))
        self.lr1 = nn.LeakyReLU(0.1)
        nn.init.xavier_uniform_(self.q_params)
        self.pqc = qml.QNode(circuit, self.dev_train, interface='torch')
        weight_shapes = {"weights": (n_qubits, n_qubits)}
        self.ql1 = qml.qnn.TorchLayer(self.pqc, weight_shapes)
        Q_Plot(self.pqc,self.n_qubits,self.n_layers)

The exception is:
TypeError: QNode must include an argument with name inputs for inputting data

Two questions:
I have been training until now without explicitly turning the QNode into a TorchLayer, but I still believe that the parameters are trainable. Am I correct in assuming so?

Why does this line throw an error?

Thanks.

Hello @Solomon! Thanks for sharing your code. I’ll take a look and I’ll come back soon with an answer.

Hello @Solomon !

I have an answer for your second question. Would you mind sharing a bit more of your code? You can check out this video with some tips on how to write a post with code, so we can get all the information needed to help you. :slight_smile:

I am asking because the QNode arguments for qnn.TorchLayer must satisfy some conditions: the signature of the QNode should contain an argument named inputs for the input data, with all other arguments treated as internal weights. You can find more information about the other requirements here.

As the error suggests, the QNode self.pqc does not have an argument named inputs.
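Just to illustrate the expected signature, here is a minimal sketch (the circuit and templates are generic placeholders, not your Q_encoding_circuit_A):

import pennylane as qml

dev = qml.device("default.qubit", wires=2)

# The first argument must be literally named `inputs`; every other
# argument (here `weights`) is treated as a trainable weight tensor.
@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(2))
    qml.BasicEntanglerLayers(weights, wires=range(2))
    return [qml.expval(qml.PauliZ(i)) for i in range(2)]

weight_shapes = {"weights": (3, 2)}  # (n_layers, n_wires) for BasicEntanglerLayers
qlayer = qml.qnn.TorchLayer(circuit, weight_shapes)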

Thank you so much for your suggestion. I amended the code to reflect the change.

def Q_encoding_circuit_A(inputs, q_weights, n_qubits, q_depth):
   ...
    exp_vals = [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]
    return exp_vals

And this is the QNN:

self.pqc = qml.QNode(circuit, self.dev_train, interface='torch')
weight_shapes = {"weights": (n_qubits, n_qubits)}
self.ql1 = qml.qnn.TorchLayer(self.pqc, weight_shapes)

However, the API now raises:
ValueError: Must specify a shape for every non-input parameter in the QNode

My non-input parameters are q_weights, n_qubits, and q_depth. Do I really need to indicate a shape for each of them, not just q_weights? That does not make much sense to me.

Is this the correct declaration?
weight_shapes = {"q_weights": (n_qubits, n_qubits),'n_qubits':(1),'q_depth':(1)}

Thanks

Also, if I am using TorchLayer, do I still need to allocate nn.Parameter objects myself, or are they created automatically?
self.q_params = nn.Parameter(0.001 * torch.randn(self.n_qubits,self.n_qubits))

Thanks

Hello @Solomon ! Thank you for being patient!

Your function Q_encoding_circuit_A seems to fulfill the requirements. :slight_smile:

And yes, all the other non-input parameters are treated as weights. As the error suggests, you should specify all of them in weight_shapes, and the way you declared them looks fine. I hope it works now! :slight_smile:

Also, if I am using TorchLayer, do I still need to allocate nn.Parameter objects myself, or are they created automatically?
self.q_params = nn.Parameter(0.001 * torch.randn(self.n_qubits,self.n_qubits))

Sorry, I didn’t understand your question. Do you want to know how the weights are initialized? If so, you can specify a method using the init_method argument in qnn.TorchLayer. It can be a torch.nn.init function for initializing all QNode weights, or a dictionary specifying the callable/value used for each weight. If not specified, weights are randomly initialized from the uniform distribution over [0, 2π]. Does that answer your question?
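For example, here is a short sketch of the two init_method options (the QNode is just a generic placeholder):

import torch
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(2))
    qml.BasicEntanglerLayers(weights, wires=range(2))
    return [qml.expval(qml.PauliZ(i)) for i in range(2)]

weight_shapes = {"weights": (3, 2)}

# A single torch.nn.init function applied to every QNode weight ...
qlayer = qml.qnn.TorchLayer(circuit, weight_shapes, init_method=torch.nn.init.normal_)

# ... or a dictionary giving a callable (or a fixed tensor) per weight name.
qlayer = qml.qnn.TorchLayer(circuit, weight_shapes, init_method={"weights": torch.nn.init.uniform_})

Also, as far as I can tell, TorchLayer registers the QNode weights as torch.nn.Parameter objects internally, so you should not need a separate nn.Parameter like your self.q_params for them.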

Hello and thanks for your response. Here I am trying to understand how to automatically determine the shape of the weights parameter of the TorchLayer class:

# This notebook shows how to allocate the weights parameter of qml.qnn.TorchLayer. 
# Failing to do so correctly may throw the notorious ValueError: Must specify a shape for every non-input parameter in the QNode.

%reset -f 
import numpy as np
import pennylane as qml
import torch
import sklearn.datasets
import sklearn.metrics
from pennylane.ops.qubit import CNOT
import matplotlib.pyplot as plt
from qugel.qgates import * 

n_qubits = 2
# Generate random input features for the quantum circuit
inp_arr = torch.tensor(np.pi * np.random.randn(n_qubits))
n_layers=2
w_dim=3

dev4 = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev4, interface='torch')
def qnode001(inputs, weights):
    # print("Shapes: inputs={}, params_arr={}, q_bits={}, q_depth={}".format(inputs.shape, weights.shape, n_qubits, n_layers))
    qml.templates.AmplitudeEmbedding(inputs, wires=[i for i in range(n_qubits)], normalize=True, pad_with=4)
    qml.templates.StronglyEntanglingLayers(weights, wires=[i for i in range(n_qubits)], ranges=None, imprimitive=CNOT)
    
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

# @qml.qnode(dev4)
# def CONVCircuit(inputs, weights):    
#     """
#     Args:
#         inputs: the image to encode.
#         params: phi_pqc_array, the parameters of the PQC
#         q_bits: number of qubits
#         q_depth: number of layers in the quantum circuit
#     """
#     wires=n_qubits
#     num_rep = int(inputs.shape[0] / n_qubits)    
#     # print("Shapes: inputs={}, params_arr={}, q_bits={}, q_depth={}".format(inputs.shape, weights.shape, n_qubits, n_layers))
#     # qml.AmplitudeEmbedding(features=inputs, wires=range(0, q_bits), normalize=True, pad_with=params_arr.flatten().shape[0])    
#     Q_encoding_block(inputs, n_qubits)    
#     # Q_quanvol_block_A(weights, n_qubits, n_layers)
#     exp_vals = [qml.expval(qml.PauliZ(position)) for position in range(n_qubits)]
#     # print("Patch:{}, Measure:{}, Reps:{}".format(inputs.shape, len(exp_vals), num_rep))
#     return exp_vals

class QNN(torch.nn.Module):
    def __init__(self, circ):
        super(QNN, self).__init__()
        weight_shapes = {"weights": (1, n_qubits, 3)}
        # print (weight_shapes.get("weights"))
        self.pqc=circ
        self.qlayer = qml.qnn.TorchLayer(self.pqc, weight_shapes)
        # Draw the quantum circuit
        fig, ax = qml.draw_mpl(self.pqc, expansion_strategy='device')(inp_arr, torch.zeros(n_layers, n_qubits, w_dim))
        plt.show()
        fig.show()
        self.clayer2 = torch.nn.Linear(2, 2)

    def forward(self, x):
        # print (x.shape)
        output = self.qlayer(x)
        output = self.clayer2(output)

        return output

def print_network(model):
    """Print out the network information."""
    num_params = 0
    for p in model.parameters():
        num_params += p.numel()
    # print(model)    
    print("The number of parameters: {}".format(num_params))


def Q_count_parameters(qnn):
    # print(dict(qnn.named_parameters()))
    num_params = 0
    for name, param in qnn.named_parameters():
        param.requires_grad = True
        print(name, param.data)
    print (qnn)            
    return sum(p.numel() for p in qnn.parameters() if p.requires_grad)


model = QNN(qnode001)
Q_count_parameters(model)



# Data
samples = 500
x, y = sklearn.datasets.make_moons(samples)
y_hot = np.zeros((samples, 2))
y_hot[np.arange(samples), y] = 1

X = torch.tensor(x).float()
Y = torch.tensor(y_hot).float()

# Validation data
val_samples = 100
val_x, val_y = sklearn.datasets.make_moons(val_samples)
val_y_hot = np.zeros((val_samples, 2))
val_y_hot[np.arange(val_samples), val_y] = 1

val_X = torch.tensor(val_x).float()
val_Y = torch.tensor(val_y_hot).float()

# Optimizer and loss function
# opt = torch.optim.Adam(model.parameters(), lr=0.3)
opt = torch.optim.Adagrad(model.parameters(), lr=0.3)
loss = torch.nn.L1Loss()

# Training parameters
epochs = 15
batch_size = 64
batches = samples // batch_size

# Data loader
data_loader = torch.utils.data.DataLoader(list(zip(X, Y)), batch_size=batch_size, shuffle=True, drop_last=True)

# Lists for storing losses and accuracies
train_losses = []
val_losses = []
train_accuracies = []
val_accuracies = []

from tqdm import tqdm 

for epoch in tqdm(range(epochs)):
    running_loss = 0
    correct = 0
    total = 0

    for x, y in data_loader:
        opt.zero_grad()
        outputs = model(x)
        loss_evaluated = loss(outputs, y)
        loss_evaluated.backward()
        opt.step()
        running_loss += loss_evaluated.item()
        _, predicted = torch.max(outputs.data, 1)
        total += y.size(0)
        correct += (predicted == y.argmax(dim=1)).sum().item()
    avg_loss = running_loss / batches
    accuracy = 100 * correct / total

    # Validation
    val_outputs = model(val_X)
    val_outputs = val_outputs.view(val_samples, -1)  # Reshape val_outputs
    val_loss = loss(val_outputs, val_Y)
    val_predicted = torch.max(val_outputs.data, 1)[1]
    val_accuracy = 100 * (val_predicted == val_Y.argmax(dim=1)).sum().item() / val_samples

    # Store losses and accuracies
    train_losses.append(avg_loss)
    val_losses.append(val_loss.item())
    train_accuracies.append(accuracy)
    val_accuracies.append(val_accuracy)

    print("Epoch {}: Loss = {:.4f}, Accuracy = {:.2f}%, Val Loss = {:.4f}, Val Accuracy = {:.2f}%".format(
        epoch + 1, avg_loss, accuracy, val_loss.item(), val_accuracy))

# Calculate confusion matrix
val_predicted_labels = val_predicted.numpy()
val_true_labels = val_Y.argmax(dim=1).numpy()
confusion_matrix = sklearn.metrics.confusion_matrix(val_true_labels, val_predicted_labels, normalize='true')

# Plot confusion matrix
plt.figure(figsize=(6, 4))
plt.imshow(confusion_matrix, cmap='Blues')
plt.title('Confusion Matrix')
plt.colorbar()
plt.xlabel('Predicted Labels')
plt.ylabel('True Labels')
plt.xticks([0, 1])
plt.yticks([0, 1])

# Display percentages inside the matrix
thresh = confusion_matrix.max() / 2
for i in range(2):
    for j in range(2):
        plt.text(j, i, f'{confusion_matrix[i, j]*100:.2f}%', ha="center", va="center", color="white" if confusion_matrix[i, j] > thresh else "black")

plt.show()

# Plotting the losses and accuracies
epochs_range = range(1, epochs + 1)

plt.figure(figsize=(10, 4))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_losses, label='Training Loss')
plt.plot(epochs_range, val_losses, label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_accuracies, label='Training Accuracy')
plt.plot(epochs_range, val_accuracies, label='Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy (%)')
plt.legend()

plt.tight_layout()
plt.show()

This is my output:

(1, 2, 2)
qlayer.weights tensor([[[6.0415, 0.5459],
         [2.0931, 4.1954]]])
clayer2.weight tensor([[ 0.7059, -0.5791],
        [-0.2732,  0.4173]])
clayer2.bias tensor([-0.5260,  0.1105])
QNN(
  (qlayer): <Quantum Torch Layer: func=qnode001>
  (clayer2): Linear(in_features=2, out_features=2, bias=True)
)
Shapes: inputs=torch.Size([2]), params_arr=torch.Size([1, 2, 2]), q_bits=2, q_depth=2

But there is an error:

ValueError: Weights tensor must have third dimension of length 3; got 2

I know where it happens:

weight_shapes = {"weights": (1, n_qubits, n_layers)}

If I change n_layers to 3, the error disappears. Where did the number 3 come from? Can you explain how I could have known that in advance?

The correctly running notebook is here: Qugel/qugel_007_simple_qnn.ipynb at master · BoltzmannEntropy/Qugel · GitHub

Thanks,

Hello @Solomon !

Which function call returned the ValueError?

I see that you’re using the template StronglyEntanglingLayers to build up your QNode function qnode001.

According to the documentation, the parameter weights should be a tensor of shape (L, M, 3), where L is the number of layers, M is the number of wires, and the 3 at the end comes from the single-qubit rotation operations that compose the circuit. If you take a look at this picture I think it will become clear:

A similar example is also presented here.

Finally, if you want to create, for example, a randomly initialized weights tensor, you can do the following:

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=2)
weights = np.random.random(size=shape)
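
The same helper can also be used to fill in weight_shapes when building the TorchLayer, for example (reusing the qnode001 from your post):

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=2)  # (2, 2, 3)
weight_shapes = {"weights": shape}
qlayer = qml.qnn.TorchLayer(qnode001, weight_shapes)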

Does that help? :slight_smile:

Thanks Ludmila. You stated that “the 3 at the end comes from the single-qubit rotation operations that compose the circuit”, but I am afraid I still do not get the relationship or how 3 was determined.

Can anyone elaborate on this? Why do I have to specify 3 as the third dimension for the built-in StronglyEntanglingLayers, and how do I determine this dimension for my own entangling layer? Thanks.

Hello @Solomon !

The basic idea is that, in each layer, a general single-qubit rotation is applied to each wire. Therefore, the weights tensor must have a trailing dimension that is always equal to 3, because that rotation (qml.Rot) depends on 3 parameters, its three Euler angles.
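
If it helps to see it concretely, the rotation in question is qml.Rot, which takes exactly three Euler angles per call (a small standalone sketch):

import pennylane as qml

dev = qml.device("default.qubit", wires=1)

# qml.Rot is the general single-qubit rotation used inside
# StronglyEntanglingLayers; its three angles are why the last
# dimension of the weights tensor is 3.
@qml.qnode(dev)
def rot_example(angles):
    qml.Rot(angles[0], angles[1], angles[2], wires=0)
    return qml.expval(qml.PauliZ(0))

print(rot_example([0.1, 0.2, 0.3]))

In the same spirit, for your own entangling layer the corresponding dimension would simply be the number of parameters your chosen single-qubit gate takes per wire per layer.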

At this point, if you are still in doubt about the details of StronglyEntanglingLayers, I strongly recommend taking a look at the documentation and perhaps the paper it is based on. After that, if you still have doubts, we can discuss a bit more. :slight_smile:

And thanks for sending us your question! Have a nice day! :slight_smile:
