Nested TorchLayer

Hello,
I am facing a rather complex scenario while trying to build a QCNN with nested elements.
Akin to classical CNNs, where a torch nn.Conv2d can be reused repeatedly inside an nn.Module, my code is as follows:

Assume I have a simple circuit as follows:

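For context, the device and the globals behind this snippet are set up roughly like this (the exact values here are just illustrative):

import pennylane as qml
import torch
from torch import nn

n_qubits = 4   # matches the (4,) output shape noted below
n_layers = 2   # illustrative depth for StronglyEntanglingLayers
dev4 = qml.device("default.qubit", wires=n_qubits)
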
@qml.qnode(dev4)
def CONVCircuit(inputs, weights):
    print("Shapes: inputs={}, weights={}, n_qubits={}, n_layers={}".format(
        inputs.shape, weights.shape, n_qubits, n_layers))
    # embed the (padded, normalized) patch into the amplitudes of n_qubits wires
    qml.templates.AmplitudeEmbedding(inputs, wires=range(n_qubits), normalize=True, pad_with=4)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    exp_vals = [qml.expval(qml.PauliZ(position)) for position in range(n_qubits)]
    return exp_vals  # type=<class 'list'>, shape=(4,)
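
For what it's worth, calling the QNode directly with matching shapes behaves fine, e.g. (sketch):

inputs = torch.rand(2 ** n_qubits)            # one flattened patch
weights = torch.rand(n_layers, n_qubits, 3)   # matches weight_shapes below
print(CONVCircuit(inputs, weights))           # four PauliZ expectation values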

This is the quantum equivalent of the torch Conv2d layer; it uses the circuit above:

class Q2DCNN(nn.Module):
    def __init__(self, circ, patch_size, n_qubits):
        super(Q2DCNN, self).__init__()
        self.patch_size = patch_size
        self.n_qubits = n_qubits
        self.weight_shapes = {"weights": (n_layers, n_qubits, 3)}
        self.pqc = circ
        self.qlayer = qml.qnn.TorchLayer(self.pqc, self.weight_shapes)

    def forward(self, x):
        # print("Q2DCNN start shape:", x.shape)  # torch.Size([16, 16, 4])
        x = self.process_image(x)   # run the circuit over image patches (helper sketched below)
        x = x.permute(0, 3, 1, 2)   # move the qubit/channel axis to position 1
        # print("Q2DCNN final shape:", x.shape)  # torch.Size([16, 16, 4])
        return x
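
process_image is omitted above; roughly, it tiles the input into non-overlapping patch_size x patch_size patches and pushes each flattened patch through self.qlayer. A simplified sketch of the idea (the real version is messier):

def process_image(self, x):
    # hypothetical reconstruction -- assumes a single input channel and
    # that p * p <= 2**n_qubits (AmplitudeEmbedding pads the remainder)
    b, c, h, w = x.shape
    p = self.patch_size
    patches = x.unfold(2, p, p).unfold(3, p, p)      # (b, c, h//p, w//p, p, p)
    patches = patches.contiguous().view(-1, p * p)   # one flattened patch per row
    feats = self.qlayer(patches)                     # (num_patches, n_qubits)
    return feats.view(b, h // p, w // p, self.n_qubits)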

Note that I declared the weights (the heart of the problem here) correctly inside Q2DCNN.

Now my real QCNN uses Q2DCNN several times, like so:

class QNN(torch.nn.Module):
    def __init__(self, circ, patch_size, n_qubits, n_classes=2):
        super(QNN, self).__init__()
        # weight_shapes = {"weights": (n_layers, n_qubits)}
        # self.pqc = circ
        self.patch_size = patch_size
        self.n_qubits = n_qubits
        self.n_classes = n_classes
        self.qconv2d1 = Q2DCNN(circ, patch_size, n_qubits)  # input: torch.Size([1, 1, 64, 64])
        self.qconv2d2 = Q2DCNN(circ, patch_size, n_qubits)  # from H to H/16

        self.weight_shapes = {"weights": (n_layers, n_qubits, 3)}
        self.qlayer2d1 = qml.qnn.TorchLayer(self.qconv2d1, self.weight_shapes)
        self.qlayer2d2 = qml.qnn.TorchLayer(self.qconv2d2, self.weight_shapes)

Of course the line self.qlayer2d1 = qml.qnn.TorchLayer(self.qconv2d1, self.weight_shapes) throws an exception, since TorchLayer does not expect a torch layer (in my case Q2DCNN) but rather a QNode. BUT, how else can I associate a weight tensor INSIDE QNN(torch.nn.Module) with each of my self.qconv2d1 = Q2DCNN(circ, patch_size, n_qubits) layers?
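
My understanding of plain PyTorch is that assigning a submodule as an attribute is already enough for the parent to track its parameters, so I would have expected a bare version like this to suffice (sketch):

class QNN(torch.nn.Module):
    def __init__(self, circ, patch_size, n_qubits, n_classes=2):
        super().__init__()
        # each Q2DCNN builds its own TorchLayer, whose weights should be
        # registered with this parent module automatically
        self.qconv2d1 = Q2DCNN(circ, patch_size, n_qubits)
        self.qconv2d2 = Q2DCNN(circ, patch_size, n_qubits)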

If, on the other hand, I delete these lines (which don't work anyway…):

self.qlayer2d1 = qml.qnn.TorchLayer(self.qconv2d1, self.weight_shapes)
self.qlayer2d2 = qml.qnn.TorchLayer(self.qconv2d2, self.weight_shapes)

Then the following exception is thrown:

Cell In[3], line 189, in QNN.forward(self, x)
    186 def forward(self, x):
    187     # print("QNN start shape:",x.shape)        
    188     # x=torch.tanh(x) * np.pi / 2.0
--> 189     x = self.qconv2d1(x)        
    190     # print("QNN first conv, before stacking:",x.shape) # torch.Size([batch, n_qubits, 16, 16])
    192     f_maps = []  # List to store the feature maps        
...
    140             f"Weights tensor must have second dimension of length {len(wires)}; got {shape[1]}"
    141         )
    143     if shape[2] != 3:
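
The check that fires looks like the weight-shape validation inside StronglyEntanglingLayers; the shape it expects can be queried directly:

import pennylane as qml
# expected weights shape for the template, given depth and wire count
print(qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=4))  # (2, 4, 3)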

How else are custom torch.nn layers supposed to be integrated into another torch.nn module?
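
(With the two TorchLayer lines removed, I would normally inspect what actually gets registered like so; patch_size=4 is just an example value:)

model = QNN(CONVCircuit, patch_size=4, n_qubits=4)
for name, p in model.named_parameters():
    print(name, tuple(p.shape))   # should list e.g. qconv2d1.qlayer.weights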

Thanks.

Please ignore for now… still debugging.

Hey @Solomon!

Please ignore for now… still debugging.

Sure thing! Just respond back when you need our help, or let us know that you solved it! :slight_smile: