QNN with varying layer WIDTH

Hey guys!

I’ve been trying to run some experiments on QNNs, but I wanted to know whether it’s possible to have varying layer widths. For example, let’s say I have a dataset of 10 features and 4 classes, and I want to build a quantum architecture of 5 layers with sizes (10, 5, 5, 5, 4). Would it be possible to do this without relying on classical layers?

I’ve been working with the following code, adapted from the hybrid transfer-learning paper (please let me know if there’s a better way to do it):

import pennylane as qml
import torch
import torch.nn as nn
import numpy as np

n_qubits = 4    # circuit width (value from the transfer-learning demo)
q_depth = 6     # number of variational layers (value from the transfer-learning demo)
q_delta = 0.01  # initial spread of the random weights
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

quantum_device = qml.device("default.qubit", wires=n_qubits)

def H_layer(nqubits):
    """Layer of single-qubit Hadamard gates."""
    for idx in range(nqubits):
        qml.Hadamard(wires=idx)


def RY_layer(w):
    """Layer of parametrized qubit rotations around the y axis.
    """
    for idx, element in enumerate(w):
        qml.RY(element, wires=idx)



def entangling_layer(nqubits):
    """Layer of CNOTs followed by another shifted layer of CNOT.
    """
    # In other words it should apply something like :
    # CNOT  CNOT  CNOT  CNOT...  CNOT
    #   CNOT  CNOT  CNOT...  CNOT
    for i in range(0, nqubits - 1, 2):  # Loop over even indices: i=0,2,...N-2
        qml.CNOT(wires=[i, i + 1])
    for i in range(1, nqubits - 1, 2):  # Loop over odd indices:  i=1,3,...N-3
        qml.CNOT(wires=[i, i + 1])

    


@qml.qnode(quantum_device, interface="torch")
def quantum_net(q_input_features, q_weights_flat, q_depth, q_width):
    """
    The variational quantum circuit.
    """

    # Reshape weights
    q_weights = q_weights_flat.reshape(q_depth, q_width)

    # Start from state |+> , unbiased w.r.t. |0> and |1>
    H_layer(q_width)

    # Embed features in the quantum node
    RY_layer(q_input_features)
    # Sequence of trainable variational layers
    for k in range(q_depth):
        entangling_layer(q_width)
        RY_layer(q_weights[k])

    # Expectation values in the Z basis
    exp_vals = [qml.expval(qml.PauliZ(position)) for position in range(q_width)]
    return tuple(exp_vals)

And for my QNN class, I have:

class DressedQuantumNet(nn.Module):
    """
    Torch module implementing the *dressed* quantum net.
    """

    def __init__(self):
        """
        Definition of the *dressed* layout.
        """

        super().__init__()
        self.pre_net = nn.Linear(512, n_qubits)
        self.q_params = nn.Parameter(q_delta * torch.randn(q_depth * n_qubits))
        self.post_net = nn.Linear(n_qubits, 2)

    def forward(self, input_features):
        """
        Defining how tensors are supposed to move through the *dressed* quantum net.
        """

        # obtain the input features for the quantum circuit
        # by reducing the feature dimension from 512 to 4
        pre_out = self.pre_net(input_features)
        q_in = torch.tanh(pre_out) * np.pi / 2.0

        # Apply the quantum circuit to each element of the batch and append to q_out
        q_out = torch.Tensor(0, n_qubits)
        q_out = q_out.to(device)
        for elem in q_in:
            # stack the tuple of expectation values into a single tensor
            q_out_elem = torch.hstack(quantum_net(elem, self.q_params, q_depth, n_qubits)).float().unsqueeze(0)
            q_out = torch.cat((q_out, q_out_elem))

        # return the two-dimensional prediction from the postprocessing layer
        return self.post_net(q_out)

So, from my understanding, self.q_params holds the trainable parameters of the quantum net itself. How could I have different layer widths in this context? I’d also like to eliminate the post_net, that is, make the middle and final parts of the network fully quantum. Is that possible, or do I necessarily need equal layer sizes?

I saw this similar post where Tom Bromley talks about how it can be achieved, but it’s for continuous-variable quantum neural networks: (Possible to create a QNN like classical one?)

Hi @jogi_suda, welcome to the PennyLane forum!

You can have different layer sizes; however, they tend to be kept the same because more parametrized gates give your quantum circuit more expressivity to represent your model. Maybe the conversation here can be helpful.
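
For instance, here is a minimal sketch (the layer widths and the name varying_width_circuit are made up for illustration) where each layer only acts on a subset of the wires, while the device keeps the maximum width:

import pennylane as qml
import numpy as np

n_qubits = 5  # the device always has the maximum width
dev = qml.device("default.qubit", wires=n_qubits)

# hypothetical per-layer widths: layer k only touches its first layer_widths[k] qubits
layer_widths = [5, 5, 5, 3]

@qml.qnode(dev)
def varying_width_circuit(weights):
    # weights: one 1D array per layer, with len(weights[k]) == layer_widths[k]
    for k, width in enumerate(layer_widths):
        for i in range(width - 1):
            qml.CNOT(wires=[i, i + 1])
        for i in range(width):
            qml.RY(weights[k][i], wires=i)
    # measure only as many qubits as the last layer is wide
    return [qml.expval(qml.PauliZ(i)) for i in range(layer_widths[-1])]

weights = [np.random.uniform(0, np.pi, size=width) for width in layer_widths]
print(varying_width_circuit(weights))

Note that the device itself is always created with the maximum width; only the gates applied in each layer change.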

Please let me know if this answers your question or if you have further questions!

Thanks, @CatalinaAlbornoz ! Very helpful insight.

So, for example, say I have 5 features and 3 classes, and I want a variational circuit of depth = 4. The QNN layer sizes would be [5, 5, 5, 3] (where the final layer of size 3 is achieved by the post_net down-projection). If I wanted to achieve this without the classical post_net step (I want a fully quantum model), would I just declare an architecture of sizes [5, 5, 5, 5] and, in the last layer, instead of measuring all 5 qubits, choose any subset of 3 qubits as my output and train on that?

So, for example, the code above returns the expectation values of all qubits (q_width = 5):

# Expectation values in the Z basis
exp_vals = [qml.expval(qml.PauliZ(position)) for position in range(q_width)]

Instead of returning all measurements, I’d just return the first three:

# Expectation values in the Z basis
exp_vals = [qml.expval(qml.PauliZ(position)) for position in range(3)]
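
Concretely, I imagine the fully quantum module would look roughly like this (just a sketch; quantum_classifier, FullyQuantumNet, n_classes and the hyperparameter values are placeholder names/choices of mine):

import numpy as np
import torch
import torch.nn as nn
import pennylane as qml

n_qubits = 5    # circuit width (= number of input features)
n_classes = 3   # only this many qubits are measured at the end
q_depth = 4     # number of variational layers
q_delta = 0.01  # initial spread of the random weights

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_classifier(q_input_features, q_weights_flat):
    q_weights = q_weights_flat.reshape(q_depth, n_qubits)
    # state preparation and feature embedding
    for idx in range(n_qubits):
        qml.Hadamard(wires=idx)
    for idx in range(n_qubits):
        qml.RY(q_input_features[idx], wires=idx)
    # trainable variational layers
    for k in range(q_depth):
        for i in range(0, n_qubits - 1, 2):
            qml.CNOT(wires=[i, i + 1])
        for i in range(1, n_qubits - 1, 2):
            qml.CNOT(wires=[i, i + 1])
        for idx in range(n_qubits):
            qml.RY(q_weights[k][idx], wires=idx)
    # measure only the first n_classes qubits; their expectation values act as class scores
    return [qml.expval(qml.PauliZ(i)) for i in range(n_classes)]

class FullyQuantumNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.q_params = nn.Parameter(q_delta * torch.randn(q_depth * n_qubits))

    def forward(self, input_features):
        # input_features: shape (batch, n_qubits), already reduced to 5 features
        q_in = torch.tanh(input_features) * np.pi / 2.0
        q_out = [torch.hstack(quantum_classifier(elem, self.q_params)).float() for elem in q_in]
        return torch.stack(q_out)  # shape (batch, n_classes), no classical post_net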

Am I on the right path?

Hi @jogi_suda,

I’m not familiar with the post_net down-projection, but what you propose does sound reasonable.

You might also find some inspiration from this ensemble classification demo. The problem at hand is different, but it does include the case of measuring only a subset of the qubits. I hope you find it insightful.

Please let me know if this helps!


Thanks! Very insightful indeed. One last question: instead of having a sequence of N layers of 5 qubits and measuring only 3 qubits at the end, would it be possible to add another “block” with more qubits? For example, instead of [5, 5, 5, 3], have [5, 5, 5, 8]?

Hi @jogi_suda,

I’m not sure I understand the question.

Do you want to have a quantum node, then classical post-processing, and then a new quantum node with more qubits? If that is the case, then yes, you can do this. You will simply need to define a new device with a larger number of qubits, and a new quantum circuit that uses those extra qubits.
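
As a rough sketch of that first option (the device sizes, names, and circuit structure below are just placeholders):

import pennylane as qml
import torch

dev_small = qml.device("default.qubit", wires=5)
dev_large = qml.device("default.qubit", wires=8)

@qml.qnode(dev_small, interface="torch")
def block_1(features, weights):
    for i in range(5):
        qml.RY(features[i], wires=i)
    for i in range(4):
        qml.CNOT(wires=[i, i + 1])
    for i in range(5):
        qml.RY(weights[i], wires=i)
    return [qml.expval(qml.PauliZ(i)) for i in range(5)]

@qml.qnode(dev_large, interface="torch")
def block_2(features, weights):
    # only 5 values arrive from block_1; the remaining 3 qubits start in |0>
    for i in range(5):
        qml.RY(features[i], wires=i)
    for i in range(7):
        qml.CNOT(wires=[i, i + 1])
    for i in range(8):
        qml.RY(weights[i], wires=i)
    return [qml.expval(qml.PauliZ(i)) for i in range(8)]

w1 = torch.randn(5, requires_grad=True)
w2 = torch.randn(8, requires_grad=True)
x = torch.randn(5)

mid = torch.hstack(block_1(x, w1))    # 5 classical expectation values from the first block
out = torch.hstack(block_2(mid, w2))  # used as rotation angles in the 8-qubit block

The expectation values of the first block are just classical numbers at that point, so you are free to feed them (with or without extra classical processing) into a second qnode that lives on a larger device.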

If your question is whether you can grow the number of qubits from one iteration to the next, the answer is no. Your qnode is defined for a specific device, and that device has a fixed number of qubits.

If I got your question totally wrong please let me know.

I hope this helps!

I think what I wanted to do was have a different number of qubits from one layer to another (without the need for a classical layer in between). Since I need a fixed number of qubits, I now know what I should do; thank you for the help!

I’m glad I could help @jogi_suda!

The classical layer is not really necessary, but you will need different qnodes attached to different devices in order to change the number of qubits.

Enjoy using PennyLane and see you at QHack!