Hey guys!
I’ve been trying to run some experiments on QNNs, but I wanted to know whether it’s possible to have varying layer widths. For example, say I have a dataset with 10 features and 4 classes, and I want to build a quantum architecture of 5 layers with sizes (10, 5, 5, 5, 4). Would that be possible without relying on classical layers?
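For clarity, the classical analogue of what I’m after would be something like this (just my own sketch to illustrate the shapes):

```python
import torch.nn as nn

# Layer widths I'd like to reproduce quantumly:
# 10 input features -> three hidden layers of width 5 -> 4 classes
classical_analogue = nn.Sequential(
    nn.Linear(10, 5), nn.Tanh(),
    nn.Linear(5, 5), nn.Tanh(),
    nn.Linear(5, 5), nn.Tanh(),
    nn.Linear(5, 4),
)
```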
I’ve been working with the following code, taken from the hybrid transfer-learning paper (if there’s a better way to do this, please let me know):
```python
import pennylane as qml

quantum_device = qml.device("default.qubit", wires=n_qubits)


def H_layer(nqubits):
    """Layer of single-qubit Hadamard gates."""
    for idx in range(nqubits):
        qml.Hadamard(wires=idx)


def RY_layer(w):
    """Layer of parametrized qubit rotations around the y axis."""
    for idx, element in enumerate(w):
        qml.RY(element, wires=idx)


def entangling_layer(nqubits):
    """Layer of CNOTs followed by another, shifted layer of CNOTs."""
    # In other words, it should apply something like:
    # CNOT  CNOT  CNOT  CNOT ...  CNOT
    #   CNOT  CNOT  CNOT ...  CNOT
    for i in range(0, nqubits - 1, 2):  # Loop over even indices: i = 0, 2, ..., N-2
        qml.CNOT(wires=[i, i + 1])
    for i in range(1, nqubits - 1, 2):  # Loop over odd indices: i = 1, 3, ..., N-3
        qml.CNOT(wires=[i, i + 1])


@qml.qnode(quantum_device, interface="torch")
def quantum_net(q_input_features, q_weights_flat, q_depth, q_width):
    """The variational quantum circuit."""
    # Reshape the flat weight vector into (q_depth, q_width)
    q_weights = q_weights_flat.reshape(q_depth, q_width)

    # Start from state |+>, unbiased w.r.t. |0> and |1>
    H_layer(q_width)

    # Embed features in the quantum node
    RY_layer(q_input_features)

    # Sequence of trainable variational layers
    for k in range(q_depth):
        entangling_layer(q_width)
        RY_layer(q_weights[k])

    # Expectation values in the Z basis
    exp_vals = [qml.expval(qml.PauliZ(position)) for position in range(q_width)]
    return tuple(exp_vals)
```
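For reference, this is roughly how I call the circuit (a minimal sketch of mine; `n_qubits = 4` and `q_depth = 6` are just values I picked, set before creating the device):

```python
import math
import torch

# assumes n_qubits = 4 and q_depth = 6 were defined before quantum_device
features = torch.rand(n_qubits) * math.pi / 2  # toy embedding angles
weights = torch.rand(q_depth * n_qubits)       # flat trainable parameters

out = quantum_net(features, weights, q_depth, n_qubits)
print(out)  # tuple of n_qubits Pauli-Z expectation values
```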
and for my QNN class, I have:
```python
import numpy as np
import torch
import torch.nn as nn


class DressedQuantumNet(nn.Module):
    """Torch module implementing the *dressed* quantum net."""

    def __init__(self):
        """Definition of the *dressed* layout."""
        super().__init__()
        self.pre_net = nn.Linear(512, n_qubits)
        self.q_params = nn.Parameter(q_delta * torch.randn(q_depth * n_qubits))
        self.post_net = nn.Linear(n_qubits, 2)

    def forward(self, input_features):
        """Define how tensors move through the *dressed* quantum net."""
        # Obtain the input features for the quantum circuit
        # by reducing the feature dimension from 512 to n_qubits
        pre_out = self.pre_net(input_features)
        q_in = torch.tanh(pre_out) * np.pi / 2.0

        # Apply the quantum circuit to each element of the batch and append to q_out
        q_out = torch.Tensor(0, n_qubits)
        q_out = q_out.to(device)
        for elem in q_in:
            # quantum_net returns a tuple of expectation values, so stack them
            # into a single tensor; q_depth and n_qubits are passed explicitly,
            # since the circuit's signature above expects them
            q_out_elem = torch.hstack(quantum_net(elem, self.q_params, q_depth, n_qubits)).float().unsqueeze(0)
            q_out = torch.cat((q_out, q_out_elem))

        # Return the two-dimensional prediction from the postprocessing layer
        return self.post_net(q_out)
```
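And a quick smoke test of how I use it (again my own sketch; the globals `n_qubits`, `q_depth`, `q_delta` and `device` are set just like in the tutorial):

```python
# Globals as in the tutorial (the values are mine)
n_qubits = 4
q_depth = 6
q_delta = 0.01
device = torch.device("cpu")

model = DressedQuantumNet().to(device)
batch = torch.randn(8, 512)  # fake batch of 512-dimensional features
logits = model(batch)        # shape: (8, 2)
print(logits.shape)
```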
So, from my understanding, self.q_params holds the trainable weights of the quantum net itself. How could I have different layer widths in this context? I’d also like to eliminate the post_net, i.e., make the middle and final parts of the network fully quantum. Is that possible, or do I necessarily need equal layer sizes?
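One direction I’ve been toying with for dropping the post_net (just my own sketch, not sure it’s the right way) is to read out class probabilities directly with qml.probs on 2 qubits, since 2**2 = 4 outcomes matches my 4 classes:

```python
@qml.qnode(quantum_device, interface="torch")
def quantum_net_probs(q_input_features, q_weights_flat, q_depth, q_width):
    """Same circuit as above, but with a fully quantum readout."""
    q_weights = q_weights_flat.reshape(q_depth, q_width)
    H_layer(q_width)
    RY_layer(q_input_features)
    for k in range(q_depth):
        entangling_layer(q_width)
        RY_layer(q_weights[k])
    # Computational-basis probabilities of 2 qubits: 4 outcomes, one per class,
    # so no classical postprocessing layer is needed
    return qml.probs(wires=[0, 1])
```

Would something along these lines be sensible, or is there a standard way to handle the changing widths in between?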
I saw this similar post where Tom Bromley talks about how it can be achieved, but it’s for continuous-variable quantum neural networks: (Possible to create a QNN like classical one?)