Hi, I’m writing code that handles a dataset with 32 features; its main goal is to act as a binary classifier that includes quantum operations. I implemented the model using the PyTorch interface, and here’s the definition of a class derived from torch.nn.Module:

```
import numpy as np
import torch
import pennylane as qml
from pennylane import CNOT
from pennylane.templates import AmplitudeEmbedding, AngleEmbedding, StronglyEntanglingLayers


class HybridNNV1(torch.nn.Module):
    def __init__(self, input_dim, output_dim, weight=None):
        super(HybridNNV1, self).__init__()
        num_qubits = int(np.ceil(np.log2(input_dim)))  # 32 features -> 5 qubits
        dev = qml.device("default.qubit", wires=num_qubits)

        @qml.qnode(dev, interface='torch')
        def circuit_amp_emb(inputs):
            # Amplitude-embed the classical features, then read one
            # Pauli-Z expectation value per qubit.
            AmplitudeEmbedding(features=inputs, wires=range(num_qubits))
            return [qml.expval(qml.PauliZ(i)) for i in range(num_qubits)]

        @qml.qnode(dev, interface='torch')
        def circuit_emb_ent(inputs, w):
            # Angle-embed the expectation values, entangle, and return
            # the two-outcome probability distribution of qubit 0.
            AngleEmbedding(features=inputs, wires=range(num_qubits), rotation='Y')
            StronglyEntanglingLayers(weights=w, wires=range(num_qubits), imprimitive=CNOT)
            return qml.probs(wires=0)

        weight_shapes = {"w": (2, num_qubits, 3)}
        self.__circuit_amp_emb = circuit_amp_emb
        self.__qlayer = qml.qnn.TorchLayer(circuit_emb_ent, weight_shapes)

    def forward(self, x):
        # Run the amplitude-embedding circuit sample by sample, stack
        # the results, and feed them through the trainable layer.
        a = []
        for elem in x:
            a.append(self.__circuit_amp_emb(elem))
        b = torch.stack(a)
        return self.__qlayer(b)
```

Since I heard that amplitude embedding is non-differentiable, I used the AmplitudeEmbedding template on the input tensor (32 classical features), then gathered the expectation value of the Pauli-Z observable from each of the 5 qubits. I stacked those values into a new tensor and passed it to a second QNode that applies AngleEmbedding followed by StronglyEntanglingLayers. Finally, I measured the probabilities of qubit 0 and fed that output into torch.nn.CrossEntropyLoss.
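For reference, here is a minimal sketch of that last step: wiring a `(batch, 2)` output, like the one `qml.probs(wires=0)` produces, into `torch.nn.CrossEntropyLoss` for a single training step. The `DummyHead` module is a hypothetical stand-in (not the real model) so the snippet runs without a quantum simulator; `HybridNNV1` would be used in exactly the same way:

```python
import torch

# Hypothetical stand-in for HybridNNV1: any module whose forward
# returns a (batch, 2) tensor plugs into the loop below identically.
class DummyHead(torch.nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        self.linear = torch.nn.Linear(input_dim, 2)

    def forward(self, x):
        # Mimic qml.probs(wires=0): two non-negative values summing to 1.
        return torch.softmax(self.linear(x), dim=1)

model = DummyHead(input_dim=32)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

x = torch.randn(8, 32)         # batch of 8 samples, 32 features each
y = torch.randint(0, 2, (8,))  # binary class indices

optimizer.zero_grad()
probs = model(x)               # shape (8, 2)
loss = loss_fn(probs, y)       # CrossEntropyLoss applies log-softmax internally
loss.backward()
optimizer.step()
```

One caveat worth noting: `CrossEntropyLoss` expects raw scores (logits) and applies log-softmax internally, so feeding it probabilities that already sum to 1 compresses the gradients. When the model output is already a distribution, `torch.nn.NLLLoss` on `torch.log(probs)` is a common alternative.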

Given the hardware-resource restrictions of the simulator, this is the best approach I can think of. Do you have a better approach for building a class derived from torch.nn.Module that successfully classifies a 32-feature dataset into two classes?

Thanks