I tested the quantum transfer learning code provided in the demo and applied it to my own data for a three-class classification task. The results show that quantum transfer learning seems to classify correctly, with an accuracy of 75%.
However, I only modified the output dimension of the final linear layer to be 3 (see the code below). I'm not sure if this approach is reasonable. Should I redefine the DressedQuantumNet, for example by increasing the number of qubits or the depth of the quantum circuit?
I have to say, quantum neural networks run very slowly on my device.
class DressedQuantumNet(nn.Module):
    """
    Torch module implementing the *dressed* quantum net.
    """

    def __init__(self):
        """
        Definition of the *dressed* layout.
        """
        super().__init__()
        # Classical pre-processing layer: 256 input features -> n_qubits
        self.pre_net = nn.Linear(256, n_qubits)
        # Trainable weights of the variational quantum circuit
        self.q_params = nn.Parameter(q_delta * torch.randn(q_depth * n_qubits))
        # Classical post-processing layer: n_qubits -> 3 classes (changed from 2)
        self.post_net = nn.Linear(n_qubits, 3)
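For context, the rest of the class is unchanged from the demo. Below is a minimal sketch of the forward pass as it appears there, assuming quantum_net is the QNode defined in the demo and that n_qubits, device, and np are the demo's globals; the only modification on my side is the 3-dimensional output of post_net above.

    def forward(self, input_features):
        # Classical pre-processing: scale features to rotation angles in [-pi/2, pi/2]
        pre_out = self.pre_net(input_features)
        q_in = torch.tanh(pre_out) * np.pi / 2.0

        # Run the variational quantum circuit for each element of the batch
        q_out = torch.Tensor(0, n_qubits).to(device)
        for elem in q_in:
            q_out_elem = torch.hstack(quantum_net(elem, self.q_params)).float().unsqueeze(0)
            q_out = torch.cat((q_out, q_out_elem))

        # Classical post-processing: map n_qubits expectation values to 3 class scores
        return self.post_net(q_out)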