Can quantum transfer learning be used for multi-classification?

I tested the quantum transfer learning code provided in the demo and applied it to my own data for a three-class classification task. The results show that quantum transfer learning seems to classify correctly, with an accuracy of 75%.

However, I only modified the output size of the final linear layer to 3 (see the code below). I'm not sure if this approach is reasonable. Should I redefine the DressedQuantumNet, for example by adding more qubits or increasing the depth of the quantum circuit?

I have to say, quantum neural networks run very slowly on my device.

import torch
import torch.nn as nn

# n_qubits, q_depth and q_delta are the hyperparameters defined earlier in the demo.

class DressedQuantumNet(nn.Module):
    """
    Torch module implementing the *dressed* quantum net.
    """

    def __init__(self):
        """
        Definition of the *dressed* layout.
        """

        super().__init__()
        # Classical layer reducing the input features to n_qubits values
        self.pre_net = nn.Linear(256, n_qubits)
        # Trainable rotation angles for the variational quantum circuit
        self.q_params = nn.Parameter(q_delta * torch.randn(q_depth * n_qubits))
        # Classical output layer, changed from the demo's 2 outputs to 3 classes
        self.post_net = nn.Linear(n_qubits, 3)
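
For reference, here is a rough sketch of how I imagine a redefined version could look if n_qubits and q_depth are exposed as tunable hyperparameters. It is not the demo's exact circuit: it uses PennyLane's AngleEmbedding and BasicEntanglerLayers templates together with qml.qnn.TorchLayer in place of the demo's hand-written layers and manual batching loop, the name quantum_circuit and the values n_qubits = 4 and q_depth = 6 are just placeholders, and the 256 input size comes from my snippet above.

import math

import pennylane as qml
import torch
import torch.nn as nn

n_qubits = 4      # number of qubits in the variational circuit (tunable)
q_depth = 6       # number of variational layers (tunable)
n_classes = 3     # three-class task

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_circuit(inputs, weights):
    # Encode the pre-processed classical features as single-qubit rotations
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # Variational block: q_depth layers of rotations followed by CNOT entanglers
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    # One expectation value per qubit is fed to the classical output layer
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

class DressedQuantumNet(nn.Module):
    """Classical pre-net -> quantum circuit -> classical 3-class head."""

    def __init__(self):
        super().__init__()
        self.pre_net = nn.Linear(256, n_qubits)
        weight_shapes = {"weights": (q_depth, n_qubits)}
        self.q_layer = qml.qnn.TorchLayer(quantum_circuit, weight_shapes)
        self.post_net = nn.Linear(n_qubits, n_classes)

    def forward(self, x):
        # Squash the pre-net output into rotation angles in (-pi/2, pi/2)
        x = torch.tanh(self.pre_net(x)) * math.pi / 2.0
        x = self.q_layer(x)
        return self.post_net(x)   # raw logits for nn.CrossEntropyLoss

Since the head now returns three logits, nn.CrossEntropyLoss could be used directly for training.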

Hi @sunny_chan,
Welcome to the Forum!

Our demos are not guaranteed to work with every dataset, so if you can share a self-contained version of your code (meaning something we can run and test ourselves), our team can help you better.

QNNs can be very slow if your circuits have many qubits or are very deep, and they're generally slower than classical networks in any case.

I'm not sure what your dataset is like, so depending on that you may need to make different modifications.

I hope this helps you!

Yes, I am doing the same thing, and its performance is better than the classical one.

That's interesting, @Alok_Kumar_Srivastav. And welcome to the Forum!