Multiclass Classification with Variational Circuits

Hey PennyLane Team, I was going through the example notebooks, specifically Example Q3 - Variational Classifier, and I wanted to understand the correct way to adapt this script to predict multiple classes on the Iris dataset. I know this paper mentions that these circuit-centric quantum classifiers could be operated as multi-class classifiers, but it only carries out a “one-versus-all” binary discrimination subtask. I did not see any other examples that attempted this, so I wanted to get some feedback on the approach I have in mind.

We return a tuple of the measurements of all the wires from the circuit:
return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

Passing these values through np.sign(measurements) gives four possible sign patterns which, after mapping -1 to 0, correspond to four classes ([0,0], [0,1], [1,0], [1,1]); the resulting error would then be passed through a cross-entropy loss and optimized with the optimizer’s step method. However, this means we can classify at most 2**n classes. Is there a more scalable way to implement multi-class classification, or are we limited by the number of qubits, just as the number of features we can amplitude-encode is limited by the number of qubits? Any insight would greatly help. Thanks!
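
To make this concrete, here is a minimal sketch of the decoding scheme I have in mind, written against the current PennyLane API; the AngleEmbedding/StronglyEntanglingLayers templates and the weight shapes are placeholder choices, not part of the original notebook:

```python
import pennylane as qml
from pennylane import numpy as np

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev)
def circuit(weights, x):
    # Placeholder encoding and trainable layers -- any ansatz that
    # returns one PauliZ expectation per wire would do here.
    qml.AngleEmbedding(x, wires=range(n_wires))
    qml.StronglyEntanglingLayers(weights, wires=range(n_wires))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_wires)]

weights = np.random.uniform(0, 2 * np.pi, size=(3, n_wires, 3))
x = np.array([0.1, 0.7])

outputs = np.array(circuit(weights, x))  # two expectations in [-1, 1]
bits = (outputs > 0).astype(int)         # sign pattern mapped to {0, 1}
label = int(2 * bits[0] + bits[1])       # one of the 2**n = 4 classes
print(bits, label)
```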


Pinging @Maria_Schuld, who might know of other results relevant to this question.

Yes, using multiple qubits would be my first attempt too. And you could always use more qubits altogether, but only include the ones you do not need for information encoding in later layers of the circuit… This is all very much unexplored research!
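
For instance, a rough sketch with 2 data qubits plus one spare readout qubit (wire 2) that is skipped during encoding and only entangled in the final layer; the templates and weight sizes are again just placeholder assumptions:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(w_data, w_all, x):
    # Early layers act on the data qubits only (wires 0 and 1) ...
    qml.AngleEmbedding(x, wires=[0, 1])
    qml.StronglyEntanglingLayers(w_data, wires=[0, 1])
    # ... and the spare qubit is only included in the later layers,
    # giving an extra readout that was never used for encoding.
    qml.StronglyEntanglingLayers(w_all, wires=[0, 1, 2])
    return [qml.expval(qml.PauliZ(w)) for w in range(3)]

w_data = np.random.uniform(0, 2 * np.pi, size=(2, 2, 3))
w_all = np.random.uniform(0, 2 * np.pi, size=(1, 3, 3))
print(circuit(w_data, w_all, np.array([0.4, 0.9])))
```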


Thanks! Regarding the information encoding: if we are using, for instance, 3 qubits, is it necessary to pad the input feature space to 2^3 = 8 dimensions, or is there another way to encode a smaller feature space without having to pad the original data?

I ended up getting the multi-class classification to work for Iris with 3 qubits, but it required defining a new loss function that works with the Autograd ArrayBox elements (and their native functions) created during the optimization step. I found that I was running into a lot of errors when passing grad_fn=autograd.jacobian, even though the advanced usage documentation suggests using the jacobian when working with multiple expectation values. The default grad ended up working fine with the new loss function.
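
In case it helps others, this is roughly the shape of the loss that worked for me. It is a sketch only, assuming a 3-qubit circuit that returns one PauliZ expectation per wire, and using PennyLane's autograd-wrapped numpy throughout so the ArrayBox values flow through without manual unwrapping:

```python
import pennylane as qml
from pennylane import numpy as np  # autograd-wrapped numpy

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(weights, x):
    qml.AngleEmbedding(x, wires=range(3))
    qml.StronglyEntanglingLayers(weights, wires=range(3))
    return [qml.expval(qml.PauliZ(w)) for w in range(3)]

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

def cross_entropy(weights, x, y_onehot):
    # np.stack keeps the expectation values inside autograd, so no
    # explicit handling of ArrayBox elements is needed
    probs = softmax(np.stack(circuit(weights, x)))
    return -np.sum(y_onehot * np.log(probs))

weights = np.random.uniform(0, 2 * np.pi, size=(2, 3, 3), requires_grad=True)
x = np.array([0.1, 0.5, 0.9], requires_grad=False)  # one pre-scaled sample
y = np.array([0.0, 1.0, 0.0], requires_grad=False)  # one-hot Iris label

opt = qml.GradientDescentOptimizer(stepsize=0.1)
weights = opt.step(lambda w: cross_entropy(w, x, y), weights)
```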

@josh maybe you want to comment on the jacobian?

Well done for making the 3-class version work!

If you want to encode your inputs into amplitudes, you won’t get around padding them to a length that is a power of 2. This can actually be quite useful, because the padding acts as a mini feature map that can increase the power of a linear classifier.

But there are of course many other ways you could encode your features, so you do not have to pad or normalise.
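
As a sketch of the two options, using PennyLane's built-in templates (the 4-feature input is just an Iris-like illustration):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=3)
x = np.array([5.1, 3.5, 1.4, 0.2])  # 4 features, 3 qubits

@qml.qnode(dev)
def amplitude_encoded(x):
    # Amplitude encoding needs 2**3 = 8 amplitudes: pad_with fills the
    # missing entries and normalize rescales to a valid quantum state.
    qml.AmplitudeEmbedding(x, wires=range(3), pad_with=0.0, normalize=True)
    return qml.expval(qml.PauliZ(0))

@qml.qnode(dev)
def angle_encoded(x):
    # Angle encoding writes one feature per qubit rotation, so no
    # padding (or normalisation) is required -- here three features fit.
    qml.AngleEmbedding(x[:3], wires=range(3))
    return qml.expval(qml.PauliZ(0))

print(amplitude_encoded(x), angle_encoded(x))
```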

Hi @CanePunma, could you elaborate on this behaviour/the error message? A minimal (non-)working example would be great. Alternatively, if you believe this could be a bug in PennyLane, you could cross-post it to our GitHub issue tracker.