Inquiries on State Preparation in the Variational Classifier Example

Hi, I was wondering whether the state preparation (feature embedding) used in the example below could be proven or conjectured to be hard to simulate classically, i.e. whether it provides some degree of quantum advantage.

I also noticed that the Iris data set used (iris_classes1and2_scaled.txt) has already undergone some preprocessing. Could you tell me what kind of preprocessing was applied to the data set?

Hey davidefrr,

This is amplitude encoding, which means you prepare a quantum state whose amplitude vector resembles your data input. This is of course not classically hard. In fact, it is a rather involved procedure for a quantum circuit, as you can see, while classically you would not have to do anything.
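The classical part of amplitude encoding is just padding the data vector to a power-of-two length and rescaling it to unit L2 norm. A minimal NumPy sketch (the helper name `amplitude_encode` is my own, not a PennyLane API):

```python
import numpy as np

def amplitude_encode(x, n_qubits):
    """Pad x to length 2**n_qubits and rescale to unit L2 norm,
    so the entries can serve as amplitudes of a quantum state."""
    padded = np.zeros(2 ** n_qubits, dtype=float)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded)

# e.g. amplitude_encode([3.0, 4.0], 1) -> array([0.6, 0.8])
```

The quantum circuit that actually prepares this state is where the cost lies; classically, the normalized vector above already *is* the "state".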

The Iris dataset is standardized (each feature scaled to zero mean and unit standard deviation), and only classes 1 and 2 were selected.
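For reference, that preprocessing can be sketched in a few lines of NumPy. The function name and the label convention for "classes 1 and 2" are my assumptions here, not the script actually used to produce the file:

```python
import numpy as np

def preprocess(X, y):
    """Select samples labeled 1 or 2 (assumed label convention),
    then standardize each feature to zero mean and unit
    standard deviation."""
    mask = (y == 1) | (y == 2)
    X, y = X[mask], y[mask]
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    return X, y
```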

Hope this helps.

Hi @davidefrr and @Maria_Schuld,

I’ve been reading the paper “Supervised learning with quantum-enhanced feature spaces” (https://arxiv.org/pdf/1804.11326.pdf), where they implement a quantum feature map based on circuits that are conjectured to be hard to simulate classically — specifically, the second-order expansion feature map. Is implementing this feature map on PennyLane’s roadmap?

I’ve been playing with the feature-embedding circuits available in the library, such as basis and amplitude encoding, but I haven’t found this specific class of feature maps.
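For context, the second-order expansion map in that paper applies single- and two-qubit phase rotations whose angles are classical functions of the input; with the paper's default choice these are phi_i(x) = x_i and phi_ij(x) = (pi - x_i)(pi - x_j). A sketch of just the classical angle computation (the helper name is mine, and this omits the circuit itself):

```python
import numpy as np

def second_order_angles(x):
    """Rotation angles for the second-order expansion feature map
    of Havlicek et al.: phi_i(x) = x_i on single qubits and
    phi_ij(x) = (pi - x_i)*(pi - x_j) on qubit pairs."""
    n = len(x)
    singles = {i: float(x[i]) for i in range(n)}
    pairs = {(i, j): float((np.pi - x[i]) * (np.pi - x[j]))
             for i in range(n) for j in range(i + 1, n)}
    return singles, pairs
```

In the circuit, each `singles[i]` would parametrize a phase rotation on qubit i and each `pairs[(i, j)]` a controlled-phase interaction between qubits i and j, sandwiched between Hadamard layers.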

Thanks!

Yes, we want to significantly extend the library of embeddings, and this would be one of the first ones to add. But in the meantime, feel free to code this up yourself and make a pull request :slight_smile: .