Artificial NN to Quantum ANN

Hi, I have a classical feed-forward deep neural network for binary classification, consisting of 4 hidden (Dense) layers + 2 Dropout layers + L1 and L2 regularization.

I want to develop a quantum version of this model. How can I do that? Also, how can I build a simple quantum ANN for binary classification? I only know the basics of quantum computing.

Data details -
Independent variables - Numerical values
Target variable - Binary values

Thank you.


Consider using this demo; it is easy to use and implements binary classification for datasets with up to 20 features. Let me know if you have any questions.

P.S. We describe this demo in this paper:


Welcome to the forum @ilkayn!

Great demo suggestions @sophchoe and @NikSchet! Just to complement them, I'll add a few thoughts here.

If you only know the basics of quantum computing I would suggest the following guidelines:

  • Your quantum version of this model will most likely be a quantum-classical hybrid.
  • The quantum part will be contained in what we call a qnode in PennyLane.
  • Your qnode will receive some fixed inputs and some variable parameters. The inputs have to be encoded into your circuit, and the parameters go into gates in your circuit. Part of the design is deciding how to encode the data and which gates to use.
  • You can interface classical layers in Keras or Torch with quantum layers using PennyLane.

I encourage you to check out this blog on how to start learning quantum machine learning, and if you want to do a deeper dive into quantum computing the Xanadu Quantum Codebook is a great resource.

You can also find a lot of additional tutorials on the PennyLane website in the QML section, and some PennyLane tutorial videos in the Xanadu YouTube.

I hope this helps, and please post any follow-up questions you have!


Great work!!!

Your method is based on the qubit model of quantum computing. X8 allows for QNNs based on the continuous-variable model, which contains nonlinear gates. This nonlinearity at the gate level, without measurement inside the network, as proposed by Killoran et al. (2019), provides a way of implementing quantum "deep learning" in the true sense.


Will go through this, thank you!

By the way, in this demo you can add classical dropout layers to prevent overfitting and reduce loss. If you are willing to do some coding, you can also add quantum noise, which should have a similar effect to dropout.

I assume you run classical simulations since your dataset is quite big.

Some tips that might also be helpful:

  1. Use scaling to map your data to values in [0, 2π], [-π, π], or [0, 1].
  2. You can try using a tanh activation function for the classical layer feeding into the quantum layer.
  3. Increase the number of blocks (this will heavily impact training time).
  4. Try using the quantum layer at the beginning of the sequential network, followed by classical layers. This sometimes improves the metrics, since the quantum layer acts as an encoder of the data.
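The scaling in tip 1 can be done with a few lines of NumPy; this min-max helper and the example values are just an illustration, not part of any particular demo:

```python
import numpy as np

def scale(x, lo=0.0, hi=2 * np.pi):
    """Min-max scale features column-wise into the interval [lo, hi]."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    return lo + (x - x_min) / (x_max - x_min) * (hi - lo)

data = np.array([[1.0, 10.0],
                 [2.0, 20.0],
                 [3.0, 30.0]])
scaled = scale(data)                      # each column mapped to [0, 2*pi]
scaled_sym = scale(data, -np.pi, np.pi)   # each column mapped to [-pi, pi]
```

Scaling into these ranges matters because angle-based encodings interpret each feature as a rotation angle, which is periodic in 2π.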

Great ideas!!! Thank you so much!

I was using the classical layer to reduce the size of MNIST features.

But for smaller feature datasets, experimenting with quantum layers first sounds great.

Thank you.

Hello again! Thank you!

Yes, a quantum layer at the beginning should act as a quantum embedding. It would do the same job as in this demo.

The quantum layer in the demo has no learning; it's a pre-processing step for a classical convolutional neural network. Learning happens only in the classical network.

Adding a classical network to a quantum network for post-processing as you suggested is an interesting idea that can be explored. So many possibilities, indeed!

Hi @CatalinaAlbornoz

NIST is looking for industry partners.

There seems to be active research going on in realizing optical quantum memory, which can be incorporated into Xanadu QPUs for more complex computing.

I think this is an exciting opportunity for Xanadu.

Hey @sophchoe! That’s awesome to see!