Hello, I am currently working on the autoencoder described in the paper “Continuous-variable quantum neural networks”, and I am stuck because I am having trouble obtaining the Fock states. I am having the same problem described in this post: State vector retrieval. I have reviewed her code and paper as well; she couldn’t complete the autoencoder because she couldn’t obtain the Fock states from the network.
Is there a solution to this problem? I was also wondering why the GitHub repo for the paper mentioned above doesn’t contain the code for the autoencoder: https://github.com/XanaduAI/quantum-neural-networks. Is there a software limitation that prevents implementing this autoencoder?
I would appreciate all ideas and suggestions.
Hi @brtymn, welcome to the forum!
The user from the post you mentioned did in fact create a demo with the autoencoder. You can find her notebook here.
Note that she used older versions of the different libraries, so you might need to go back a few versions if something doesn’t work.
Please let me know if this helps! Otherwise feel free to share your code or clarify your specific question.
Hello @CatalinaAlbornoz. I am aware of that demo and I have been building up from that.
The problem is that she used MSE as the loss function: because she couldn’t obtain the Fock states from the network, she measured the probabilities instead and trained the network like a classical one, but that is not the method described in the paper I mentioned.
I am having trouble obtaining the Fock states needed to define the loss function for the neural network training later on. The Fock device implemented in the Strawberry Fields plugin does not work, and I have seen the GitHub issue for that as well.
Do you know how the authors bypassed this problem while writing the paper?
I hope I made the problem clear, please let me know if I can provide more information. Thank you.
Oh thanks for clarifying @brtymn. I will ask one of the authors to see if they can help us here.
Awesome! Thanks a lot @CatalinaAlbornoz .
Hi @brtymn, one of the authors mentioned that they used versions of Strawberry Fields, TensorFlow and Python from 2018. They didn’t use PennyLane. He suggests that if the issue is in the PennyLane-SF plugin then the best option is to implement it all directly in PennyLane.
Do you want to try it this way and let us know how it goes?
Hello @CatalinaAlbornoz, thank you for the reply. I will try my best and update this thread if I make any advancements.
Hello @CatalinaAlbornoz, I was unable to reproduce the quantum-classical autoencoder using PennyLane. Can you ask the authors whether PennyLane can be used to construct the quantum-classical autoencoder described in this paper: [1806.06871] Continuous-variable quantum neural networks?
Also, since I was unable to do it with PennyLane, I downgraded my Strawberry Fields and TensorFlow packages to the same versions the authors used and constructed the autoencoder without PennyLane, just as described in the paper I mentioned. I would be happy to add it to your repositories in a few weeks (after I am completely sure I did it right).
Hi @brtymn, yes it would be great if you could put your code into a repo and share it here for others to try it out.
The authors haven’t tried constructing this autoencoder in PennyLane so they don’t know if it’s possible. Did you encounter any particular roadblock in the implementation? Or did you get different results?
Hello @CatalinaAlbornoz, I was unable to define the cost function described in the paper since PennyLane didn’t let me access the state vector. Another user, @sophchoe, already mentioned this problem in a different post on this forum.
I will polish my code and share it here for everyone to see in a few weeks.
@brtymn You are absolutely right. However, if we consider the function of a classical encoder and a quantum decoder, the goal is to produce outputs of the same size as the original vectors.
The cost function is there to measure how close the circuit’s outputs are to the original vectors, and Mean Squared Error is just one way of measuring that. The probability method returns the squared magnitudes of the state-vector entries.
The cost function in the paper has the added benefit of a “regularization” term, which I think can be added to the MSE cost function. @CatalinaAlbornoz
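To make the relationship concrete, here is a minimal NumPy sketch (purely illustrative, not the actual circuit code; the amplitudes and target vector are made-up values): for a pure state, the probability vector is just the element-wise squared magnitude of the state vector, so an MSE loss on probabilities can be computed classically from the measurement outcomes alone.

```python
import numpy as np

# Hypothetical state vector of a single qumode truncated at cutoff dimension 3.
# In the real workflow these amplitudes would come from the quantum circuit.
ket = np.array([0.8, 0.6j, 0.0])       # normalized: |0.8|^2 + |0.6|^2 = 1

# The probability measurement returns the squared magnitudes of the entries.
probs = np.abs(ket) ** 2               # [0.64, 0.36, 0.0]

# Target vector (e.g. the encoded classical input), same size as `probs`.
target = np.array([0.7, 0.3, 0.0])

# Mean squared error between measured probabilities and the target.
mse = np.mean((probs - target) ** 2)
print(probs, mse)
```

Note that squaring discards the phases of the amplitudes, which is exactly why a loss defined on the full state vector (as in the paper) carries more information than one defined on probabilities.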
Great insight @sophchoe! Does this help you @brtymn?
Thank you for the response @sophchoe, you have described my thought process exactly. TensorFlow allows the user to add “L2 regularization”, which has the same form as the penalty function in the paper; however, I have failed to carry out the Hilbert space projection without having the state vector.
I would love to hear more from you if you have any ideas about how I can do the projection; this is my main roadblock. @CatalinaAlbornoz
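For reference, the MSE-plus-penalty combination being discussed can be written down without any quantum library at all. Here is a hedged sketch (the function name, the `gamma` weight, and the exact penalty form are my own assumptions, not code from the paper); it also shows why the full state vector is needed, since the penalty depends on amplitudes outside the first `cutoff` levels:

```python
import numpy as np

def loss_with_penalty(probs, target, ket_full, cutoff=3, gamma=0.1):
    """MSE reconstruction loss plus an L2-style penalty term.

    `probs` and `target` are length-`cutoff` vectors; `ket_full` is the
    state vector simulated in a larger truncated Fock space. The penalty
    drives the norm of the projection onto the first `cutoff` levels
    toward 1, mimicking the regularization term discussed above
    (`gamma` is an assumed weight, analogous to an L2 coefficient).
    """
    mse = np.mean((probs - target) ** 2)
    norm_inside = np.sum(np.abs(ket_full[:cutoff]) ** 2)
    penalty = (norm_inside - 1.0) ** 2
    return mse + gamma * penalty
```

When the state lies entirely within the first `cutoff` levels, the penalty vanishes and the loss reduces to plain MSE.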
With the one-qumode system proposed in the paper, I use cutoff dimension 3 to produce output vectors of size 3, equaling the input vector size.
Then the probability measurement method returns vectors of size 3. After that, I think you can use any loss function of your choice; maybe you can handcraft one that adds the regularization term you have in mind.
My circuit achieves 97% accuracy. Maybe with an added regularization term, it will perform better.
I think it is beneficial for people to see it both in PennyLane and in Strawberry Fields.
I have looked into your code as well, and we are doing the same thing. As you described, MSE is very close to the main term of the loss function in the paper, and an added regularization term can be used to define the exact loss function. The scores are already high with MSE alone, as you said, but I will use this autoencoder to analyze physical systems, so I want it to be physically identical to what is described in the paper.
However, the penalty term in the loss function requires a Hilbert space projection, and I have failed to do that without having the state vector. I am able to carry out the projection (as the authors did as well) with the older version of Strawberry Fields after obtaining the state vector.
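For anyone following along, the projection step itself is simple linear algebra once the state vector is in hand; the roadblock is obtaining the vector, not the projection. A sketch of the idea in NumPy (my own illustration with made-up amplitudes, not the authors' code): simulate with a larger cutoff, then project the ket onto the subspace spanned by the first `d` Fock basis states.

```python
import numpy as np

big_cutoff, d = 6, 3

# Hypothetical normalized ket from a cutoff-6 simulation; in practice this
# would come from the Fock-backend simulation of the decoder circuit.
raw = np.array([0.6, 0.5, 0.4, 0.3, 0.3, 0.2], dtype=complex)
ket = raw / np.linalg.norm(raw)

# Projector onto the first d Fock basis states.
P = np.zeros((big_cutoff, big_cutoff))
P[:d, :d] = np.eye(d)

projected = P @ ket                               # amplitudes at levels >= d are zeroed
leakage = 1.0 - np.linalg.norm(projected) ** 2    # probability outside the subspace
penalty = leakage ** 2                            # penalty of the assumed quadratic form
print(leakage, penalty)
```

Without access to `ket`, the leakage probability (and hence this penalty) cannot be evaluated, which is exactly the limitation described above.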