I am trying to implement a Ternary Grover’s algorithm circuit using PennyLane.

With cutoff dimension 3, can I implement the following matrices on one qumode?

Hi @sophchoe,

Sorry I took so long to respond. I haven’t been able to find an answer to your problem. I will keep looking though.

Thank you @CatalinaAlbornoz!

I wanted to see how to implement any unitary matrix of size n x n on a qumode with cutoff dimension n.

I want to set all qumodes to cutoff dimension 3 and then apply a ternary Hadamard to each. Maybe a 3 x 3 matrix with all entries 1/sqrt(3), except the last two diagonal entries set to -1/sqrt(3), would be a better matrix.
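One caveat worth checking before going further: the all-real variant described here (every entry 1/sqrt(3), with the last two diagonal entries negated) is not actually unitary, while the standard ternary Hadamard (the Chrestenson / 3-point DFT gate, built from cube roots of unity) is. A quick NumPy sanity check, independent of any PennyLane device:

```python
import numpy as np

# Standard ternary Hadamard (Chrestenson / 3-point DFT gate):
# H3[j, k] = omega**(j*k) / sqrt(3), with omega a primitive cube root of unity.
omega = np.exp(2j * np.pi / 3)
H3 = np.array([[omega ** (j * k) for k in range(3)] for j in range(3)]) / np.sqrt(3)

# The proposed all-real variant: every entry 1/sqrt(3), except the last
# two diagonal entries, which are -1/sqrt(3).
M = np.full((3, 3), 1 / np.sqrt(3))
M[1, 1] = M[2, 2] = -1 / np.sqrt(3)

print(np.allclose(H3 @ H3.conj().T, np.eye(3)))  # True: H3 is unitary
print(np.allclose(M @ M.T, np.eye(3)))           # False: M is not unitary
```

So if a ternary Hadamard is the goal, the complex DFT form is the one to target; the real variant can't be realized by any gate sequence, since gates are unitary.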

To clarify, you want to send |2> to -|2> and leave vacuum and |1> as they are?

Hi @ziofil

More like .

I would like to know the general method of implementing n x n matrix on a qumode of cutoff dimension n.

Or a controlled gate on 2 qumodes with the use of beamsplitters.

Thank you.

You don’t have full freedom to implement any unitary on qumodes, unfortunately. If you use only Gaussian components at most you can implement a Gaussian unitary, and those don’t look like Gaussian unitaries.

Hi @sophchoe,

As Filippo mentioned, implementing any arbitrary unitary is not possible. For specific unitaries though you can synthesize them from other gates using ML methods. You can read more about this on this paper.

I hope this helps!

Edit: Filippo just did a quick check in Mr Mustard and unfortunately for your matrix you would need some non-Gaussian gates (Kerr gates or cubic phase).
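As a sketch of why a Kerr gate suffices for the diag(1, 1, -1) case: using the usual Fock-basis conventions (a rotation R(theta) multiplies |n> by e^{i*theta*n} and a Kerr gate K(kappa) multiplies it by e^{i*kappa*n^2}), the combined diagonal phase is e^{i(theta*n + kappa*n^2)}, and one can solve for theta and kappa so that levels 0 and 1 pick up no phase while level 2 picks up -1. This is only phase arithmetic on the truncated space, not a full PennyLane circuit:

```python
import numpy as np

# Fock-basis phases on levels n = 0, 1, 2 for a rotation followed by a Kerr gate:
# the combined diagonal is exp(i * (theta * n + kappa * n**2)).
# Solving theta + kappa = 0 and 2*theta + 4*kappa = pi gives:
theta, kappa = -np.pi / 2, np.pi / 2

n = np.arange(3)
gate = np.diag(np.exp(1j * (theta * n + kappa * n ** 2)))

print(np.allclose(gate, np.diag([1, 1, -1])))  # True
```

With cutoff dimension 3 this is exact on the truncated space; at higher Fock levels the Kerr phase keeps accumulating, so the pattern above only holds below the cutoff.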

Multi-valued logic problems are more related to digital circuit design, a totally different category than machine learning.

On a separate note:

I am running experiments on MNIST using all the available CV gates for data encoding. This allows me to encode more features into quantum states, and it seems promising.

I will let you know.

Thank you so much for all your help!

Thank you for the clarification about the multi-valued logic problems @sophchoe!

About your separate note, have you considered using datasets different to MNIST? It may be interesting to see the performance in other datasets too.

Yes, @CatalinaAlbornoz.

I’ve tried Breast Cancer Wisconsin (30 features) and it was in the 60% range for accuracy. I think it’s because the feature values are very skewed.

I am working with MNIST because the feature size is 784, which is very difficult to encode in the qubit model: angle encoding would need 784 qubits (one per feature), and amplitude encoding, while it only needs about 10 qubits, requires deep state-preparation circuits. That's the power of the CV model: it allows encoding a higher number of features because of the richer array of parameterized gates available.

If you have any specific dataset in mind, please let me know.

Sophie

Interesting @sophchoe.

I just asked because often we choose datasets without any specific reasoning behind it, and we fall into the error of thinking that because it worked for one dataset it must work for others too, which is usually not the case.

It reminded me of this blog by Maria Schuld where she motivates you to think about the benchmark you’re using.

I think you have an interesting reasoning to use MNIST but it’s worth reading Maria’s blog to get a different view on benchmarks too!

Great blog!

To analyze the validity of quantum machine learning circuits as opposed to classical methods, we must first build quantum circuits implementing classical machine learning algorithms.

There are qubit-based implementation examples of quantum classifiers, but they are limited to

- binary classification
- features of length less than 50.

I am focusing on building quantum machine learning circuits for image datasets, where feature vectors have around 1,000 entries. How would one encode these features in the qubit model? I don't see how with current technology.
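For concreteness, here is a rough qubit-count comparison under the two textbook qubit encodings (angle encoding with one qubit per feature, amplitude encoding with ceil(log2(d)) qubits); the function name is just illustrative:

```python
import math

def qubits_needed(num_features: int) -> dict:
    """Qubits required under two textbook qubit encodings."""
    return {
        "angle": num_features,                            # one rotation / qubit per feature
        "amplitude": math.ceil(math.log2(num_features)),  # features stored as state amplitudes
    }

print(qubits_needed(784))  # MNIST: {'angle': 784, 'amplitude': 10}
print(qubits_needed(30))   # Breast Cancer Wisconsin: {'angle': 30, 'amplitude': 5}
```

Amplitude encoding is cheap in qubits but the state-preparation circuit depth generally grows with the number of features, which is the practical bottleneck on current hardware.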

Google’s implementation of MNIST reduces the 28 x 28 image matrices to 4 x 4 binary matrices.

IBM’s implementation of MNIST is composed only of a data encoding circuit (a single Ry gate).

Without being able to encode data in quantum states, there won’t be a meaningful discussion about the performance of a quantum algorithm. I am exploring different ways of encoding image data samples to quantum states.

There are several papers by Maria Schuld about the importance of quantum data encoding. It is my observation that the CV model offers a richer array of data encoding schemes than the qubit-based model.

The limitation of qubit-based quantum computing is that all the gates are unitary, and the composition of unitary matrices is again a single unitary matrix.
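That closure property is easy to verify numerically (random unitaries sampled via QR decomposition of a complex Gaussian matrix, a standard trick):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n: int) -> np.ndarray:
    """Sample a random n x n unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(z)
    return q

u, v = random_unitary(4), random_unitary(4)
product = u @ v

# The product of unitaries is itself unitary: (UV)(UV)^dagger = I.
print(np.allclose(product @ product.conj().T, np.eye(4)))  # True
```

So however many unitary gates a qubit circuit composes, the whole circuit is still one unitary; the nonlinearity has to come from somewhere else (measurement, or in the CV model, non-Gaussian gates).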

The CV model offers, in addition to the Gaussian gates (displacement, squeezing, rotation, beamsplitter), non-Gaussian gates such as the Kerr and cubic phase gates, providing many more parameters than the qubit-based model. That enables the data encoding circuit to encode a greater number of features from the data.

I am observing some promising results.

I think the CV model is better suited for machine learning, where the features of data are continuous in nature. The qubit-based model is better suited for logic synthesis where the variables are discrete in nature.

That’s very interesting @sophchoe.

In our Blueprint for a Scalable Photonic Fault-Tolerant Quantum Computer (video here) we aim towards building qubits from GKP states, so in the end we will work with the qubit model.

I imagine you will still be able to work on the CV regime at low level though, and maybe promising CV results can motivate stronger efforts in that direction. So be sure to share your results with us when you have them!

I will be sad when you go fully qubit-based, since I see so much benefit in the CV model. However, your existing X8 still has so much untapped potential, even if only for theoretical purposes.

Thank you for the paper.

This diagram captures the idea that some problems can be solved better in the CV model than in the qubit-based model.

Yes, it is definitely an interesting area to explore @sophchoe!

I think that if you look at the big picture though, just as happened with classical computing, most applications will be tackled better with the discrete-variable model while some specific ones will work better with the continuous-variable model.

We’re still in the early days of quantum computing so we never know for sure how things are going to evolve, but it’s still important to think of applications with both models.

@CatalinaAlbornoz absolutely!!! So many Boolean logic problems can be translated to the qubit-based model and perform beautifully.

For data with continuous variable feature values, the CV model seems to be a better option.
