Package for classification

Hi Pennylane team!

I saw that many of the latest demos are about classification, and I thought it would be nice to have a dedicated PennyLane package specific to classification, with modules that specialize by approach (e.g. trainable embeddings, dressed NNs, etc.) or by algorithm class (SVM, transfer learning, CNNs).

Based on the desired level of granularity, the package could offer:

  • a layers module to include layers that are not present (yet?) in PennyLane's templates/layers;
  • a components module that builds on the layers module to help with the creation of a circuit (e.g. defining a general measurement or a specific swap_test);
  • a qnn module that, building on the components module, could contain example QNN architectures or classes that wrap QNN architectures such as DressedQuantumNet (which could also offer decorators);
  • a training module that could contain general train methods or specific train_torch_model or train_keras_model methods.
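To make the components idea a bit more concrete, here is a rough NumPy-only sketch (no PennyLane dependency; the function name and interface are hypothetical) of what a swap_test component would compute: after a swap test, the probability of measuring the ancilla in |0⟩ equals (1 + |⟨ψ|φ⟩|²) / 2.

```python
import numpy as np

def swap_test(psi, phi):
    """Simulate a swap test on two single-qubit states psi and phi.

    Returns the probability of measuring the ancilla in |0>,
    which equals (1 + |<psi|phi>|^2) / 2.
    """
    # Full state: ancilla (x) psi (x) phi, with the ancilla starting in |0>
    ancilla = np.array([1.0, 0.0], dtype=complex)
    state = np.kron(np.kron(ancilla, psi), phi)

    # Hadamard on the ancilla
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    I4 = np.eye(4)
    state = np.kron(H, I4) @ state

    # Controlled-SWAP of the two register qubits (control = ancilla).
    # In the |ancilla, q1, q2> ordering, swapping q1 and q2 when the
    # ancilla is 1 exchanges basis indices 5 (|101>) and 6 (|110>).
    cswap = np.eye(8, dtype=complex)
    cswap[[5, 6], :] = cswap[[6, 5], :]
    state = cswap @ state

    # Second Hadamard on the ancilla
    state = np.kron(H, I4) @ state

    # Probability that the ancilla is measured as 0 (first 4 amplitudes)
    return float(np.sum(np.abs(state[:4]) ** 2))
```

For example, two identical states give probability 1, while orthogonal states give 0.5. In the actual package this would of course be built from PennyLane operations rather than raw matrices.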

The dependencies would basically be the framework(s) we would like to include, for instance torch. I have created a temporary repository to start with some experiments, building on the quantum transfer learning demo: https://github.com/nvitucci/pennylane-classification (Apologies if I cut the original code here and there — it's only for example purposes.)

I think having such a module (whose name and scope may very well change upon your suggestions) would lower the barrier to create hybrid classical-quantum classifiers. I would love to get some feedback!

Nicola


Hi @nvitucci!

Welcome to the Xanadu forum and thanks for this suggestion. Contributions and feedback regarding PennyLane are always welcome, and everyone is encouraged to join in.

We’re discussing your suggestion internally and will get back to you as soon as possible. :slight_smile:

Hey @nvitucci,

Thanks for providing these details, it’s great that you’re interested in lowering the barrier for hybrid ML in PennyLane!

We think that the best way to proceed is to start working on a small initial feature and keep iterating as we go. In particular, it would be interesting for users to have access to commonly-used cost functions in ML. For example, perhaps we could add an MSECost class similar to the current VQECost that allows users to combine an ansatz circuit with some target observables and then derive a cost function from their expectation values. An example flow would look like:

def ansatz(weights, x=None, **kwargs):
    qml.AngleEmbedding(x, wires=[0, 1, 2])
    qml.templates.StronglyEntanglingLayers(weights, wires=[0, 1, 2])

observables = [qml.PauliZ(0), qml.PauliX(0), qml.PauliZ(1) @ qml.PauliZ(2)]

>>> cost = qml.qnn.MSECost(ansatz, observables, device)
>>> cost(weights, x=x, y=y)
0.54657

This feature would mainly involve wrapping around the existing map() function but would be a useful addition for regression/classification.
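Stripping away the quantum parts, a minimal sketch of what such a wrapper would compute might look like the following (NumPy-only; `mse_cost` and its signature are hypothetical, and plain callables stand in for a QNodeCollection):

```python
import numpy as np

def mse_cost(qnodes, weights, x, y):
    """Hypothetical sketch of an MSECost-style wrapper: evaluate each
    (ansatz, observable) pair to get a vector of expectation values,
    then return the mean squared error against the targets y.

    `qnodes` is any sequence of callables standing in for the
    QNodeCollection that map() would produce.
    """
    predictions = np.array([q(weights, x=x) for q in qnodes])
    return np.mean((predictions - np.asarray(y)) ** 2)
```

With toy callables in place of QNodes, e.g. `[lambda w, x=None: w * x, lambda w, x=None: w + x]`, the cost is just the mean of the squared residuals against `y`.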

However, we're also interested to hear any ideas you have along these lines. Our preference is to start out focusing on core, flexible features such as the one above, and it would be great to interface with QNodeCollections to give some flexibility on the chosen observables.

I also wanted to point out that there is a qnn module in the latest version of PennyLane (https://pennylane.readthedocs.io/en/latest/code/qml_qnn.html). This module is currently targeted at converting PennyLane QNodes to Keras and PyTorch layers, but something like MSECost could go there as well.

We can of course discuss more here, but the next step to get started is to make a PR on our PennyLane GitHub repo as outlined here.

Thanks and don’t hesitate to ask any questions!
Tom

Hi Tom,

Thanks for your reply! I think it makes sense to start small, and this feature will definitely be useful beyond classification purposes. I have a few questions:

  • Would you then recommend that I fork the main codebase and work there directly, submitting PRs when all the requirements are met?
  • Based on the name of the class, are you thinking of adding more cost functions (say, RMSECost) in the future? In this case, would it make sense to think of a Cost (abstract?) class?
  • I see that VQECost is in its own vqe module, while your example (as you also suggest below) uses the qnn module. What would you suggest:
    1. create a cost package parallel to vqe;
    2. add MSECost in a cost module within qnn;
    3. add MSECost to a new subpackage (e.g. qnn.cost).
  • Given that qnn contains only a Keras converter at the moment, would it make sense to start thinking of a PyTorch converter to add to qnn as well?

Thanks,

Nicola

Hi @nvitucci,

That’s great, I’m excited to see this moving forward!

In response to your questions:

  • Yes, you can create your own fork of PennyLane and make a PR from your fork to the master branch of PennyLane. I’d recommend making a draft PR quite early in the process to open up for feedback.
  • That’s a good question! It might be nice in future to add more cost functions like this and a Cost base class might make sense. For now, I’d recommend focusing on the core MSECost feature and then we can evaluate what parts might need to be abstracted after things have fallen into place.
  • It’s always interesting deciding where things should live! For me, the qnn module would be a good home for this feature since qnn should contain more high level ML tools like MSECost. To do this you can add a cost.py module within the pennylane/qnn/ folder of the repo. Of course, other PL devs might have a different opinion, but we can discuss more during code review!
  • Great suggestion! We are going to add a TorchLayer feature soon; it is currently a PR that is about to be merged. However, it's great that you're thinking of new content, and it would be good to share any more ideas you have while adding MSECost.

Thanks and let me know if you need any more help!
Tom

@nvitucci, regarding the location of MSECost:

For the initial draft PR, probably best to keep it confined to its own file mse.py. This provides us the flexibility to decide where it goes and easily move it when required, whether this is pennylane/cost/mse.py or pennylane/qnn/mse.py :slightly_smiling_face:


Hi Tom, Josh,

Thanks for the feedback! Also, it’s great to know that PyTorch is being integrated even more :slight_smile:

I will start with a draft PR then. Do you have any specific requirements or preferences for a draft?

Nicola

That’s great @nvitucci!

Nope, you are free to make a draft PR at any point, e.g., even with just an empty mse.py file to begin with. The only thing you need to do is set it as a draft PR. I sometimes add "WIP" to the start of the title, as well as the WIP tag, to really emphasize it. This way we can help with feedback on the code as you develop.

Thanks!

Hi Tom,

I have just raised a draft PR as you suggested. I had a doubt about what the MSE "part" ought to be, but I thought it would be easier to discuss over the actual code :slight_smile:

Besides the actual values in the tests, is this more or less what you had in mind?

Thanks @nvitucci, that’s awesome! From my quick check it looks like what we had in mind. We’ll have a closer look through and leave some comments on the PR!

Great PR @nvitucci! Thanks for the contribution :rocket:

It would be great to keep going with a follow up addition. We’d like to suggest the following two ideas:

  1. Provide quantum-aware optimizers for the PyTorch and TensorFlow interfaces. Currently, PennyLane offers a selection of optimizers for the NumPy interface. Most of these optimizers, such as AdamOptimizer, already have a similar version in PyTorch/TensorFlow. However, we also provide QNGOptimizer, RotosolveOptimizer, and RotoselectOptimizer, which are quantum-aware in the sense that they use information about the quantum circuit during optimization. It would be great to provide these optimizers for users interacting with the PyTorch and TensorFlow interfaces. There is already a discussion and a prototype here for a bit more context.

  2. Provide decompositions of qubit Hamiltonian matrices into linear combinations of Pauli matrices. This is quite a useful feature for breaking down arbitrary measurements into ones realizable on hardware and should be relatively simple. There is already a PR here looking at this but it looks like it’s lost momentum, so it would be good to start afresh with a new PR and get this feature across the line.

The first idea requires a bit of thinking and likely some back and forth of ideas and prototypes with us, so potentially we could kick-off both suggestions at the same time, doing the easier second one while thinking about the design for the first.
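As background for the first idea, the key trick behind Rotosolve is that for gates of the form exp(-iθP/2) the cost restricted to a single parameter is sinusoidal, C(θ) = A sin(θ + B) + C₀, so the minimizing θ can be found in closed form from three cost evaluations. A minimal single-parameter sketch (not the PennyLane implementation, just the update rule):

```python
import numpy as np

def rotosolve_step(cost):
    """One Rotosolve update for a single parameter, assuming the cost is
    sinusoidal in that parameter: C(theta) = A * sin(theta + B) + C0.

    Three evaluations determine the phase B via
        2*C(0) - C(pi/2) - C(-pi/2) = 2A sin(B)
        C(pi/2) - C(-pi/2)          = 2A cos(B)
    and the closed-form minimizer is theta* = -pi/2 - B.
    """
    c0, cp, cm = cost(0.0), cost(np.pi / 2), cost(-np.pi / 2)
    B = np.arctan2(2.0 * c0 - cp - cm, cp - cm)
    theta_star = -np.pi / 2 - B
    # Wrap the result into (-pi, pi]
    return theta_star - 2 * np.pi * np.floor((theta_star + np.pi) / (2 * np.pi))
```

For a multi-parameter circuit, this step is applied to each parameter in turn while the others are held fixed; porting it to PyTorch/TensorFlow would mainly be a question of where the three circuit evaluations plug into each framework's optimizer interface.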

Thanks and happy to discuss more if not clear!
Tom

Thanks @Tom_Bromley, my pleasure!

Now, coming to your suggestions:

  1. Provide quantum-aware optimizers for the PyTorch and TensorFlow interfaces.

This one is quite exciting! As you said, I first need to read carefully through the code and then start designing. I will do that in the next few days.

  2. Provide decompositions of qubit Hamiltonian matrices into linear combinations of Pauli matrices.

As I understand it, you want a method that, given this:

array([[0.5, 0. , 0. , 0.5],
       [0. , 0. , 0. , 0. ],
       [0. , 0. , 0. , 0. ],
       [0.5, 0. , 0. , 0.5]])

will return something like this:

0.5 * (
    np.kron(0.5 * (I + Z), 0.5 * (I + Z)) + 
    np.kron(0.5 * (I - Z), 0.5 * (I - Z)) + 
    np.kron(0.5 * (I + Z) @ X, 0.5 * (I + Z) @ X) + 
    np.kron(0.5 * (I - Z) @ X, 0.5 * (I - Z) @ X)
)

in a suitable format. Is that correct? Do you already have some kind of requirements or would we start with a clean slate?
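For reference, the decomposition can be computed with the Hilbert-Schmidt inner product, c_{PQ} = Tr((P ⊗ Q) H) / 4 for each pair of Pauli matrices. A NumPy sketch (not the API of the linked PR, just an illustration of the math):

```python
import numpy as np
from itertools import product

# Single-qubit Pauli matrices
paulis = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_decompose(H):
    """Decompose a 4x4 Hermitian matrix into a linear combination of
    two-qubit Pauli terms via the Hilbert-Schmidt inner product:
    c_{PQ} = Tr((P kron Q) @ H) / 4. Returns {label: real coefficient}.
    """
    coeffs = {}
    for (a, P), (b, Q) in product(paulis.items(), repeat=2):
        c = np.trace(np.kron(P, Q) @ H) / 4.0
        if abs(c) > 1e-12:
            coeffs[a + b] = c.real
    return coeffs
```

Applied to the matrix above, this gives 0.25 * (II + XX − YY + ZZ), i.e. the projector onto the Bell state (|00⟩ + |11⟩)/√2, which matches the projector expansion I wrote.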