Transfer learning error

@_risto interesting results!

Too bad, was hoping for an advantage of the quantum one.

Yes, this is often the case when researching and playing around with QML models :slight_smile:

Hi @josh

I have also run the model on resnext101 and got the same results: the classical version performs better in terms of validation/test accuracy and training time.

Hey @_risto, as @josh mentioned this may often be the case for such prototypical models. It’d be interesting to scale up the width (number of qubits) and depth of the quantum element and see how things compare, but this unfortunately becomes a challenge for simulators.

Hi @Tom_Bromley

Yes, I agree. I tried scaling up, but the simulator just can’t take it. The maximum I could push was 16 qubits, and even that was very slow. Things might also be different on actual quantum computers; time will tell.
Btw, is the Xanadu team planning to do some demos on the topic of quantum reservoir computing (QRC) (https://www.nature.com/articles/s41534-019-0149-8/)? I believe the future of quantum computing lies in manipulating quantum data with quantum algorithms, rather than just applying quantum algorithms to classical data. QRC seems promising in that area, although the coding skills required are far beyond my humble knowledge.

Btw, is the Xanadu team planning to do some demos on the topic of quantum reservoir computing

Thanks for sharing, I’ll check it out. I’m not aware of any demos on this topic in the pipeline, though if you end up prototyping a solution yourself then please consider submitting it as a community demo - it’d be a great way to get people interested.

Hi @Tom_Bromley

Can I ask, would there be a way to send someone my two examples of resnet152 (classical & quantum version) just to check that all the parameters are actually the same (except the last layer)? Perhaps I have made a mistake and am doing an injustice to the developers and the whole Xanadu team?

Hey @_risto!

One option would be to turn your examples into nicely-presented notebooks and submit them as community demos (instructions here), if you think the content would be sufficiently different from the transfer learning demo. In my opinion, it doesn’t matter too much if the end result shows a fully classical network performing better; just making the comparison, and perhaps discussing why they differ, is still of interest.

Hi @Tom_Bromley

I have noticed that, when using the quantum simulator, almost all of the work is done on the CPU rather than the GPU, which slows down training/validation. When running a classical network, the GPU is used exclusively. Is there any explanation for this? I am playing with changing different parameters to see the results, but due to the CPU usage it takes much longer.
For example, if I run it on 32 qubits, my CPU usage goes up to 50% and GPU stays at 0%. I don’t understand why the simulator doesn’t use the GPU.
When I try 16 qubits, it starts, but then freezes at “Phase: train Epoch: 1/30 Iter: 8/62 Batch time: 143.9889” and both CPU and GPU show 0%.

Hey @_risto!

I have noticed that, when using the quantum simulator, almost all of the work is done on the CPU rather than the GPU, which slows down training/validation. When running a classical network, the GPU is used exclusively. Is there any explanation for this?

When using the torch interface or TorchLayer, the backend calculations for the quantum circuit are all performed using NumPy and then converted into Torch tensors, along with the gradient. This is done because Torch only recently introduced support for complex numbers.

We have a WIP update that will allow the full calculation pipeline to remain within PyTorch. Doing so will allow Torch tensors to live on the GPU and potentially make better use of it.

On the other hand, even when the full pipeline can remain on GPU, it is not guaranteed that the device will use the GPU as efficiently as possible. Using the full potential of the GPU is something that we’re thinking about for lightning.qubit, but I can’t give a firm timeline on when that would become available.

if I run it on 32 qubits

I’m impressed that you are able to push a quantum circuit to 32 qubits! Simulating that many qubits is quite challenging, and you may want to lower the number of qubits while prototyping the model.
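To give a sense of why 32 qubits is so demanding, here is a rough back-of-the-envelope sketch: a dense statevector simulator stores 2^n complex amplitudes, so memory grows exponentially with the number of qubits (assuming 16 bytes per complex128 amplitude).

```python
# Rough memory estimate for a dense statevector simulator:
# an n-qubit state has 2**n complex amplitudes, each taking
# 16 bytes as a complex128 number.
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (16, 24, 32):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:g} GiB")
# 32 qubits: 64 GiB
```

A 32-qubit state alone needs about 64 GiB just to hold the statevector, before accounting for gate application and gradients, which is consistent with the pagefile errors and freezes you are seeing.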

Thank you @Tom_Bromley

I changed the pagefile size to the maximum possible; after that I no longer got an error when running 32 qubits, but the system seems to freeze.
Just today I was looking at PyTorch Lightning - is this what you mean by lightning.qubit?
Also, I have noticed that in the classical ResNet the training accuracy is slightly higher than the validation accuracy, but in the quantum version it is the other way around. Does that imply that by its nature the quantum model is better at “guessing”?

Hey @_risto!

Just today I was looking at PyTorch Lightning - is this what you mean by lightning.qubit?

I am referring to our PennyLane-Lightning plugin, which is a C++ based device intended for high performance. It doesn’t currently support GPUs, but that is on the agenda.

Also, I have noticed that in the classical ResNet the training accuracy is slightly higher than the validation accuracy, but in the quantum version it is the other way around. Does that imply that by its nature the quantum model is better at “guessing”?

Perhaps for that specific model it could be said that the quantum element is helping to avoid overfitting, though I’m not aware that this is a phenomenon that holds in general.

Hi @Tom_Bromley

When I try to run it I get “DeviceError: Device does not exist. Make sure the required plugin is installed.” How do I install that?

Hey @_risto!

Here are the installation instructions - once installed make sure to restart your kernel.

I tried, but I get “ERROR: Failed building wheel for pennylane-lightning”, followed by a bunch of red lines and no installed package.

@_risto, apologies that it’s not working for you! Could you share which system you’re running on? One reliable way to share this info is to copy the output of:

import pennylane as qml
qml.about()

@Tom_Bromley

Here it is:

Hey @_risto! We do not yet have pre-built binaries for Python 3.9 available through pip install. Would you be able to create a new environment with Python 3.8? If you use Conda, you can follow the instructions here.
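A quick way to confirm which Python version your notebook kernel is actually running (a minimal sketch using only the standard library):

```python
import sys

# The failed wheel build above is consistent with running Python 3.9,
# for which pre-built lightning.qubit binaries were not yet available.
major, minor = sys.version_info[:2]
print(f"Python {major}.{minor}")
```

If this prints 3.9, switching the kernel to a 3.8 environment should let pip find a pre-built wheel instead of attempting (and failing) to build from source.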

Hi @Tom_Bromley

I get the error when I try to install the package.

In addition, is it possible to run the demo on pytorch-lightning?

Hey @_risto! Did you try to change your version of Python to 3.8? Unfortunately we do not yet have support for 3.9 and it looks like that is the version you are using.

In addition, is it possible to run the demo on pytorch-lightning?

It may well be possible to interface PennyLane with PyTorch-Lightning. Indeed, we have an open issue on GitHub discussing how we can provide a nice example, with transfer learning a likely candidate use case. However, we do not have a walkthrough just yet, so I’d be interested to see how it goes if you do try.

Also wanted to emphasize (e.g., to other readers of this post) that lightning.qubit is designed to be a fast backend for PennyLane and isn’t intended specifically as a complementary feature to PyTorch-Lightning.

Hi @Tom_Bromley, just want to make sure: currently a hybrid quantum model can only be trained on the CPU, right?