Trouble with GPU Training Speed in Hybrid Quantum-Classical Model

I modified the code from this tutorial to train a hybrid quantum-classical neural network (default.qubit + Torch + PennyLane) on a GPU. However, the training speed on the GPU is almost the same as on the CPU. Does anyone know what might be wrong with my code?

cpu_vs_gpu.py (2.3 KB)
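For context, the setup is roughly the following. This is a minimal sketch rather than the attached script; the templates, layer sizes, and qubit count are placeholders:

```python
import pennylane as qml
import torch

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch", diff_method="backprop")
def circuit(inputs, weights):
    # Encode classical features, then apply a trainable entangling block
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# Wrap the QNode as a Torch layer between two classical layers
weight_shapes = {"weights": (2, n_qubits)}  # (n_layers, n_qubits) for BasicEntanglerLayers
model = torch.nn.Sequential(
    torch.nn.Linear(8, n_qubits),
    qml.qnn.TorchLayer(circuit, weight_shapes),
    torch.nn.Linear(n_qubits, 2),
)

# Moving the model to CUDA is the only difference between the CPU and GPU runs
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
```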

Hi @cyx617,

Using a GPU isn’t always faster. It can actually be slower! GPUs come with overheads, for example host-to-device data transfers and kernel launches, and for a circuit with only a few qubits the state vector is too small to keep the GPU busy, so those overheads dominate the runtime. As a general rule, GPUs start to pay off once you have roughly 20 or more qubits and deep circuits.
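If you want to see the crossover for yourself, you can time a forward-and-backward pass at increasing qubit counts on each device. This is a minimal sketch, assuming a recent PennyLane where default.qubit supports Torch backprop on CUDA tensors; the depth, repetition count, and qubit counts are arbitrary choices:

```python
import time
import pennylane as qml
import torch

def step_time(n_qubits, torch_device, n_layers=3, reps=20):
    """Average time of one forward + backward pass for a layered circuit."""
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev, interface="torch", diff_method="backprop")
    def circuit(weights):
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))

    shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
    weights = torch.randn(shape, device=torch_device, requires_grad=True)

    circuit(weights).backward()  # warm-up run, excluded from the timing

    start = time.perf_counter()
    for _ in range(reps):
        circuit(weights).backward()
    if torch_device == "cuda":
        torch.cuda.synchronize()  # flush queued GPU kernels before stopping the clock
    return (time.perf_counter() - start) / reps

for n in (4, 12, 18):
    line = f"{n} qubits | CPU: {step_time(n, 'cpu'):.4f} s"
    if torch.cuda.is_available():
        line += f" | GPU: {step_time(n, 'cuda'):.4f} s"
    print(line)
```

At small qubit counts the CPU usually wins or ties; only as the state vector grows does the GPU's parallelism start to outweigh its overheads.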

You can see a really nice table with suggestions of when to use each device in our performance page.

The good news is that for small circuits your laptop is enough!

Enjoy using PennyLane!

Hi @CatalinaAlbornoz ,

Thank you so much for your answer! I also really appreciate the link you shared. It’s very helpful for choosing the right backend!

I’m glad it helped @cyx617 !