I modified the code from this tutorial to try training a hybrid quantum-classical neural network model (default.qubit + Torch + PennyLane) on a GPU. However, I found that the training speed on the GPU is almost the same as on the CPU. Does anyone know what might be wrong with my code?
Using a GPU isn’t always faster. It can actually be slower! There are overheads when you use GPUs (for example, moving data between CPU and GPU memory and launching kernels), meaning that for some circuits it’s not worth it. As a general rule, if you have 20 or more qubits and deep circuits, then it makes sense to use a GPU.
You can see a really nice table with suggestions for when to use each device on our performance page.
The good news is, for small circuits your laptop is enough!
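If you want to check this on your own setup, here is a minimal timing sketch. It assumes PennyLane's `default.qubit` with the Torch interface and backprop differentiation, and that your PennyLane version carries out the simulation on whatever device the input tensors live on (behaviour can vary between versions, so treat it as an illustration, not a benchmark). The qubit count and step count are arbitrary choices; for a circuit this small, the CPU and GPU timings usually come out close, which matches what you're seeing.

```python
# Rough timing sketch: one small QNode trained with Torch tensors on CPU vs. GPU.
# Assumptions: PennyLane with the Torch interface installed, CUDA available for the GPU run.
import time
import torch
import pennylane as qml

n_qubits = 4  # small circuit: GPU overheads typically outweigh any gain here
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch", diff_method="backprop")
def circuit(weights):
    # Simple layered circuit: single-qubit rotations followed by a CNOT chain
    for i in range(n_qubits):
        qml.RY(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return qml.expval(qml.PauliZ(0))

def time_steps(device_str, steps=50):
    # Put the trainable parameters on the requested device and time forward + backward passes
    weights = torch.rand(n_qubits, requires_grad=True, device=device_str)
    start = time.perf_counter()
    for _ in range(steps):
        loss = circuit(weights)
        loss.backward()
    return time.perf_counter() - start

print("CPU:", time_steps("cpu"))
if torch.cuda.is_available():
    print("GPU:", time_steps("cuda"))
```

If you rerun this with many more qubits and layers, the balance should start to tip in the GPU's favour, in line with the rule of thumb above.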