PennyLane and PyTorch running on GPU

Hey @Shawn,

I checked out the TorchLayer class – is it an open question whether it could run on a GPU?

Right now I’d say that it either doesn’t work on GPU or doesn’t yet work reliably on GPU, so I’d suggest not using GPUs with it for now. We haven’t prioritized GPU support yet since we’ve been focusing on the core functionality, so we’re really relying on feedback from users such as yourself and @mamadpierre in this post.
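For reference, here’s a minimal sketch of the kind of thing you could try. The circuit, weight shapes, and use of `default.qubit` are just placeholder assumptions for illustration, and the `.to("cuda")` step is exactly the part that may error out or quietly keep running on the CPU, since the simulator itself runs on the host:

```python
import torch
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def qnode(inputs, weights):
    # Placeholder circuit purely for illustration
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (3, n_qubits)}
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

model = torch.nn.Sequential(qlayer, torch.nn.Linear(n_qubits, 1))

if torch.cuda.is_available():
    # This is the untested step: the classical layers move to the GPU,
    # but the quantum simulation may fail or stay on the CPU regardless.
    model = model.to("cuda")
    x = torch.rand(4, n_qubits, device="cuda")
    print(model(x))
```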

If so, is it in the pipeline to make the class work reliably with GPUs?

It’s good to know that there’s interest in this feature, and we can add it to our to-do list, though I can’t make any promises on when it will be available. As a side note, if you’re interested, we always welcome contributors, and this could be a nice, well-specified thing to add.

To your last comment, will there be an option for GPU-based devices for continuous-variable systems soon? It just seems odd that the non-PennyLane code ran fine on both CPU and GPU, but the PennyLane code takes a very long time to run on a CPU.

I’d say the slowdown is due to:

  • Fundamentally, we’re simulating a quantum system, which on the strawberryfields.fock simulator scales exponentially with the number of modes (see the quick back-of-the-envelope sketch after this list). Unfortunately, this isn’t something we can really get around with simulators, but it’s a nice motivation for using hardware!

  • The code could be better optimized: we’re gradually adding performance improvements to elements of the code. For example, we’ve been implementing gates more efficiently in Strawberry Fields. I think there’s probably still room to optimize CPU performance before we concentrate on GPUs.
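
To make the first point concrete, here’s a rough back-of-the-envelope illustration (the cutoff of 10 is just an assumed value): the state tracked by the strawberryfields.fock device holds on the order of `cutoff_dim ** num_modes` Fock amplitudes, so each extra mode multiplies the memory and compute by the cutoff.

```python
# Back-of-the-envelope only: the Fock-basis state for the
# "strawberryfields.fock" device (created with a given cutoff_dim)
# has roughly cutoff_dim ** num_modes amplitudes, so simulation cost
# grows exponentially in the number of modes.
cutoff_dim = 10  # assumed cutoff, purely for illustration

for num_modes in range(1, 7):
    amplitudes = cutoff_dim ** num_modes
    print(f"{num_modes} mode(s): ~{amplitudes:,} Fock amplitudes")
```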

Thanks!
