Mixed State GPU acceleration Pennylane

Hi,

I have been using the “default.mixed” backend to conduct error correction experiments, and I have been looking to speed up my circuits using GPUs. To my knowledge, there is no support for mixed-state GPU acceleration in PennyLane. I believe there is an IBM Qiskit backend that does support it, but this requires some workarounds for my work.

I checked out this post:

While it does say that there was an update with regard to backpropagation, I could not find any other information on implementation for anything other than backpropagation (such as operations). If this has not been implemented yet, I would like to take a crack at implementing backend changes to support GPU acceleration.

With Thanks,

Connor

Hi @connor_gambla ,

As you noticed, mixed-state GPU acceleration is not currently available in PennyLane. I’m not aware of whether there’s an IBM backend that supports this.

Regarding the post that you mention, backpropagation is indeed available for PennyLane’s default.mixed device.

Regarding your proposal for implementing backend changes to support GPU acceleration, I think this might be quite a big endeavour. If you want to try it, I think the best would be to consider this as an external library based on PennyLane, that basically acts as an optional plugin. This way you can work on it independently and you wouldn’t have to worry about merging it into PennyLane, which is unlikely to work out at the moment.

If you decide to add a demo in your repo showing how to use this feature, we could then possibly market it on social media and add it to our community demos page! What do you think?

Awesome! Looking forward to seeing your progress on this.

Wow this is amazing, thanks for sharing it @connor_gambla !

We’ll take a deeper look at your code but from what I can see it could make a good community demo.

For the README it might be useful to add just some versioning details for anyone wanting to use this in the future:

  • Python version
  • JAX version
  • PennyLane version
  • Qiskit and Qiskit-aer versions
  • What kind of GPU you used for the tests

Currently working on it. I believe there may be some tensor support that was added to default.mixed, which would allow it to run with GPU acceleration with minimal changes. Hopefully I can come up with something, and I would be glad to let you know if I make any progress on it.

Best

Hi Catalina,

So I did a bit of digging in the documentation. It turns out that default.mixed supports JAX in its backend, which means it can support GPU acceleration! Though by no means as optimized as something like lightning, it still offers a significant speedup. We can use jax.jit to accelerate the circuit the same way we would with a state-vector backend. Below is the link to a repo that contains a brief demonstration. Let me know what you think of the repo:

Link to demonstration: GitHub - cwgambla/Accelerating-Mixed-State-Simulation-Pennylane-With-GPUs

As far as backend changes to default.mixed go, there seems to be little more I could do to further accelerate things, since the backend is already JAX-compatible. Building out a mixed-state equivalent of lightning seems like a great long-term project, but it does not seem like something I could do by myself, nor in a reasonable amount of time. However, if you are looking for additional assistance in building out that or any other feature, please feel free to reach out.

Thank you for your assistance, and I hope this helps.

Best,

Connor

Hi Catalina,

Just updated the repo with system details, including the versions of Python, JAX, PennyLane, Qiskit, and Qiskit-aer (even though those last two were not used), along with the GPU information. Let me know if there is anything else you need from me, and what the next steps will be.

Best,

Connor

You’ve essentially reached the limit of what default.mixed can do. Since it is already JAX-compatible, GPU acceleration using JAX is possible. Although jit is the primary route, it still involves dense density-matrix calculations with unappealing scaling. No lightning-style kernels, no secret CUDA operations.

Backend modifications won’t make much of a difference unless you redesign around structured noise or trajectories. Although it isn’t widely promoted or optimized, JAX-on-GPU *is* the mixed-state narrative in PennyLane at the moment.

Hi @connor_gambla ,

Thanks for sharing your latest analysis and for adding the package versions to your repo!
Building out a mixed state equivalent for lightning does seem like too big of a project, so I’d say that your current solution is probably the best approach in terms of effort vs speedup.

Looking at your package versions, I noticed that you’re not using the latest PennyLane version. If you want to upgrade to PennyLane v0.44 (the current stable version), note that it only supports Python 3.11 and above. You don’t need to update PennyLane, but I’m flagging this in case you do decide to update it.

Hi @sanro , welcome to the Forum!

I agree that keeping things simple is best. I would clarify that running default.mixed with the JAX interface on GPU is just one option, though. You also have the option of running it on CPU, and potentially with other interfaces, which may or may not be slower.
Of course if this specific configuration is what works best for you then that’s great!