Dear PennyLane team,
Last year I created this post. Basically, I tried to run a highly entangled 30-qubit circuit, and it took a couple of weeks to compute on a single core of an old laptop.
A few months ago, this circuit was tackled by NVIDIA engineers using cuQuantum and an A100 GPU. It took a few minutes to compute the circuit, plus some additional time to move the data.
I was wondering when cuQuantum will become part of the PennyLane library, and whether it will be exposed as a new device?
It is still not clear to me how to use the default.qubit and lightning.qubit devices with a GPU. I naively hope that with cuQuantum it will be as easy as setting some Python flag.
Note that the lightning.gpu device has adjoint differentiation support, so this will likely be the most performant method if you are planning to perform training/optimization with this device.
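For illustration, here is a minimal sketch of how one might select the lightning.gpu device and request adjoint differentiation for a small optimization. This assumes the pennylane-lightning-gpu plugin (and its cuQuantum dependency) is installed; the circuit, wire count, and hyperparameters are purely illustrative:

```python
import pennylane as qml
from pennylane import numpy as np

# Assumes the pennylane-lightning-gpu plugin is installed;
# the 4-wire circuit below is just a toy example.
dev = qml.device("lightning.gpu", wires=4)

@qml.qnode(dev, diff_method="adjoint")  # adjoint differentiation on the GPU state vector
def circuit(params):
    for i in range(4):
        qml.RY(params[i], wires=i)
    for i in range(3):
        qml.CNOT(wires=[i, i + 1])
    return qml.expval(qml.PauliZ(0))

params = np.array([0.1, 0.2, 0.3, 0.4], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.1)

for step in range(20):
    params = opt.step(circuit, params)

print(circuit(params))
```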
Hello, I am using pennylane-lightning-gpu v0.23.0, but the optimizer doesn't work. When I install PennyLane v0.20.0, the optimizer does work. What can I do to make pennylane-lightning-gpu v0.23.0 work? Thanks!
Hi @shangshang_shi! Since this is a new question relating to lightning.gpu, do you mind creating a new post on the forum? This will ensure that it has high visibility and allow someone to help out sooner.
When you do, it would be great to also include a short snippet showing the optimization that wasn’t working with lightning.gpu, as well as the error message you are getting. Thanks!