I wanted to ask if there are any plans to add support for `lightning.qubit` on PyTorch? At the moment I use `backprop`. I switched from `parameter-shift`, since `backprop` is of course miles faster by comparison. However, using `backprop` makes forward evaluation of my circuit slower, and I suspect forward evaluation would be faster with `lightning.qubit`. In some setups (when there aren't many parameters), the gain I make by using `backprop` is lost to the slower forward quantum circuit evaluations.
Following Backpropagation with Pytorch, I tried `adjoint`; however, some operations I use are not supported by `adjoint`, so that's not a valid option for my use case.
To roughly illustrate the idea, here are combined run times over 1000 iterations for a simple 2-qubit circuit with 3 learnable parameters:
- Circuit forward evaluation: 3s-4s
- `.backward()`: 13s-15s
- Circuit forward evaluation: 2s
- `.backward()`: 7s-9s
- Circuit forward evaluation: 6s-7s
- `.backward()`: 1s
So it is clear that there is some overhead in forward evaluation because of `backprop`; however, I am wondering what this overhead would look like for `lightning.qubit`.
P.S. I also noticed that using `backprop` on PyTorch (as well as `default.qubit.torch`) gives the following warning message:
```
…/lib/python3.8/site-packages/torch/autograd/__init__.py:154: UserWarning: Casting complex values to real discards the imaginary part (Triggered internally at …/aten/src/ATen/native/Copy.cpp:244.)
```
Is this supposed to be happening?