I was reading the post Quantum natural gradient | PennyLane Demos
One question: is this QNGOptimizer well suited for QLSTMs, like the one in [2009.01783] Quantum Long Short-Term Memory?
In that paper, and in every other QLSTM paper I've seen, people stick to classical optimizers like Adam.
Hi @MARTILLOTO , welcome to the Forum!
The QNGOptimizer is generally trickier to get working. In some cases you may need to supply a metric tensor yourself, and it may not work if your cost function returns more than one value.
I don’t know of any fundamental reason why they wouldn’t work together, although I may be missing something. I’d just say it’s easier to get things working with other optimizers such as Adam.
If you already have some code that works with Adam, you can try swapping in the QNGOptimizer, as in the sketch below. Alternatively, if you want to start with the QNG demo and change the code to a QLSTM, that could work too. If you choose this second option, make sure to note down any changes that you make to the original demo so that it's easier to debug or ask here if you run into issues later on.
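For concreteness, here's a minimal sketch of what that swap can look like, assuming your cost is a single QNode returning one expectation value. The two-qubit circuit, step size, and iteration count are all placeholders, not anything QLSTM-specific:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

# Placeholder circuit standing in for the QLSTM ansatz. QNGOptimizer
# needs the cost to be a single QNode returning one value so that it
# can construct the metric tensor itself.
@qml.qnode(dev)
def circuit(params):
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

params = np.array([0.1, 0.2], requires_grad=True)

# Before: opt = qml.AdamOptimizer(stepsize=0.01)
opt = qml.QNGOptimizer(stepsize=0.01)

for _ in range(100):
    params, loss = opt.step_and_cost(circuit, params)
```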
Let us know if you try it and how it goes!
Cool, thanks! I tried the first approach (substituting Adam with QNGO) but couldn't make it work. If I figure out how to do it I will come back to you.
@MARTILLOTO , the key with QNGO is noting that it uses a metric tensor, which you may need to supply yourself in case your cost function depends on anything more complicated than a single QNode (see the sketch below). I would also recommend avoiding batching, since this can complicate things too.
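For example, if your cost classically post-processes a QNode's output, QNGOptimizer can no longer infer the metric tensor on its own, but you can pass it in via the metric_tensor_fn argument of step. A sketch, reusing the circuit QNode from the earlier snippet (the squared-error cost is just a hypothetical example of post-processing):

```python
# Hypothetical cost that post-processes the QNode output; QNGOptimizer
# can't derive the metric tensor from this plain Python function.
def cost(params):
    return (circuit(params) - 0.5) ** 2

# Supply the metric tensor of the underlying circuit explicitly.
mt_fn = qml.metric_tensor(circuit, approx="block-diag")

opt = qml.QNGOptimizer(stepsize=0.01)
for _ in range(100):
    params = opt.step(cost, params, metric_tensor_fn=mt_fn)
```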
One option you can try is defining a gradient transform (e.g., by inheriting from the param-shift transform) that returns the quantum natural gradient, i.e., the pseudo-inverse of the metric tensor applied to the parameter-shift gradient: pinv(metric_tensor(tape)) @ quantum_gradient(tape). Then, in your QNode you can set diff_method=custom_qngrad and use standard GradientDescent optimization. This is not straightforward, but it's an option in case you want to try it. If it feels like too much machinery, you can also prototype the same update manually, as in the sketch below.
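Here's a minimal sketch of that manual version: compute the gradient and the metric tensor separately, then apply the pseudo-inverse yourself. The toy two-qubit QNode is again just a stand-in for the QLSTM circuit, and the step size and iteration count are placeholders:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, diff_method="parameter-shift")
def circuit(params):
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

eta = 0.05  # step size (placeholder)
params = np.array([0.1, 0.2], requires_grad=True)

for _ in range(100):
    grad = qml.grad(circuit)(params)                             # parameter-shift gradient
    g = qml.metric_tensor(circuit, approx="block-diag")(params)  # Fubini-Study metric tensor
    # Natural-gradient step: theta <- theta - eta * pinv(g) @ grad
    params = params - eta * np.linalg.pinv(g) @ grad
```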
If you’re struggling to make it work, feel free to share a minimal but self-contained version of your code here and I can take a look.