Comparing quantum natural gradient with Adam

Hi,

I am trying to reproduce the results in the Quantum Natural Gradient paper [1]. In this work, the authors introduce a novel method, the quantum natural gradient (QNG), to optimize parametrized quantum circuits faster than common optimizers such as Adam.

In Fig. 3 the authors clearly show that the QNG optimizer outperforms Adam and other optimizers: QNG reaches the ground-state energy after only a few optimization iterations (approx. 15), whereas the other methods either fail to reach that accuracy or need almost 100 optimization steps.

I follow the implementation of the barren plateau circuit introduced in [2]. Although I am using the exact same hyperparameters as in the publication, I am unable to reproduce the results.

The plot below shows the optimization trajectories (mean and variance) for four randomly initialized barren plateau circuits with nine qubits and five layers.

[Figure: comparison of the QNG and Adam optimization trajectories (mean and variance over four random initializations)]

QNG seems to perform similarly to Adam in minimizing the cost function of the quantum circuit.

The code to reproduce the figure can be found here.
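
In condensed form, the comparison looks roughly like the sketch below. The ansatz here is a simplified stand-in for the randomized barren plateau circuits of [2], and the step sizes and metric-tensor regularization are illustrative placeholders rather than my exact hyperparameters; the full code is in the link above.

```python
import pennylane as qml
from pennylane import numpy as np
import numpy as onp  # plain NumPy for the fixed random circuit structure

n_qubits, n_layers = 9, 5
dev = qml.device("default.qubit", wires=n_qubits)

# Fix a random rotation axis (X, Y or Z) for every gate, as in the circuits of [2]
onp.random.seed(42)
gate_choices = onp.random.randint(0, 3, size=(n_layers, n_qubits))

@qml.qnode(dev)
def circuit(params):
    for w in range(n_qubits):
        qml.RY(onp.pi / 4, wires=w)  # fixed initial layer
    for layer in range(n_layers):
        for w in range(n_qubits):
            rotation = (qml.RX, qml.RY, qml.RZ)[gate_choices[layer, w]]
            rotation(params[layer * n_qubits + w], wires=w)
        for w in range(n_qubits - 1):
            qml.CZ(wires=[w, w + 1])  # nearest-neighbour entangling layer
    # cost: <Z_0 Z_1>, as in the barren plateau paper
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

init_params = np.array(onp.random.uniform(0, 2 * onp.pi, n_layers * n_qubits),
                       requires_grad=True)

def optimize(opt, steps=100):
    """Run one optimizer from the shared initial point and record the cost."""
    params = init_params.copy()
    costs = []
    for _ in range(steps):
        params, cost = opt.step_and_cost(circuit, params)
        costs.append(cost)
    return costs

# Illustrative step sizes; `lam` adds a small regularization to the metric tensor
qng_costs = optimize(qml.QNGOptimizer(stepsize=0.01, lam=0.001))
adam_costs = optimize(qml.AdamOptimizer(stepsize=0.01))
```

Both optimizers start from the same random initial parameters, so the trajectories should be directly comparable.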

I would appreciate it if someone could help me figure out whether there is a mistake in my implementation or a conceptual misunderstanding. Has anyone had similar experiences with QNG?

macOS Monterey
PennyLane version: 0.24.0
Python 3.8.13

[1] J. Stokes, J. Izaac, N. Killoran, and G. Carleo, Quantum Natural Gradient, Quantum 4, 269 (2020).

[2] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, Barren plateaus in quantum neural network training landscapes, Nature Communications 9, 4812 (2018).

Hi @davidos. Welcome to the forum!

I’ve looked through your code but I’m not sure what is causing the results you’re getting. Have you tried running the demo here?

My recommendation would be to use this demo as your baseline. Please let me know if this is helpful or if you keep finding the same results.

Note: you may need to upgrade your version of PennyLane for the demo to run properly.

Hi Catalina,

I used the circuit presented in the tutorial. The code reproduces the results shown in the tutorial (I also added Adam to the comparison).

[Figure: reproduction of the tutorial's results, with Adam added to the comparison]

Using the tutorial as the baseline for my implementation also did not resolve the issue: I still find that QNG performs similarly to Adam.
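
For completeness, the optimization loop I run on top of the tutorial is essentially the following. The small QNode below is a hypothetical placeholder standing in for the tutorial's circuit, and the step sizes are illustrative; the relevant part is just how Adam is slotted in next to the tutorial's gradient descent and QNG optimizers.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

# Hypothetical minimal QNode standing in for the tutorial's circuit
@qml.qnode(dev)
def circuit(params):
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

init_params = np.array([0.1, 0.2], requires_grad=True)  # illustrative starting point

def run(opt, steps=200):
    """Optimize the circuit and record the cost after every step."""
    params = init_params.copy()
    costs = []
    for _ in range(steps):
        params, cost = opt.step_and_cost(circuit, params)
        costs.append(cost)
    return costs

# The tutorial compares vanilla gradient descent with QNG; Adam is added on top
gd_costs = run(qml.GradientDescentOptimizer(stepsize=0.01))
qng_costs = run(qml.QNGOptimizer(stepsize=0.01))
adam_costs = run(qml.AdamOptimizer(stepsize=0.01))
```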

Thank you for the help.

Best,
David

Hi @davidos, thank you for sharing your results here.

I have three theories as to why you’re getting these results:

1 - You’re doing something wrong (although it’s hard to determine what it could be).
2 - Over the past two years, the development of PennyLane has reduced the advantage of using QNG.
3 - The advantage of QNG shown in the paper doesn’t necessarily carry over to other circuits (it’s very hard to rigorously show that one optimization method is universally better than all others: https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization).

It could very well be a combination of the three.

Please let me know if you have any further thoughts or questions!

Option 4 is that there could be a problem in the original paper. This can happen.

Hi @CatalinaAlbornoz,

Thanks for sharing your thoughts.

I looked into point 2, and it turns out that the Adam optimizer has improved quite a bit over the years. There is still some gap between QNG and Adam for this particular circuit, but it seems significantly smaller than with PennyLane version 0.6.0.

Thanks again for the help!

Hi @davidos,

Your analysis is great! Thanks for sharing this graph. It shows a very different picture.