I am trying to reproduce the results of the Quantum Natural Gradient paper [1]. In this work, the authors introduce a method called quantum natural gradient (QNG) to optimize parametrized quantum circuits faster than with common optimizers such as Adam.
In Fig. 3 the authors clearly show that the QNG optimizer outperforms Adam and other optimizers: QNG reaches the ground-state energy after roughly 15 optimization iterations, while the other methods either fail to reach that accuracy or need almost 100 optimization steps.
I follow the implementation of the barren plateau circuit introduced in [2]. Although I am using the exact same hyperparameters as in the publication, I am unable to reproduce the results.
The plot shows the optimization trajectories as mean and variance for four randomly initialized barren plateau circuits with nine qubits and five layers.
QNG seems to perform similarly to Adam in minimizing the cost function of the quantum circuit.
The code to reproduce the figure can be found here.
I would appreciate it if someone could help me figure out if there is a mistake in the implementation or a conceptual mistake. Does someone have similar experiences with QNG?
PennyLane version: 0.24.0
[1] J. Stokes, J. Izaac, N. Killoran, and G. Carleo, Quantum Natural Gradient, Quantum 4, 269 (2020).
[2] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, Barren plateaus in quantum neural network training landscapes, Nature Communications 9, 4812 (2018).