Hi! I successfully reproduced the demo GQE Training.
I’m interested in why the parameters for generating the excitation pool are formed exactly as follows:

```python
op_times = np.sort(np.array([-2**k for k in range(1, 5)] + [2**k for k in range(1, 5)]) / 160)
```
Where did the 160 and the powers of two come from, and how is this tied to the H2 molecule that GPT-QE was trained on?
Thanks a lot in advance!
Hi @aaanikit , welcome to the Forum!
The powers of two and the 160 come from the main reference paper: Kouhei Nakaji et al., “The generative quantum eigensolver (GQE) and its application for ground state search,” arXiv:2401.09253 (2024).
In Section 3 (Results), on page 9, you’ll see the equation for T, which includes the powers of two and the 160.
My colleague Joseph, one of the authors of the demo, shared some additional info:
The powers of 2 have just become a convention by now. Historically, they might have been chosen for memory efficiency and accurate calculations (as discussed here). But in this case, there doesn’t seem to be a deeper meaning to it: it looks like they just wanted log-spaced points for op_times,
and the 160 is just there to give good endpoints to the interval. Specifically, we get:

```python
op_times = [-0.1, -0.05, -0.025, -0.0125, 0.0125, 0.025, 0.05, 0.1]
```

There doesn’t seem to be any special physical meaning in their choice of op_times when calculating

```python
double_excs = [qml.DoubleExcitation(time, wires=double) for double in doubles for time in op_times]
single_excs = [qml.SingleExcitation(time, wires=single) for single in singles for time in op_times]
```

in the demo.
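As a quick sanity check (a minimal sketch using only NumPy; it just evaluates the expression from your question), you can reproduce the op_times values quoted above:

```python
import numpy as np

# Signed powers of two, 2**1 through 2**4, scaled by 1/160 and sorted.
# The endpoints work out to ±16/160 = ±0.1, matching the interval above.
op_times = np.sort(
    np.array([-2**k for k in range(1, 5)] + [2**k for k in range(1, 5)]) / 160
)
print(op_times.tolist())
# → [-0.1, -0.05, -0.025, -0.0125, 0.0125, 0.025, 0.05, 0.1]
```

So each excitation in the pool appears with 8 different evolution times, log-spaced in magnitude between 0.0125 and 0.1 in both signs.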
I hope this helps clarify things!