Accessing samples from qml optimizers

Hi,
One can run qml.ExpvalCost() and then access the produced samples through the device's private _samples attribute.
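For reference, here is a rough sketch of what I mean (the two-qubit ansatz and the Hamiltonian are just placeholders, and I'm assuming a device with finite shots so that the private `_samples` attribute actually gets filled; exact signatures may differ between PennyLane versions):

```python
import pennylane as qml
from pennylane import numpy as np

# Finite shots so the device generates samples
# (older versions may also need analytic=False)
dev = qml.device("default.qubit", wires=2, shots=1000)

def ansatz(params, wires, **kwargs):
    qml.RY(params[0], wires=wires[0])
    qml.CNOT(wires=[wires[0], wires[1]])

# Placeholder Hamiltonian, just for illustration
H = qml.Hamiltonian([1.0], [qml.PauliZ(0) @ qml.PauliZ(1)])

cost = qml.ExpvalCost(ansatz, H, dev)
cost(np.array([0.1]))   # evaluating the cost populates the device samples
samples = dev._samples  # raw bit strings from the last execution
```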

In order to compute a gradient, the qml optimizers perform two circuit evaluations per parameter.
For example, in the case of the one-dimensional parameter-shift rule, the circuit is evaluated at theta + s and theta - s, where theta is a scalar parameter and s is the shift.
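Concretely, for a Pauli-rotation parameter the standard two-term rule is

$$
\frac{\partial \langle C \rangle}{\partial \theta} = \frac{C(\theta + s) - C(\theta - s)}{2 \sin s},
$$

with $s = \pi/2$ for the usual RX/RY/RZ gates.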

I would like to store the samples that were generated for theta + s and theta - s. How can I do this?

I imagine it should be something like

```python
cost(theta + s)            # evaluate the circuit at theta + s
samples_1 = dev._samples   # samples generated for theta + s
cost(theta - s)            # evaluate the circuit at theta - s
samples_2 = dev._samples   # samples generated for theta - s
```

However, I don’t want to generate the samples a second time; I want to reuse the samples that were already produced during the gradient computation.

Hi @Einar_Gabbassov,

If I understand correctly, you wish to run an optimization using the parameter-shift rule and, at the same time, collect the intermediate samples it generates. Is that correct?

Unfortunately, I believe that might be difficult without changing the implementation, since all the calculations and processing of the samples happen internally and the samples aren’t stored anywhere. The only solution I can think of would be to implement your own parameter-shift gradient, store the samples generated for each shift, and then supply that function to the optimizer as a custom gradient via the grad_fn keyword of its step method. I will double-check with some of my colleagues to see if there is another workaround for this.
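To make that a bit more concrete, here is a rough sketch of the kind of thing I mean (the names `shift_grad` and `stored_samples` are just illustrative, it assumes a single scalar parameter with a Pauli-rotation gate, and it relies on the private `dev._samples` attribute you mentioned, which may change between releases):

```python
import pennylane as qml
from pennylane import numpy as np

# Finite shots so the device generates samples
# (older versions may also need analytic=False)
dev = qml.device("default.qubit", wires=1, shots=1000)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

stored_samples = []  # collects (samples at theta + s, samples at theta - s) per step

def shift_grad(theta, s=np.pi / 2):
    """Two-term parameter-shift gradient that also records the device samples."""
    plus = circuit(theta + s)
    samples_plus = dev._samples    # samples generated for theta + s
    minus = circuit(theta - s)
    samples_minus = dev._samples   # samples generated for theta - s
    stored_samples.append((samples_plus, samples_minus))
    return (plus - minus) / (2 * np.sin(s))

opt = qml.GradientDescentOptimizer(stepsize=0.1)
theta = np.array(0.3, requires_grad=True)

for _ in range(5):
    # Pass the custom gradient via grad_fn so the optimizer uses our shift evaluations
    theta = opt.step(circuit, theta, grad_fn=shift_grad)
```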

Let me know if you have any more questions regarding this (or any other issues or thoughts you might have).

Hi Theodor,
Thank you for your reply.
Yes, I want to get intermediate samples (generated binary strings) from the gradient computation.

I think it would be a very useful feature, since the samples could then be reused for other optimization purposes.

Hi @Einar_Gabbassov! Yes, as @theodor mentioned, this is currently not possible using the built-in gradient logic in PennyLane. However, this is a useful feature to add, and something we will take into account while improving the gradient rules.

In the meantime, I think your main option would be to hand-code the gradient rule. For example, you could have a function that accepts a QNode and returns both the parameter-shift samples and the gradient (computed from the means of the shifted samples).
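Something roughly along these lines (only a sketch: it assumes a single scalar parameter, a single-qubit PauliZ measurement on wire 0, and the private `dev._samples` attribute mentioned above):

```python
from pennylane import numpy as np

def grad_with_samples(qnode, dev, theta, s=np.pi / 2):
    """Hypothetical helper: evaluate the two shifted circuits, keep the raw
    samples, and build the gradient from the sample means."""
    qnode(theta + s)
    samples_plus = dev._samples    # bit strings for theta + s
    qnode(theta - s)
    samples_minus = dev._samples   # bit strings for theta - s

    # For a single-qubit PauliZ measurement, map bits {0, 1} to eigenvalues {+1, -1}
    # and take the mean to recover the shifted expectation values.
    exp_plus = np.mean(1 - 2 * samples_plus[:, 0])
    exp_minus = np.mean(1 - 2 * samples_minus[:, 0])

    grad = (exp_plus - exp_minus) / (2 * np.sin(s))
    return grad, samples_plus, samples_minus
```

You could then wrap this in a small function that stashes the samples and returns only the gradient, and pass that as `grad_fn` to the optimizer's `step` method, as @theodor described above.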
