I have a QNode wrapping a circuit that needs to be executed repeatedly inside a larger, non-QNode objective function, something like objective = lambda xs: sum([circuit(x) for x in xs]). When I call grad on this, it is quite happy to run, but this looks like it is outside the scope of the docs. Should this work, or do I need a big refactor?

I’m not sure what the arguments to the gradient function should be, and everything I’ve tried throws an error (which could be closely related to issue 1).

My overall approach was to use two devices, one analytic and one noisy: pull the true gradient out for some fixed circuit parameters using qml.grad, and compare it to the distribution of noisy gradient estimates. The circuit can be optimized with respect to my objective function using a qml.Optimizer(), so the gradient is being computed correctly somewhere. If qml.grad proves too much of a struggle, is there access to the optimizer's internal gradient function for a given set of params? Thanks!

When I call grad on this, it is still quite happy to run, but this looks like it is outside of the scope of the docs

The qml.grad() function is a wrapper around autograd.grad(), which provides the autodifferentiable NumPy module that comes with PennyLane. So it natively supports classical gradients as well as quantum gradients!

However, it may be best to use NumPy functions where possible, so np.sum instead of sum.

I’m not sure what the arguments to the gradient function should be, and everything I’ve tried throws an error

The arguments to the gradient function should exactly match those of the cost function.

Thanks @Tom_Bromley and @josh, that has definitely helped a lot, and pushed me a couple steps further!

I’m now encountering a slightly different behaviour which is equally disconcerting. I’ll dump the whole code here: https://pastebin.com/AvwTtSxq

The behaviour that I’m finding is that despite the analytic device being specified with analytic=True, and with or without shots (or even a ridiculous number of shots), I find that calls to the analytic_gradient function produce different answers each time, despite the whole code being (in my eyes) deterministic.

If I use default.qubit instead of qiskit.aer for the analytic device, this error goes away, but I’m now a little unsure how the analytic=True or shots parameter is being picked up by Aer. Is it a fair comparison between the analytic gradient as identified by default.qubit and the ‘noisy’ gradient found using a noise model within qiskit.aer?

Thanks for the help guys, let me know if this question isn’t well-formed.

This is because, by default, the qiskit.aer plugin uses the QASM simulator backend (backend="qasm_simulator"). The QASM simulator does not support exact expectation value computation, since it only outputs stochastic samples.

In fact, you should be receiving a warning message for using analytic=True with the QASM simulator:

>>> analytic_dev = qml.device("qiskit.aer", wires=n_wires, analytic=True, shots=1000)
UserWarning: The analytic calculation of expectations, variances
and probabilities is only supported on statevector backends, not on
the qasm_simulator. Such statistics obtained from this device
are estimates based on samples.

Let me know if this warning message didn’t appear, there could be a bug in the PennyLane-Qiskit plugin!

To use analytic mode, you will instead need to use the statevector_simulator backend.

@josh, thanks so much for getting back. It looks like what I’m trying to do is perhaps a little less ‘native’ than I was hoping.

Sorry that this is now looking less like a PennyLane question, but you might have some insight nonetheless. Is there a reasonably easy way to compute the gradient with only the impact of noise? That is, something that computes the density matrix under the introduced noise model but doesn’t include any sampling error. The statevector_simulator with analytic=True won’t let me feed it a noise model, but I can’t decouple the effects of shot noise and environmental noise in QASM. Any ideas?

P.S. I don’t get that warning message, but the device instantiation is embedded within a function, so maybe the warning doesn’t propagate out, or I’m not using the most up-to-date PennyLane version.

Hey @milanleonard! It’s great that you mentioned this, since it’s something we are working on adding to core PennyLane: a set of related additions provides a mixed-state simulator, default.mixed, and access to noisy channels. The default.mixed device should let you work in analytic mode while switching to a mixed-state representation.

These features are quite new: we still need the final addition to be merged. Once that’s done, you’d be able to access it with qml.device("default.mixed", wires=n_wires).