Variational Classifiers and QNGOptimizer

Hey @andrew,

If you’re interested, I have found an edge case that occurs (I think) when 0 is in x. Then g can contain a 0 on the diagonal, and np.linalg.solve will fail. I’m working around this by simply skipping the calculation (continue-ing over it) whenever 0 is in x, though I plan on either finding a different way to compute it or shifting each 0 value slightly so it becomes a small positive number.
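Roughly, the shift I have in mind would look something like this (just a sketch; eps is an arbitrary small offset, not a tuned value):

```python
import numpy as np

eps = 1e-6  # arbitrary small offset, not a tuned value

x = np.array([0.0, 0.3, 1.2])
# replace exact zeros so the metric tensor diagonal stays nonzero
x_shifted = np.where(x == 0, eps, x)
```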

Good spot! Yes, I believe there can be issues with inverting the metric tensor; the QNGOptimizer provides an optional lam argument to regularize the matrix.
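For example, something along these lines (a minimal sketch; the circuit and the hyperparameter values are just placeholders):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    qml.RY(params[0], wires=0)
    qml.RX(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

params = np.array([0.1, 0.2], requires_grad=True)

# lam adds a small multiple of the identity to the metric tensor
# before it is inverted, which avoids singular-matrix errors
opt = qml.QNGOptimizer(stepsize=0.01, lam=0.01)

for _ in range(10):
    params = opt.step(circuit, params)
```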

I am curious about how you are able to call overall_grad = quantum_grad(theta, x) inside of cost_ng. I was under the impression that a gradient cannot be calculated inside a cost function unless you move to the PyTorch backend (which I have done for another thing I’m trying). Does this not apply here because this isn’t the cost function, but the gradient function itself?

Yes, that constraint comes from Autograd, which is the default interface in PennyLane. However, when we pass a callable to the grad_fn argument of a PennyLane optimizer method, we skip the need to evaluate the gradient with Autograd, since the optimizer simply calls the function passed through grad_fn. You can see this by checking out, for example, the source of the GradientDescentOptimizer.
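As a rough illustration (the cost and gradient here are placeholders and have nothing to do with your classifier):

```python
import pennylane as qml
from pennylane import numpy as np

def cost(theta):
    # placeholder cost with a closed-form gradient
    return np.sum(np.sin(theta) ** 2)

def my_grad(theta):
    # hand-written gradient; the optimizer calls this directly
    # instead of asking Autograd to differentiate `cost`
    return 2 * np.sin(theta) * np.cos(theta)

theta = np.array([0.1, 0.5], requires_grad=True)

opt = qml.GradientDescentOptimizer(stepsize=0.1)
theta = opt.step(cost, theta, grad_fn=my_grad)
```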

I see. So, if I moved to an architecture/training setup where batch_size=0 and I removed the bias term, this would be considered a purely QNG problem?

The QNGOptimizer should work with anything that is directly the output of a QNode or VQECost object. If there is classical postprocessing, things get a bit more complicated. So yes, in this case I believe that removing the bias, the batch average, and the squared-error postprocessing should work. However, this also makes the model more restricted.
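For a single data point, that could look roughly like this (a sketch only; the circuit, the hard-coded data point, and the hyperparameters are placeholders, not your actual model):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

x_point = 0.5  # one fixed data point, hard-coded for this sketch

@qml.qnode(dev)
def circuit(params):
    # data encoding followed by trainable rotations
    qml.RY(x_point, wires=0)
    qml.RZ(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

params = np.array([0.1, 0.2], requires_grad=True)

# no bias, no batch average, no squared-error postprocessing:
# the cost handed to the optimizer is the raw QNode output
opt = qml.QNGOptimizer(stepsize=0.05, lam=0.01)

for _ in range(20):
    params = opt.step(circuit, params)
```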
