According to your code below, every wire in every layer has a gate. But in the picture below, at ParametrizedLayer_0, I see that RZ gates are applied only on wires 0 and 1. Is there a conflict between the code and the picture?
Is it possible to use the QNG algorithm as a supervised learning algorithm?
I mean, I have a dataset with 90K rows, where each row has five feature columns and a sixth column with the target value. So in your opinion, how should I modify the "for loop" below from the QNG example?
```python
opt = qml.QNGOptimizer(0.01)
theta = init_params
for _ in range(steps):
    theta = opt.step(circuit, theta)
```
Note that the quantum natural gradient is an optimization strategy, and not part of the model per se. You can use any optimization strategy for a supervised learning algorithm, it’s simply a matter of constructing the cost function appropriately.
For an example of quantum supervised learning, check out our quantum transfer learning tutorial. There, we use the nn.CrossEntropyLoss cost function from PyTorch to train the model.
Something to be aware of, however, is that the QNG optimizer currently requires that the cost function be a linear combination of QNodes with the same ansatz. It's possible to extend this to more complicated cost functions, and this is something we are currently working on!
In general, the coefficients of the Hamiltonian in the model are problem specific. For example, in VQE, you have a specific molecule or electronic structure whose ground state energy you wish to compute; this corresponds to a specific Hamiltonian with hardcoded coefficients.
You can also check out our quantum chemistry demo for more details on solving electronic structure problems using PennyLane!
Having said that, there may be some problems/models where the exact Hamiltonian doesn't matter too much, and the coefficients can be chosen randomly. In such a case, any real value is allowed for the coefficients!