In this notebook we have four layers; the number of layers is determined by the number of rows in the weights.
My question is: in classical neural networks, the gradient of the loss at each layer is calculated from the layer that follows it ("backpropagation"). In this example the layers are arranged sequentially, so how does the autograd function apply backpropagation in this context?
@josh
Hi @kareem_essafty. In PennyLane, the autograd library (or PyTorch, or TensorFlow, depending on the interface chosen) performs the classical backpropagation — that is, the backpropagation through the classical parts of the computation.
When the backpropagation arrives at a quantum component such as the QNode, PennyLane then takes over and queries the device directly to determine the quantum gradient (for example, via the parameter-shift rule). We cannot do a 'quantum backpropagation' through the quantum circuit, as we do not have the ability to 'view' the quantum state; the only outputs we receive are expectation values.
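For concreteness, here is a minimal sketch of that hybrid gradient flow. The shape `(4, 2)` (four layers on two wires), the `BasicEntanglerLayers` template, and the target value `0.5` are illustrative assumptions, not the notebook's exact circuit:

```python
import pennylane as qml
from pennylane import numpy as np  # autograd-wrapped NumPy

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(weights):
    # Each row of `weights` parametrizes one layer, so a (4, 2)
    # array gives four layers acting on two wires.
    qml.templates.BasicEntanglerLayers(weights, wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

def cost(weights):
    # Classical post-processing: autograd backpropagates through this
    # part, then PennyLane takes over at the QNode boundary and asks
    # the device for the quantum gradient.
    return (circuit(weights) - 0.5) ** 2

weights = np.random.random(size=(4, 2), requires_grad=True)
print(qml.grad(cost)(weights))
```

From autograd's point of view the QNode is just a differentiable function; PennyLane supplies its gradient, and the chain rule stitches the classical and quantum pieces together.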
Note that the above assumes the QNode is bound to a hardware device. However, if the device is instead a simulator, it is possible to perform backpropagation directly through the quantum simulation. We are working on bringing this ability to PennyLane soon.
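For reference, in PennyLane versions where this has landed, simulator backpropagation is selected with `diff_method="backprop"` on devices such as `default.qubit`. A minimal sketch, assuming such a version:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

# "backprop" differentiates through the simulator's state directly,
# instead of querying the device for shifted expectation values.
@qml.qnode(dev, diff_method="backprop")
def circuit(weights):
    qml.templates.BasicEntanglerLayers(weights, wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

weights = np.random.random(size=(4, 2), requires_grad=True)
print(qml.grad(circuit)(weights))
```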