Hybrid neural network speed suggestions

Hi @James_Ellis!

Since that post, we have invested some time improving the built-in default.qubit simulator, and managed to get a speed improvement of roughly two orders of magnitude. So default.qubit will now be significantly faster than pyQVM :slight_smile:

With respect to the training time, the largest factor is the number of parameters in the quantum circuit. As PennyLane uses the parameter-shift rule to differentiate quantum nodes in a hardware-friendly manner, computing the gradient of all p parameters requires 2p quantum evaluations, so the cost of one gradient step scales as 2p\Delta t, where \Delta t is the time taken for a single forward pass/quantum simulation.
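To make that scaling concrete, here is a minimal sketch of the parameter-shift rule for a single Pauli-rotation parameter (the circuit and parameter value are just for illustration). Each parameter contributes two circuit evaluations, which is where the factor of 2p comes from:

    import numpy as np
    import pennylane as qml

    dev = qml.device("default.qubit", wires=1)

    @qml.qnode(dev)
    def circuit(theta):
        qml.RX(theta, wires=0)
        return qml.expval(qml.PauliZ(0))

    theta = 0.54
    s = np.pi / 2  # shift value for Pauli-rotation gates

    # Two evaluations per parameter: one shifted up, one shifted down
    grad = 0.5 * (circuit(theta + s) - circuit(theta - s))
    print(grad, -np.sin(theta))  # the two values agree

With p parameters, the gradient requires 2p such evaluations per optimization step, each taking roughly \Delta t.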

Some suggestions for improving the speed of training:

  1. PennyLane always treats positional QNode arguments as differentiable, and keyword arguments as non-differentiable. You may see some speed improvement if you change q_in to be a keyword argument:

    @qml.qnode(dev, interface="torch")
    def q_net(q_weights_flat, q_in=None):
    
  2. You could try a high-performance simulator, such as Qulacs; a rough device-swap sketch is included after this list. However, the PennyLane-Qulacs plugin is experimental, and needs more work to ensure its accuracy.

  3. Finally, a new experimental feature in the latest version of PennyLane is the PassthruQNode. Instead of using the parameter-shift rule, a PassthruQNode acts as a "white box": it passes tensors straight through to a compatible simulator, where gradients are computed via classical backpropagation.

    • Backpropagation computes the gradient of all parameters with only a constant overhead over a single forward pass, independent of the number of parameters (unlike the 2p evaluations required by the parameter-shift rule), but it is not hardware compatible.

    • The PassthruQNode currently only works with the default.tensor.tf simulator, which is written in TensorFlow, so it must be used with the TensorFlow interface.

    See this post for an example of the PassthruQNode being used; a rough sketch also follows below.
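For suggestion 2, swapping simulators only requires changing the device name when the device is created. A minimal sketch, assuming the PennyLane-Qulacs plugin is installed and registers a device called qulacs.simulator (please check the plugin documentation for the exact name):

    import pennylane as qml

    # assumes the PennyLane-Qulacs plugin is installed and exposes
    # a device named "qulacs.simulator"
    dev = qml.device("qulacs.simulator", wires=4)

The rest of your QNode code stays the same; only the device changes.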
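For suggestion 3, here is a rough sketch of what using a PassthruQNode might look like. This assumes the PassthruQNode class is importable from pennylane.qnodes and that the default.tensor.tf device is available with a working TensorFlow installation; treat the exact import path and device behaviour as assumptions and check the linked post and the docs for the canonical usage:

    import tensorflow as tf
    import pennylane as qml
    from pennylane.qnodes import PassthruQNode  # assumed import path

    # the TensorFlow-based simulator required by the PassthruQNode
    dev = qml.device("default.tensor.tf", wires=2)

    def circuit(weights):
        qml.RX(weights[0], wires=0)
        qml.RY(weights[1], wires=1)
        qml.CNOT(wires=[0, 1])
        return qml.expval(qml.PauliZ(0))

    qnode = PassthruQNode(circuit, dev)

    weights = tf.Variable([0.1, 0.2])

    # gradients come from TensorFlow backpropagation,
    # not from the parameter-shift rule
    with tf.GradientTape() as tape:
        out = qnode(weights)
    grad = tape.gradient(out, weights)

Since everything stays inside TensorFlow, the whole hybrid model can be trained end-to-end with standard TensorFlow optimizers.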