Parallel training of samples in a batch

Hi PennyLane Team,

I’m trying to train a hybrid classical-quantum network for MNIST classification using PennyLane and PyTorch. I’m wondering if there is a way to process the samples in a batch in parallel to speed things up.

It seems to me that samples in a batch are processed sequentially, as in pennylane/qnn/:

    def forward(self, inputs):  # pylint: disable=arguments-differ
        """Evaluates a forward pass through the QNode based upon input data and the initialized
        weights.

        Args:
            inputs (tensor): data to be processed

        Returns:
            tensor: output data
        """
        if len(inputs.shape) == 1:
            return self._evaluate_qnode(inputs)

        return torch.stack([self._evaluate_qnode(x) for x in inputs])
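To make the sequential behaviour concrete, here is a minimal stdlib-only sketch (not PennyLane code) contrasting the per-sample loop above with a thread-pool variant. The function `evaluate_sample` is a placeholder standing in for `self._evaluate_qnode`; whether threads actually give a speedup depends on whether the underlying device releases the GIL (remote or hardware backends often do).

```python
from concurrent.futures import ThreadPoolExecutor


def evaluate_sample(x):
    # Placeholder for self._evaluate_qnode(x): any per-sample computation.
    return x * 2


def forward_sequential(batch):
    # Mirrors the loop in TorchLayer.forward: one sample at a time.
    return [evaluate_sample(x) for x in batch]


def forward_parallel(batch, max_workers=4):
    # Submits each sample to a thread pool; results come back in input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(evaluate_sample, batch))


batch = [1, 2, 3, 4]
assert forward_sequential(batch) == forward_parallel(batch)
```

Both variants return the same values; only the scheduling of the per-sample calls differs.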

Hi @hanruiwang,

Welcome to the forum and thank you for your question! :slightly_smiling_face:

Indeed, batches of samples are processed sequentially. As you suggest, the ability to process them in parallel would be great to have, and we would like to look into adding this in the future.

One thing to note is that some devices support batched execution when computing gradients (e.g., the PennyLane-Braket plugin), which allows the required quantum circuits to be executed in parallel.
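For reference, a sketch of how such a device might be constructed. The `parallel=True` keyword is the option exposed by the PennyLane-Braket plugin for parallel batched circuit execution; the device ARN, bucket, and prefix below are placeholders that you would replace with your own values, and running this requires configured AWS credentials:

```python
import pennylane as qml

# Placeholder S3 location and device ARN; substitute your own.
dev = qml.device(
    "braket.aws.qubit",
    device_arn="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
    s3_destination_folder=("my-bucket", "my-prefix"),
    wires=2,
    parallel=True,  # execute the circuits in a gradient batch in parallel
)
```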

Hi @antalszava,

Got it. Thanks a lot!