Hi PennyLane Team,
I’m trying to train a hybrid classical-quantum network for MNIST classification using PennyLane and PyTorch. I’m wondering if there is a way to process the samples in a batch in parallel to speed things up.
It seems to me that the samples in a batch are processed sequentially, as in pennylane/qnn/torch.py:302:
```python
def forward(self, inputs):  # pylint: disable=arguments-differ
    """Evaluates a forward pass through the QNode based upon input data and the initialized
    weights.

    Args:
        inputs (tensor): data to be processed

    Returns:
        tensor: output data
    """
    if len(inputs.shape) == 1:
        return self._evaluate_qnode(inputs)
    return torch.stack([self._evaluate_qnode(x) for x in inputs])
```
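To illustrate what I mean, here is a minimal pure-Python sketch (the `FakeQNodeLayer` class is hypothetical, not PennyLane code) that mimics this loop and counts one QNode evaluation per sample, so the cost grows linearly with batch size:

```python
class FakeQNodeLayer:
    """Hypothetical stand-in for a quantum layer, counting circuit evaluations."""

    def __init__(self):
        self.qnode_calls = 0

    def _evaluate_qnode(self, x):
        # Stand-in for executing the quantum circuit on ONE sample.
        self.qnode_calls += 1
        return [xi * 2 for xi in x]  # dummy "circuit output"

    def forward(self, inputs):
        # Mirrors the torch.stack([...]) loop above: the circuit is
        # evaluated once per sample, one after another.
        return [self._evaluate_qnode(x) for x in inputs]


layer = FakeQNodeLayer()
batch = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # batch of 3 samples
out = layer.forward(batch)
print(layer.qnode_calls)  # one circuit evaluation per sample in the batch
```

So with a batch size of 64 I would expect 64 sequential circuit executions per forward pass, which is the bottleneck I’d like to avoid.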