Question on data parallelization

Hey @Daniel_Wang,

default.qubit.torch is indeed written using PyTorch. The thing to note, though, is that it's based on our old device API (default.qubit.legacy). For that reason, we don't recommend using it unless you need something the old device API has that the new one (default.qubit) doesn't. Using default.qubit will seamlessly switch you to the right backend :slight_smile:.

On GPU parallelization with PyTorch, I'm not 100% sure here. I'll have to ask internally! My suspicion is that you can't directly parallelize PennyLane-Torch code with PyTorch's data-parallel utilities (e.g., torch.nn.DataParallel), but I will double check that.