I’m calling a PennyLane circuit multiple times in a for loop (for example, `[variational_circuit(x_i) for x_i in minibatch]`), and I’d like to vectorize this into something like `variational_circuit(minibatch)`, similar to how PyTorch can predict an entire batch at once.
I’m using the qulacs GPU simulator. Is there a way this can be done?
I suppose it depends what you mean by vectorization.
If you are referring to a feature in PennyLane that makes it easier to automate (and potentially parallelize) the process of specifying batch dimensions, then this doesn’t exist yet, but it is something on our radar to add! You should see some more movement on this in the next few releases.
Alternatively, if you mean vectorizing the underlying simulation, so that the variational circuit is simulated using a batch of wires/parameters (avoiding a slow Python for loop altogether), this is a really cool idea, and something we could also look at supporting in some form.
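To make the second option concrete, here is a minimal NumPy sketch (not PennyLane code, just an assumption of how a batched simulator might work internally) that applies a batch of RX gates to a batch of single-qubit states with one `einsum` call instead of a Python loop:

```python
import numpy as np

def rx_matrix(theta):
    # RX(theta) = [[cos(t/2), -i sin(t/2)], [-i sin(t/2), cos(t/2)]]
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

# batch of parameters -> batch of gates, shape (B, 2, 2)
thetas = np.array([0.1, 0.5, 1.0])
gates = np.stack([rx_matrix(t) for t in thetas])

# batch of single-qubit |0> states, shape (B, 2)
states = np.tile(np.array([1.0 + 0j, 0.0]), (len(thetas), 1))

# one einsum applies each gate to its own state: no Python loop over the batch
out = np.einsum("bij,bj->bi", gates, states)

# <Z> for each state in the batch
expvals = (np.abs(out[:, 0]) ** 2 - np.abs(out[:, 1]) ** 2)
```

The contraction `"bij,bj->bi"` performs one matrix-vector product per batch element in a single vectorized call, which is exactly the kind of operation a GPU simulator can parallelize.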
Thanks for the quick response! I’m interested in both, actually, but yes, my fundamental problem is that the for loop is deathly slow.
I’m trying to compare a classical WGAN-GP to a WGAN-GP where the generator is a hybrid quantum-classical network, and the hybrid quantum-classical network is much, much slower to train than the classical one, even though I’m really not using that many quantum parameters and wires. I suspect that the problem is that I loop over the mini-batch in the hybrid quantum-classical model, whereas the mini-batch operation is implemented as a single matrix-vector-type computation in the classical case. It would be amazing if something could be done to fix this in simulations.
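For reference, here is a small NumPy sketch (with made-up layer and batch sizes, just to illustrate the point) of why the classical mini-batch case is fast: the per-sample loop collapses into a single matrix-matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))           # weights of one classical layer
minibatch = rng.normal(size=(32, 8))  # 32 samples of dimension 8

# looped version: one matrix-vector product per sample
looped = np.stack([W @ x for x in minibatch])

# batched version: the whole mini-batch in one matrix-matrix product
batched = minibatch @ W.T
```

Both give the same result, but the batched form does all the work inside one optimized BLAS call, which is what the quantum side of the hybrid model is currently missing.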
> I suspect that the problem is that I loop over the mini-batch in the hybrid quantum-classical model, whereas the mini-batch operation is implemented as a single matrix-vector-type computation in the classical case. It would be amazing if something could be done to fix this in simulations.
Yep, this is exactly the case. Unfortunately, a limiting factor at the moment is the lack of simulators that natively support batching (of either gates or parameters). It’s something we can work on and support in future versions of PennyLane.
Hey, @psansebastian. It’s been a while since this discussion, and I’m happy to say parameter broadcasting is now definitely a thing in PennyLane! We’d be happy to help if you get stuck with it anywhere — and we’d love to know how you’re using PennyLane so we can keep improving it.
We have a very small survey for PennyLane v0.32, and it would be awesome if you’d give us some feedback and tell us about your needs. Thank you!