I was wondering about the best way to use the PyTorch interface to apply a QNode to multiple inputs of a dataset X, Y. I was thinking of something like:
```python
@qml.qnode(dev, interface="torch")
def circuit(parameters, x):
    # do some quantum operations
    # return an expectation value giving the probability
    # output in a binary supervised learning setup
    return qml.expval(qml.PauliZ(0))

def apply_loss(labels, predictions):
    # define loss
    return loss

def cost(var, X, Y):
    predictions = ?
    # define cost
```
To define the cost, would we have to loop over all examples in X and Y to get their predictions? I assume so, since batching is not available yet, right? But how can we do this if X and Y are defined as tensors, so that the optimizer can still be applied to the variable var?
Can you provide a bit more info about what you’re thinking so we can give the best answer? Are you trying to break a dataset into batches and feed those into the circuit function? Or are you trying to build a cost function involving multiple elements from the dataset?
Sure. I am trying to define a cost function over the elements (x_i \in X), i = 1..n, of an input dataset X with n elements. Let us say the loss is the binary cross-entropy. The quantum circuit would take each element as input, encode it as parameters of some gates, and then apply further gates whose parameters are the variables I optimize over.
I would also be interested in doing my optimization in batches. But I think this is done by feeding a batch in as X and then calling optimizer.step() on it, right?
Sure, for that part, but how do I then apply the cost to the instances in the batch? Do I need to loop over each of them to get the output of the quantum node for the loss? I am also asking for the best way to do this with the PyTorch interface.
Since none of the underlying simulators/hardware devices support batching, unfortunately you’ll still have to provide a batch of inputs manually (e.g., with a for loop).
A more involved option (taken by fellow user @rooler) is to manually create a simulator which handles batching automatically (see here).
We do have batching support in PennyLane on our roadmap. This would require either that a backend provider exposes this feature, or that we build support into PennyLane's own simulators. For the moment, though, you'll have to use the old-fashioned option mentioned above.