I am trying to compose the template RandomLayers into a random neural network architecture. I have created a class, let's call it MyCircuit, that inherits from nn.Module. I want the forward function to be implemented as a quantum function (a QNode) so that I can simply call backward through the entire model. I am running into a problem while trying to assign a device to the QNode representing the forward function, and I would like to understand the best practices for end-to-end differentiable hybrid models.
So my question is:
How should one go about maintaining a device through class composition?
MyModel has a layer that is a MyCircuit. I instantiate MyModel into my_model by passing in hyperparameters.
I want to be able to pass the device through the instantiation of my_model so it may be used for the layer which is a MyCircuit.
Generally, though, the class hierarchy can be much more complicated. I suppose this question is related to device management for GPUs, TPUs, and now QPUs.
The current problem with passing a device in through the instantiation of my_model is that somewhere between that pass and the forward-method call, the device is no longer recognized as a valid PennyLane device.
In devising this question, I came across a similar thread asking: why not just use default.qubit as the device, with the number of wires defined inside the function itself? While that wouldn't completely solve the problem of passing devices through composable classes, it would be more dynamic and, in my mind, would lead to quicker prototyping of circuits.
I use PyTorch.
The incentive for being able to pass a device down through the class stack is that more levels of abstraction become possible.
Thank you for your time in reading and responding.