Hello,
I am trying to get some examples (specifically, variations of the variational quantum classifier) working on IBM hardware. I am managing to send jobs to both the simulated hardware and the real devices. However, the issue I am finding is that training time is extremely long.
I am therefore assuming that, at the moment, it is only feasible to train very small models (~1000 training points, very small network architectures). Is this a correct assumption?
Does anyone have any suggestions on how to improve the time it takes to train a model (the value I should use for shots, the maximum amount of data, the maximum network size, etc.)? Is it worth sacrificing validation data in the training loop (I do not mean dropping the test data) to reduce training time? Should I view this as a transfer-learning-esque problem, where I train a network classically and then attach a very small quantum circuit to the end, to take some pressure off the quantum part and hopefully reduce training time? A rough sketch of what I mean is below.
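Here is roughly the hybrid setup I have in mind, as a minimal sketch. I am assuming PennyLane with the PyTorch interface here; the layer sizes, the `default.qubit` device, and the ansatz are placeholders for illustration, not my actual setup:

```python
# Minimal sketch of a "classical feature extractor + tiny quantum head" model.
# Assumes PennyLane + PyTorch; sizes and device are illustrative placeholders.
import torch
import pennylane as qml

n_qubits = 2
n_layers = 3

# A local simulator while iterating on the architecture; an IBMQ device could be
# swapped in later, bearing in mind every circuit evaluation becomes a job.
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (n_layers, n_qubits, 3)}
quantum_head = qml.qnn.TorchLayer(qnode, weight_shapes)

# The classical body does most of the work; only the 2-qubit head would need
# to be evaluated on (simulated) hardware during training.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, n_qubits),   # compress features to one angle per qubit
    quantum_head,
    torch.nn.Linear(n_qubits, 2),   # classical read-out layer
)
```

Is this the kind of split people use in practice, or is there a better way to divide the work between the classical and quantum parts?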
I am just curious how people are getting around what seem to be long network training times, even when using ibmq_qasm_simulator.
Thanks! I’m looking forward to using the package more.
EDIT:
To be more specific, I have a variational quantum classifier with 2 qubits and 3 layers. I tried training for only 1 epoch, on just 1 data point. In this example, it sent around 36 jobs to ibmq_qasm_simulator and took around 230 seconds, and that is just for the optimisation step. Are these the sort of numbers I should expect to see? Is there a way I can calculate, by hand, how many jobs it will send?
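For reference, here is my rough attempt at that by-hand estimate. I am assuming the gradient is computed with the parameter-shift rule (two circuit evaluations per trainable parameter) and that the ansatz uses 3 rotation angles per qubit per layer; both of those are assumptions on my part, so please correct me if the counting works differently:

```python
# Back-of-envelope estimate of circuit evaluations per optimisation step,
# assuming a parameter-shift gradient: 2 evaluations per trainable parameter.
n_qubits = 2
n_layers = 3
params_per_qubit_per_layer = 3  # assumption about the ansatz

n_params = n_layers * n_qubits * params_per_qubit_per_layer  # 18 parameters
jobs_per_step = 2 * n_params                                 # 36 gradient circuits

print(jobs_per_step)  # 36, which would match the job count I observed
```

Is this the right way to count the jobs, or are there extra evaluations (e.g. for the cost itself) that I am missing?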