I was following your tutorial on variational classifiers and testing different optimizers. I was wondering if it's possible to optimize the weights of my VQC according to a loss function using the QNGOptimizer.
I'm asking because I'm doing hyperparameter optimization on high-energy physics data, and I have the impression that not every optimizer can optimize an arbitrary loss function. Right now I'm just considering the Adam optimizer. Is there a quantum-specific optimizer with such capabilities?
Thank you so much in advance!
Here's my code (it's the same as the tutorial's, but with the NesterovMomentumOptimizer replaced by the QNGOptimizer, which, unsurprisingly, raises an error).
Please note you need to download the parity.txt file.
We do have several optimizers that are specific to quantum optimization, such as the RotoselectOptimizer and the QNGOptimizer.
Note that the QNGOptimizer comes with some limitations, as explained in the note in the QNGOptimizer docs. If you want to use this optimizer, I would suggest expressing the objective function as an ExpvalCost object.
Some of the other optimizers also have limitations, so be sure to read their docs if you want to use them. You will find several examples there, which may be helpful too.
Please let me know if this helps or if your problem persists!
Regarding the QNGOptimizer, for example, I don't really understand how one can update the VQC weights by comparing the circuit's output with the data labels.
In the documentation, no data is used to update the network's parameters. Could you please share a simple code snippet that uses the QNGOptimizer on a simple binary classification dataset?
Hi @Miguel_Cacador_Peixo, I now see what you mean. It’s definitely not trivial. I’ll try to figure it out and get back to you.