QNGOptimizer for PyTorch

Hi @Andre_Sequeira,

Please excuse the delay in my response.
I found several issues with your code, which I'll break down into issues with the optimizer, the circuit, and the closure.

Issues with the optimizer:

  1. The call to g.shape[0] is causing some errors. However, group["lam"] in the optimizer is always zero, so you can bypass this error by commenting out the line g += group["lam"] * np.identity(g.shape[0]).

  2. g and grad are Torch tensors, so you can't use them with NumPy directly. You need to use g.detach().numpy() and grad.detach().numpy().

  3. g is a Torch tensor containing a single value, so np.linalg.solve raises an error (it expects a 2-D matrix).

  4. group["lr"] is also constant; I'm not sure whether this is what you expected.
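To make the individual fixes concrete, here is a minimal sketch of the relevant part of an optimizer step. The tensors, the lam value, and the learning rate are all placeholders (your real g and grad come from the metric tensor and the gradient of your circuit), so treat this as an illustration of the conversions, not as a working QNG optimizer:

```python
import numpy as np
import torch

# Placeholder values standing in for the real metric tensor and gradient.
grad = torch.tensor([0.1, -0.2, 0.3, 0.05])
g = torch.eye(4) * 0.25

# Issue 2: detach to NumPy before using np.linalg.
g_np = g.detach().numpy()
grad_np = grad.detach().numpy()

# Issue 1: with lam == 0 the regularization term can simply be skipped;
# otherwise g_np must be 2-D for g_np.shape[0] to make sense.
lam = 0.0
if lam != 0.0:
    g_np = g_np + lam * np.identity(g_np.shape[0])

# Issue 3: np.linalg.solve needs a 2-D matrix, so promote a scalar/1-D g first.
g_np = np.atleast_2d(g_np)
step = np.linalg.solve(g_np, grad_np)

# Issue 4: a constant learning rate, as in group["lr"].
lr = 0.1
update = lr * step
```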

In any case, these issues will require you to rethink the optimizer.

Issues with the circuit:

  1. Your circuit must include an argument called "inputs", which corresponds to the non-trainable parameters. It can be None, but it needs to be an argument of your QNode. For more details you can check out the documentation for qnn.TorchLayer.
  2. Because of the previous item, you will need to change the way you call your circuit in the closure and anywhere else.

Issues with the closure:

  1. Your policy_estimator was a bit complicated, so I used return loss, qml.qnn.TorchLayer(circuit, weight_shapes) as the return of the closure, where weight_shapes = {"params": 4} if you're using the example circuit here.

Given the issues with the optimizer, I managed to make it run, but it doesn't optimize well.

I hope this helps you move forward. Please let me know if any of this isn’t clear and I can share the full code that runs.

Also please let me know if this is what you were looking for!