I have a variational quantum circuit algorithm. The code runs fine and the algorithm converges on default.qubit. However, leaving all the code unchanged and only switching the device from default.qubit to lightning.qubit makes the algorithm stop converging (the loss oscillates). What could be causing this?
This is my setup:
Name: pennylane
Version: 0.42.3
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page:
Author:
Author-email:
License-Expression: Apache-2.0
Requires: appdirs, autograd, autoray, cachetools, diastatic-malt, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, tomlkit, typing_extensions
Required-by: pennylane_lightning
I see that you’re using an older version of PennyLane. The most recent stable version is PennyLane v0.44 so I would recommend using that one. You can use python -m pip install pennylane --upgrade to upgrade to this version.
If you’re having issues with the loss not converging, the best place to look is in the algorithm or in the loss function itself. I would also recommend using a seed, since the starting point for the algorithm can also have an impact on convergence.
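For instance, here is a minimal way to pin down the starting point with a seeded NumPy generator (the number of parameters here is purely illustrative):

```python
import numpy as np

# Seeding the random generator makes the initial parameters identical
# across runs, so convergence differences can't be blamed on the start.
# The size (4 parameters) is just for illustration.
rng = np.random.default_rng(42)
init_params = rng.uniform(0, 2 * np.pi, size=4)
print(init_params)
```

Re-running this snippet always produces the same `init_params`, which is the property you want before comparing devices or optimizers.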
If you’re still struggling to solve this issue you can share a minimal reproducible example here so that we can try to replicate the problem. Even the process of making a minimal reproducible example often leads to finding the solution!
If you’re getting errors please make sure to share the full error traceback too.
I hope these pointers help you in resolving the issue!
I have the newest version of PennyLane now. The seed was set as a constant. The problem is more like: why does the algorithm converge for default.qubit but not for lightning.qubit?
It’s hard to say, especially without seeing the code. For example, your algorithm may be one where very small variations in the initial state lead to completely different results. It’s like dropping a ball at the very top of a mountain: depending on the exact position you drop it from, it can land in a completely different place. E.g. if you look at this picture you’ll notice that starting from the blue regions would produce a clear result, while starting from other regions could lead to varying results.
Since default.qubit and lightning.qubit are built differently (one in Python, the other in C++), there may be tiny numerical differences between them that produce different results for your particular problem.
My recommendation would be to first make sure that your problem doesn’t have an optimization landscape like the mountain, where small differences in the initialization can have a big impact on the result. Instead, you want your optimization landscape to look more like a bowl, where small changes in the initialization don’t affect the result.
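To make the mountain-vs-bowl picture concrete, here is a small NumPy-only sketch (a toy 1-D loss, unrelated to your circuit). On a rugged landscape, two nearby starting points that straddle a ridge settle in different minima, while on a bowl the same two starts meet at the single minimum:

```python
import numpy as np

# Toy 1-D landscapes, nothing to do with quantum circuits.
def rugged_grad(x):
    # gradient of the "mountain range" loss sin(5x) + 0.1*x**2
    return 5 * np.cos(5 * x) + 0.2 * x

def bowl_grad(x):
    # gradient of the "bowl" loss x**2
    return 2 * x

def descend(grad, x0, lr=0.05, steps=300):
    """Plain gradient descent from a given starting point."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Two nearby starts on opposite sides of a ridge of the rugged
# landscape end up in different minima...
a = descend(rugged_grad, 0.31)  # slides left, into a negative minimum
b = descend(rugged_grad, 0.33)  # slides right, into a positive minimum

# ...while on the bowl the same two starts converge to the same point.
c = descend(bowl_grad, 0.31)
d = descend(bowl_grad, 0.33)
print(a, b, c, d)
```

The rugged case is the situation to rule out: once the optimizer is that sensitive to where it starts, even rounding-level differences between two otherwise identical runs can send it down different paths.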
I understand this principle, but why does default.qubit converge while lightning.qubit oscillates? They both start from the same initial conditions. If the problem were that my optimization landscape has lots of local minima, shouldn’t default.qubit also oscillate? I don’t know why you are repeating this, because I have stated several times that the algorithm is completely identical between the two, yet lightning.qubit fails to converge.
As for the algorithm, I was trying to reproduce the results of this paper: [2011.10395] Solving nonlinear differential equations with differentiable quantum circuits. I was able to reproduce the results with default.qubit, but the algorithm doesn’t converge on lightning.qubit. I wanted to contribute my demo to the community, but the guidelines say that lightning.qubit is encouraged.
I didn’t mean to sound repetitive. The nuance here is that the two devices are built from different code, so even a tiny difference in the 8th decimal place can have an impact on the result you see. That’s what I meant to convey: from your perspective the code is exactly the same, but when each device executes it, the numerics may differ very slightly, and this can potentially be why you’re seeing different results.
It’s important to note that this is just an educated guess. Without seeing any code, I can’t really test anything to see what the real problem could be.