I am attempting to create a Hermitian observable that is diagonal in the computational basis. I know the eigenvalues for the computational basis states (i.e. the spectrum), so I have tried to implement my custom observable with qml.Hermitian() using the following code:
H = qml.Hermitian(np.diag(spectrum), wires=range(n_wires))
This seems to work, but calculating the expectation value of this observable in a circuit takes a huge amount of time. Is there a cleverer way to create the observable that improves performance?
I can’t replicate this on my end. Do you have an example that we can use to investigate the issue?
You might be able to convert it to a Pauli sentence first (using qml.pauli.pauli_decompose) and see if that helps.
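Roughly something like this (just a sketch, reusing the spectrum and n_wires names from your snippet with placeholder values):

```python
import numpy as np
import pennylane as qml

n_wires = 4                                    # placeholder size
spectrum = np.arange(2**n_wires, dtype=float)  # placeholder eigenvalues

# Decompose the diagonal matrix into a sum of Pauli words once, up front
H_pauli = qml.pauli.pauli_decompose(np.diag(spectrum))
print(H_pauli)

# then measure qml.expval(H_pauli) inside your circuit as before
```

Keep in mind that a generic diagonal 2^N x 2^N matrix decomposes into up to 2^N products of I and Z, so how much this helps depends on how structured your spectrum is.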
It might also be the device you’re using. default.qubit is good when using fewer than 10 qubits. Between 10 and 20 qubits you may have better performance with lightning.qubit.
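If it is the device, switching is a one-line change, e.g.:

```python
import pennylane as qml

n_wires = 12  # example size where the difference tends to show up

# lightning.qubit is the C++-backed state-vector simulator shipped with PennyLane
dev = qml.device("lightning.qubit", wires=n_wires)
```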
Our team has been working on end-to-end sparse state support, which might help you since your Hamiltonian is sparse. This is ongoing work, though, so I’ll check to see where we’re at.
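In the meantime, one thing you could already try (just a sketch, and separate from the sparse-state work above) is handing the observable to qml.SparseHamiltonian as a SciPy CSR matrix, so the dense 2^N x 2^N array is never stored. Support for sparse observables varies a bit between devices and versions, so treat this as something to test:

```python
import numpy as np
import scipy.sparse
import pennylane as qml

n_wires = 12                                   # placeholder size
spectrum = np.arange(2**n_wires, dtype=float)  # placeholder eigenvalues

# Keep only the diagonal, in the CSR format that SparseHamiltonian expects
H_sparse = qml.SparseHamiltonian(
    scipy.sparse.diags(spectrum, format="csr"), wires=range(n_wires)
)

dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev)
def circuit(theta):
    for w in range(n_wires):  # placeholder circuit
        qml.RY(theta, wires=w)
    return qml.expval(H_sparse)

print(circuit(0.5))
```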
That is, I first create a vector containing the eigenvalues of my custom observable, then build the full 2^N x 2^N matrix corresponding to the observable in the computational basis (which is a diagonal matrix), and finally create the corresponding PennyLane Hermitian observable with qml.Hermitian().
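In code, my setup looks roughly like this (the spectrum values and the circuit below are only placeholders for the real ones):

```python
import numpy as np
import pennylane as qml

n_wires = 4                                    # placeholder size
spectrum = np.arange(2**n_wires, dtype=float)  # placeholder eigenvalues

# Full 2^N x 2^N matrix, even though only the diagonal is non-zero
H = qml.Hermitian(np.diag(spectrum), wires=range(n_wires))

dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev)
def circuit(theta):
    for w in range(n_wires):  # placeholder circuit
        qml.RY(theta, wires=w)
    return qml.expval(H)

print(circuit(0.5))
```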
I suppose the inefficiency comes from the fact that I am creating a very large matrix whose only non-zero elements are on the diagonal. I will check whether a Pauli decomposition, as you suggested, helps speed things up.
Let me know if you happen to come up with some other ideas on how this can be done more efficiently.
I ran it on Google Colab and here’s what I got as a result:
The slowest run took 10.27 times longer than the fastest. This could mean that an intermediate result is being cached.
1.79 ms ± 1.7 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
Then I ran it on my computer and here’s what I got:
550 μs ± 20.1 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
So in both cases this ran really fast for me. My guess is that there’s something else in your code that’s making it slow.
I hope you can use this to investigate further why it’s slow. You can also use a profiler such as snakeviz if you prefer.
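For example, something along these lines (a rough sketch; circuit(0.5) stands in for whatever call is slow on your end):

```python
import cProfile
import pstats

# Profile one slow expectation-value evaluation and dump the stats to a file
cProfile.run("circuit(0.5)", "expval.prof")

# Print the most expensive calls in the terminal ...
pstats.Stats("expval.prof").sort_stats("cumulative").print_stats(15)

# ... or open an interactive view in the browser with snakeviz:
#   pip install snakeviz
#   snakeviz expval.prof
```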
Let us know if you find the cause or any more info on the slowdown!