Hi,
I have a simple diagonal Hermitian observable that I pass to expval.
The memory and runtime for the example below seem to scale worse than O(N^2) for a diagonal matrix of size N x N; specifically, it uses more than 16 GB of RAM for n = 15 (N = 2^n).
Is there a way to encode this observable efficiently and get close to O(N) memory/runtime?
import pennylane as qml
from pennylane import numpy as np

n = 10
dev = qml.device('default.qubit', wires=n, shots=None)

@qml.qnode(dev)
def observable_test():
    qml.PauliX(0)
    h = qml.Hermitian(np.diag(np.arange(2**n)), wires=range(n))
    return qml.expval(h)

observable_test()  # this will get stuck for n=15 :(
Hey @Hayk_Tepanyan!
I have a couple of initial recommendations that might help:

1. Encode your Hermitian operator as a SparseHamiltonian (qml.SparseHamiltonian — PennyLane 0.31.0 documentation).

2. Try lightning.qubit, our more performant simulator (Installation — PennyLaneLightning 0.31.0 documentation).

Let me know if this helps!
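For intuition on why the sparse encoding should scale better, here is a plain NumPy/SciPy sketch (outside PennyLane, purely illustrative): the diagonal observable stored as a sparse matrix takes O(N) memory, and an expectation value against a state vector is a single sparse matvec plus a dot product, so O(N) time.

```python
import numpy as np
from scipy.sparse import diags, csr_matrix

n = 10
N = 2 ** n

# Diagonal observable H = diag(0, 1, ..., N-1) stored sparsely: O(N) memory
H = csr_matrix(diags(np.arange(N)))

# <psi|H|psi> for a state vector is one sparse matvec plus a dot product: O(N) time
psi = np.zeros(N, dtype=complex)
psi[3] = 1.0  # the computational basis state |3>
expval = np.real(psi.conj() @ (H @ psi))
print(expval)  # 3.0, since H|3> = 3|3>
```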
Thanks for the prompt reply, @isaacdevlugt.
I have tried SparseHamiltonian with lightning.qubit. The sparsity does seem to be taken advantage of at construction time: the SparseHamiltonian is created quickly and without much memory, and I am able to create 28-qubit SparseHamiltonians with less than 5 GB of memory.
However, running adjoint differentiation on it is very slow and memory-intensive.
The code below uses 30 GB and takes ~15 seconds to run, even though the csr_matrix itself is less than 1 MB.
Any idea what is going wrong here?
import pennylane as qml
from scipy.sparse import csr_matrix, diags
from pennylane import numpy as np

n = 15  # uses 30GB on lightning.qubit for n=15
dev = qml.device('lightning.qubit', wires=n, shots=None)

@qml.qnode(dev, diff_method='adjoint')
def observable_test():
    qml.PauliX(0)
    h = qml.SparseHamiltonian(csr_matrix(diags(np.arange(2**n))), wires=range(n))
    return qml.expval(h)

observable_test()
Ah! Lightning doesn't have adjoint-differentiation support for SparseHamiltonian, so it tries to build a dense matrix for what it receives. Support can be added, but it isn't an immediate priority for our development team. That said, I suggest that you put in a feature request so that we can at least have it logged somewhere; maybe enough people will request it and it will get bumped up the priority list!
LightningGPU and LightningKokkos currently support this, so I’d recommend using those until support is added. If this is something you’re interested in doing for the time being, we can even point you in the right direction to do so!
Hope this helps
Thanks @isaacdevlugt .
Works great on lightning.gpu