Alternative for Hermitian observable on lightning.gpu device?

Hi everyone, I’m trying to follow the kernel-based training demo, but I want to use the lightning.gpu device since my use case is computationally more demanding. I found out that, unfortunately, this device does not support the Hermitian observable:

    984 
    985                 if not self.supports_observable(observable_name):
--> 986                     raise DeviceError(
    987                         f"Observable {observable_name} not supported on device {self.short_name}"
    988                     )

DeviceError: Observable Hermitian not supported on device lightning.gpu

Is there an alternative for this observable that allows me to run my experiment on the lightning.gpu device?

Hey @PCesteban! Here is a link to what lightning.gpu supports: Lightning-GPU device — PennyLane-Lightning-GPU 0.31.0-dev0 documentation

It does support Hamiltonians — see qml.Hamiltonian. Are you able to write what you need as a Hamiltonian?
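
In case it helps, a qml.Hamiltonian is just a list of coefficients paired with a list of observables. A toy example (not specific to your circuit):

import pennylane as qml

# 0.5 * Z_0 + 0.3 * Z_0 (x) Z_1 as a single Hamiltonian observable
H = qml.Hamiltonian([0.5, 0.3], [qml.PauliZ(0), qml.PauliZ(0) @ qml.PauliZ(1)])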

Hope this helps!

Hi @PCesteban

Unfortunately, we do not currently enable support for the Hermitian observable due to its runtime performance impact in the adjoint gradient execution pipeline. That said, support was improved in the recent v0.30.0 release. You can try upgrading all your packages with

python -m pip install pennylane --upgrade
python -m pip install pennylane-lightning-gpu --upgrade
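
To double-check what ended up installed, qml.about() prints the PennyLane version along with the available device plugins:

import pennylane as qml

# Prints version info and the list of installed device plugins
qml.about()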

If this does not work, feel free to drop a code sample, and we can examine if support for your use-case is possible with minor modifications.


Hi @isaacdevlugt @mlxd,

Thank you for your response! :smile:. As I mentioned, I’m trying to implement the kernel-based training demo with another dataset that requires a larger number of qubits. In the demo, implementing the quantum kernel requires measuring a projector of the form |0..0\rangle \langle 0..0|, which is done with qml.Hermitian. The code I was trying to run is:

import pennylane as qml
from pennylane import numpy as np

n_qubits = 13
dev_kernel = qml.device("lightning.gpu", wires=n_qubits)

# |0...0><0...0| projector
projector = np.zeros((2**n_qubits, 2**n_qubits))
projector[0, 0] = 1

@qml.qnode(dev_kernel, interface="autograd")
def kernel(x1, x2):
    """The quantum kernel."""
    qml.AmplitudeEmbedding(x1, wires=range(n_qubits), pad_with=2)
    qml.adjoint(qml.AmplitudeEmbedding)(x2, wires=range(n_qubits), pad_with=2)
    return qml.expval(qml.Hermitian(projector, wires=range(n_qubits)))

I would appreciate your suggestions if there is an alternative way to do this or another approach I could try.

I don’t have a GPU on hand right now :sweat:… can you see if this works?

n_qubits = 2
dev_kernel = qml.device("lightning.gpu", wires=n_qubits)

# projector needed is |00...0> <00...0|
op = qml.Projector(np.array([0]*n_qubits), wires=range(n_qubits))
H = qml.Hamiltonian([1.], [op])

@qml.qnode(dev_kernel, interface="autograd")
def kernel(x1, x2):
    qml.AmplitudeEmbedding(x1, wires=range(n_qubits), pad_with=2)
    qml.adjoint(qml.AmplitudeEmbedding)(x2, wires=range(n_qubits), pad_with=2)
    return qml.expval(H)

kernel([0.1, 0.2], [0.3, 0.4])

I’m not 100% certain that it will… But all I’m doing here is using qml.Projector and creating a qml.Hamiltonian with it.

Hi @isaacdevlugt,

This was the output:

ValueError                                Traceback (most recent call last)
<ipython-input-5-d58d2286c95a> in <cell line: 19>()
     17     return qml.expval(H)
     18 
---> 19 kernel([0.1, 0.2], [0.3, 0.4])

18 frames
/usr/local/lib/python3.10/dist-packages/pennylane/ops/qubit/hamiltonian.py in sparse_matrix(self, wire_order)
    393                 if len(o.wires) > 1:
    394                     # todo: deal with operations created from multi-qubit operations such as Hermitian
--> 395                     raise ValueError(
    396                         f"Can only sparsify Hamiltonians whose constituent observables consist of "
    397                         f"(tensor products of) single-qubit operators; got {op}."

ValueError: Can only sparsify Hamiltonians whose constituent observables consist of (tensor products of) single-qubit operators; got Projector(array([0, 0]), wires=[0, 1]).

Yep, as I expected :sweat_smile:. It might be best to use default.qubit for now :slight_smile:

Actually, one alternative might be to do a bit of post-processing. If all you want is the probability of the |00…0> state, you can just measure qml.probs(wires=range(n_qubits)) and take the zeroth element:

@qml.qnode(dev_kernel, interface="autograd")
def kernel(x1, x2):
    """The quantum kernel."""
    qml.AmplitudeEmbedding(x1, wires=range(n_qubits), pad_with=2)
    qml.adjoint(qml.AmplitudeEmbedding)(x2, wires=range(n_qubits), pad_with=2)
    return qml.probs(wires=range(n_qubits))

print(kernel([0.1, 0.2], [0.3, 0.4])[0])

Does this fit your needs?

Yes!! This is perfect, thank you @isaacdevlugt.


Lovely! Glad that this works :smile:. It’s not very efficient, as you’re calculating all 2^N probabilities when you really only need 1 :sweat_smile:. Just be aware of that!

Yes, I understand. I’m looking forward to more support for the lightning.gpu device in the future. Thank you!!


Happy to help @PCesteban ! :smiley:


Hi @PCesteban

Having thought a little more about this, I may have a path that can give you a decent performance boost, without explicitly building the probs.

lightning.gpu explicitly supports qml.SparseHamiltonian through the adjoint differentiation pipeline (diff_method="adjoint"), which should be much faster than the current parameter-shift one used by default. You may be able to unlock this by taking the following path:

  • Create your Hermitian projector h:

h = qml.Hermitian(projector, wires=range(n_qubits))

  • Create a sparse CSR matrix from h using scipy.sparse.csr_matrix:

import scipy.sparse as sp
sparse_m = sp.csr_matrix(h.data[0])

  • Finally, create a qml.SparseHamiltonian from the above CSR matrix and measure it in your QNode, as sketched below:

sparse_obs = qml.SparseHamiltonian(sparse_m, wires=range(n_qubits))
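
Putting these pieces together, a rough sketch of how the kernel QNode could look with the sparse observable and the adjoint pipeline (untested on my end; here I also build the CSR matrix directly from its single nonzero entry, so the dense 2^n x 2^n projector never needs to be materialized):

import pennylane as qml
import scipy.sparse as sp

n_qubits = 13
dev_kernel = qml.device("lightning.gpu", wires=n_qubits)

# |0...0><0...0| as a CSR matrix with a single nonzero entry at (0, 0)
dim = 2**n_qubits
sparse_m = sp.csr_matrix(([1.0], ([0], [0])), shape=(dim, dim))
sparse_obs = qml.SparseHamiltonian(sparse_m, wires=range(n_qubits))

@qml.qnode(dev_kernel, interface="autograd", diff_method="adjoint")
def kernel(x1, x2):
    """The quantum kernel, i.e. the overlap probability with |0...0>."""
    qml.AmplitudeEmbedding(x1, wires=range(n_qubits), pad_with=2)
    qml.adjoint(qml.AmplitudeEmbedding)(x2, wires=range(n_qubits), pad_with=2)
    return qml.expval(sparse_obs)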

Let me know if this helps. If not, at least this gives you a fallback until we unblock Hermitian support end-to-end.


Hi @mlxd,

This is great. Thank you so much. It worked perfectly fine.


@mlxd coming through again :rocket:! Glad your problem is solved!
