# Jacobian function doesn't derive as expected

TL;DR:
My problem: I use `qml.jacobian` to compute the spatial derivative of an amplitude-encoded circuit, but the diagonal of the Jacobian comes back shaped like the original vector (sine) rather than like its derivative (cosine).

Details of this example:
In this example I have a circuit (`circuit_state_prep`) that takes a normalized sine, encodes it into a quantum state via amplitude encoding, and returns the probabilities (which should be sine-squared shaped).
Afterwards I take `sqrt(probs)` to recover the quantum state (`|psi>`).
My goal is the spatial derivative of the input (sine in this case), so I apply the Jacobian to the circuit and take only its diagonal to get the derivative at each point (each probability). This yields a sine-shaped result instead of the expected cosine shape, as can be seen in the figure below.

```python
# PennyLane imports
import pennylane as qml
import pennylane.numpy as pnp

# other imports
import matplotlib.pyplot as plt
import numpy as np
```
```python
nqubits = 5
dev = qml.device("default.qubit", wires=nqubits)
range_qubits = list(range(nqubits))

# x was not defined in the original snippet; an assumed grid of
# 2**nqubits points is used here so the state has the right length
x = np.linspace(0, np.pi, 2**nqubits)

sin = np.sin(x)
sinnorm = sin / np.linalg.norm(sin)
cos = np.cos(x)
cosnorm = cos / np.linalg.norm(cos)
```
```python
@qml.qnode(dev)
def circuit_state_prep(sine_shaped_input_state):
    qml.StatePrep(sine_shaped_input_state, wires=range_qubits)
    return qml.probs(wires=range_qubits)

print(qml.draw_mpl(circuit_state_prep)(sinnorm))
```
```python
prob = circuit_state_prep(sinnorm)  # circuit is only state_prep for this check
psi = pnp.sqrt(prob)
d_psi_dx_vec_jac = qml.jacobian(circuit_state_prep)(sinnorm)  # Jacobian of probs w.r.t. the input amplitudes
d_psi_dx_vec = pnp.array([d_psi_dx_vec_jac[i, i] for i in range(len(d_psi_dx_vec_jac))])  # diagonal of the Jacobian

plt.figure(figsize=(9, 6))
plt.plot(d_psi_dx_vec)   # expected to match cosnorm
plt.plot(cosnorm, '--')  # expected derivative
plt.plot(psi)
plt.plot(sinnorm, 'r:')
plt.legend(["d|psi>/dx vec", "cosnorm (expected derivative)", "|psi>", "sinnorm"])
plt.show()
```
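The sine-shaped diagonal can be reproduced with no quantum machinery at all: `qml.jacobian` here differentiates the probabilities with respect to the input *amplitudes*, not with respect to `x`. For normalized amplitudes `a` with `p_i = a_i**2 / ||a||**2`, the Jacobian diagonal is `2*a_i - 2*a_i**3` (for `||a|| = 1`), i.e. essentially proportional to the sine itself. A minimal plain-NumPy sketch of this (my illustration with an assumed grid, not PennyLane output):

```python
import numpy as np

# assumed grid, matching the 2**5 = 32 amplitudes of a 5-qubit state
x = np.linspace(0, np.pi, 32)
a = np.sin(x)
a = a / np.linalg.norm(a)  # normalized amplitudes, ||a|| = 1

# p_i = a_i**2 / ||a||**2; with ||a|| = 1 the Jacobian diagonal is
# dp_i/da_i = 2*a_i - 2*a_i**3  (the cubic term comes from the norm)
diag = 2 * a - 2 * a**3

# the diagonal tracks the sine (the input), not its derivative
corr_sin = np.corrcoef(diag, np.sin(x))[0, 1]
corr_cos = np.corrcoef(diag, np.cos(x))[0, 1]
print(corr_sin, corr_cos)
```

So a sine-shaped diagonal is exactly what differentiating with respect to the amplitudes should give; no derivative with respect to the spatial coordinate `x` is taken anywhere.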

The circuit drawing:

Output of `qml.about()`:

Name: PennyLane
Version: 0.36.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
Location: /home/cudaq/.local/lib/python3.10/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-qiskit, PennyLane_Lightning, PennyLane_Lightning_GPU, PennyLane_Lightning_Kokkos

Platform info: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python version: 3.10.12
Numpy version: 1.26.4
Scipy version: 1.13.0
Installed devices:

• lightning.qubit (PennyLane_Lightning-0.36.0)
• lightning.kokkos (PennyLane_Lightning_Kokkos-0.36.0)
• lightning.gpu (PennyLane_Lightning_GPU-0.36.0)
• default.clifford (PennyLane-0.36.0)
• default.gaussian (PennyLane-0.36.0)
• default.mixed (PennyLane-0.36.0)
• default.qubit (PennyLane-0.36.0)
• default.qubit.jax (PennyLane-0.36.0)
• default.qubit.legacy (PennyLane-0.36.0)
• default.qubit.tf (PennyLane-0.36.0)
• default.qubit.torch (PennyLane-0.36.0)
• default.qutrit (PennyLane-0.36.0)
• default.qutrit.mixed (PennyLane-0.36.0)
• null.qubit (PennyLane-0.36.0)
• qiskit.aer (PennyLane-qiskit-0.36.0)
• qiskit.basicaer (PennyLane-qiskit-0.36.0)
• qiskit.basicsim (PennyLane-qiskit-0.36.0)
• qiskit.ibmq (PennyLane-qiskit-0.36.0)
• qiskit.ibmq.circuit_runner (PennyLane-qiskit-0.36.0)
• qiskit.ibmq.sampler (PennyLane-qiskit-0.36.0)
• qiskit.remote (PennyLane-qiskit-0.36.0)

Thanks!

I think my problem here is that the Jacobian only differentiates with respect to the trainable gate parameters, so I can't get spatial derivatives from it.
I will just use NumPy's `np.gradient()` on the `prob` output.

if there is still a better way I’ll be glad to hear about it.
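For reference, the `np.gradient` workaround mentioned above can be sketched as follows, using a plain-NumPy stand-in for the circuit output (`x`, the grid, and the stand-in probabilities are my assumptions, not PennyLane results):

```python
import numpy as np

nqubits = 5
x = np.linspace(0, np.pi, 2**nqubits)  # assumed spatial grid

sinnorm = np.sin(x) / np.linalg.norm(np.sin(x))
cosnorm = np.cos(x) / np.linalg.norm(np.cos(x))

# stand-in for the circuit: qml.probs would return sinnorm**2 here,
# and psi = sqrt(probs) recovers |sinnorm| (amplitude signs are lost)
probs = sinnorm**2
psi = np.sqrt(probs)

# finite-difference spatial derivative; passing x makes uneven grids work too
d_psi_dx = np.gradient(psi, x)

# cosine-shaped, up to the different normalization constants of sinnorm and cosnorm
corr = np.corrcoef(d_psi_dx, cosnorm)[0, 1]
print(corr)
```

One caveat: `sqrt(probs)` only recovers the magnitudes of the amplitudes, so this approach is safe only while the encoded function is non-negative (as sine is on `[0, pi]`).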

Hi @ZivChen ,

You’re right that the Jacobian only differentiates with respect to the trainable parameters. Would some of the higher-order derivatives or utility functions here help you?