Python: Python 3.7
PennyLane: 0.4
Scipy: 1.3.0
Numpy: 1.16.4
Hi there,
I am currently experimenting with quantum neural networks in PennyLane + TensorFlow. When profiling my code I found that subsequent calls to RotX, RotY and RotZ in pennylane.plugins.default_qubit were taking up a large part of the gradient computation.
In PennyLane these rotations are implemented with SciPy's expm() function. However, I think there is a more efficient way to compute these exponentials. By making use of Euler's Identity (I have the proof somewhere if required) we can rewrite the matrix exponentiation, for a unit vector $\vec{n}$, as
\begin{align} \exp(a \vec{n}\cdot \vec{\sigma}) &=I\cosh{(a)}+ \sinh{(a)}(\vec{n}\cdot \vec{\sigma}) \end{align}
which implies
\begin{align} \exp(a \sigma_i) &=I\cosh{(a)}+ \sinh{(a)}\sigma_i\\ \exp(ia \sigma_i) &=I\cos{(a)}+ i\sin{(a)}\sigma_i \end{align}
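As a sanity check (this worked instance is mine, not from the PennyLane source), setting $a=-\theta/2$ and $\sigma_i=\sigma_x$ in the second line recovers the familiar closed form of the X rotation:
\begin{align} \exp(-i\tfrac{\theta}{2}\sigma_x) &= I\cos{(\tfrac{\theta}{2})} - i\sin{(\tfrac{\theta}{2})}\sigma_x = \begin{pmatrix} \cos(\theta/2) & -i\sin(\theta/2) \\ -i\sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \end{align}
which is exactly what Rot_euler below computes without calling expm().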
This closed form is computationally much cheaper than the one from SciPy, which uses a Padé approximation. This is demonstrated by the following code:
from scipy.linalg import expm
import numpy as np
import time
import matplotlib.pyplot as plt
I = np.array([[1,0], [0,1]]) #: Identity matrix
X = np.array([[0, 1], [1, 0]]) #: Pauli-X matrix
Y = np.array([[0, -1j], [1j, 0]]) #: Pauli-Y matrix
Z = np.array([[1, 0], [0, -1]]) #: Pauli-Z matrix
# pennylane.plugins.default_qubit implementation
def Rot_pennylane(theta):
    return expm(-1j * theta/2 * X)
# implementation with Euler's formula
def Rot_euler(theta):
    return np.cos(theta/2) * I - 1j * np.sin(theta/2) * X
# check whether the two methods return the same matrices for all angles of theta
close = []
for x in np.linspace(0, 2 * np.pi, 100):
    close.append(np.all(np.isclose(Rot_pennylane(x), Rot_euler(x))))
print("Are the methods equivalent? {}".format(all(close)))
deltat_pl = []
deltat_euler = []
num_its = int(10 ** 3)
# gather statistics for average runtime of the PennyLane implementation
for i in range(num_its):
    start = time.time()
    _ = Rot_pennylane(4 * np.pi * np.random.rand() - 2 * np.pi)
    end = time.time()
    deltat_pl.append(end - start)
# gather statistics for average runtime of the Euler implementation
for i in range(num_its):
    start = time.time()
    _ = Rot_euler(4 * np.pi * np.random.rand() - 2 * np.pi)
    end = time.time()
    deltat_euler.append(end - start)
# print statistics
print("Pennylane average: t = {:.2E} sec".format(np.mean(deltat_pl)))
print("Euler average: t = {:.2E} sec".format(np.mean(deltat_euler)))
print("Speedup factor: n = {:1.2f}x".format(np.mean(deltat_pl) / np.mean(deltat_euler)))
# plot runtime distributions
fig, ax = plt.subplots(1,1)
ax.hist(deltat_pl, label=r'PennyLane: $\mu=${:.2E}'.format(np.mean(deltat_pl)))
ax.hist(deltat_euler, label=r'Euler: $\mu=${:.2E}'.format(np.mean(deltat_euler)))
plt.legend()
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_xlabel(r'$\log_{10}$(time) (s)')
ax.set_ylabel(r'$\log_{10}$(N)')
ax.set_title('Performance comparison of matrix exponentiation')
plt.show()
Running this shows a >30x speedup:
Are the methods equivalent? True
Pennylane average: t = 2.62E-04 sec
Euler average: t = 8.58E-06 sec
Speedup factor: n = 30.57x
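For completeness, the same closed form extends to the Y and Z rotations, so all three gates can avoid expm() entirely. Here is a minimal sketch (the names Rotx, Roty and Rotz are my own illustration; I haven't checked them against the internals of default_qubit):

import numpy as np

I = np.eye(2)                      #: Identity matrix
X = np.array([[0, 1], [1, 0]])     #: Pauli-X matrix
Y = np.array([[0, -1j], [1j, 0]])  #: Pauli-Y matrix
Z = np.array([[1, 0], [0, -1]])    #: Pauli-Z matrix

def Rotx(theta):
    # closed form of expm(-1j * theta/2 * X)
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X

def Roty(theta):
    # closed form of expm(-1j * theta/2 * Y)
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * Y

def Rotz(theta):
    # closed form of expm(-1j * theta/2 * Z); equivalently
    # np.diag([np.exp(-1j * theta/2), np.exp(1j * theta/2)])
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * Z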
Do you agree with my observations?
Regards,
Roeland