Hello, I'm wondering if there is a way to speed up my kernel computation via GPU or JAX optimizations.
I'm currently training a quantum support vector machine on cybersecurity samples for classification. Computing the kernel matrix takes many hours depending on how many samples I use (~1000). I believe I'm using the available GPU acceleration, but it is barely faster than running on a plain CPU.
Is there something I'm doing wrong or missing? Thank you.
# Block 1: Defining My Kernel
import pennylane as qml
from pennylane import numpy as pnp
import jax
import jax.numpy as jnp  # use JAX numpy

n_qubits = n_components  # set earlier in the pipeline; currently 8
dev = qml.device("lightning.gpu", wires=n_qubits)

FEATURE_MAP_REPS = 2

def angle_encoding_feature_map(x):
    qml.AngleEmbedding(features=x, wires=range(n_qubits), rotation='Z')

feature_map_to_use = angle_encoding_feature_map

@qml.qnode(dev, interface='jax')
def kernel_circuit(x1, x2):
    # Fidelity-style kernel: apply U(x1), then U†(x2), and measure
    # the overlap with the all-zeros state
    feature_map_to_use(x1)
    qml.adjoint(feature_map_to_use)(x2)
    return qml.expval(qml.Projector([0] * n_qubits, wires=range(n_qubits)))

@jax.jit
def kernel_jit(x1, x2):
    return kernel_circuit(x1, x2)
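For context on what I expect the `@jax.jit` wrapper to buy me: as I understand it, the first call pays a one-time tracing and compilation cost, and only subsequent calls run the compiled code. A minimal stand-in sketch of that behavior (the Gaussian kernel below is just a cheap placeholder, not my quantum kernel):

```python
import jax
import jax.numpy as jnp

# Placeholder classical kernel standing in for the quantum kernel above
@jax.jit
def toy_kernel(x1, x2):
    # Gaussian (RBF) kernel with unit bandwidth
    return jnp.exp(-jnp.sum((x1 - x2) ** 2))

a = jnp.ones(8)
b = jnp.zeros(8)

# First call triggers tracing + compilation; later calls reuse the compiled code
first = toy_kernel(a, b)
second = toy_kernel(a, b)
```

What I'm unsure about is whether wrapping a `lightning.gpu` QNode this way actually lets JAX compile the circuit itself, or whether each call still dispatches to the device one circuit at a time.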
# Block 2: Computing the Kernel Matrix for train and test
X_train_np = jnp.array(X_train_balanced.values)
X_test_np = jnp.array(X_test.values)

print("Starting to compute training kernel matrix")

## Compute the training kernel matrix
kernel_train = qml.kernels.square_kernel_matrix(
    X_train_np,
    kernel=kernel_jit,
)

print("Starting to compute testing kernel matrix")

## Compute the testing kernel matrix
kernel_test = qml.kernels.kernel_matrix(
    X_test_np,
    X_train_np,
    kernel=kernel_jit,
)