How can I make variational learning faster?

I’m currently evaluating the performance of quantum neural networks in simulation.
However, the computational time easily reaches a few hours.
That makes it difficult to explore the hyperparameters (e.g. number of layers, number of qubits, gate type).
The conditions are:

  • Laptop PC
  • 2 qubits
  • 12 variational parameters (two StronglyEntanglingLayers)
  • 2-dimensional input features × 16384 data points (corresponds to 1 epoch)
  • loss function: MSE
  • lightning.qubit
  • diff_method = ‘adjoint’
  • optimizer: Adam (qml.AdamOptimizer)

The computational time is approximately 100 sec/epoch.
Therefore, ~100 epochs comes to roughly 10,000 sec, i.e. 2-3 hours in total.

I have already tested a number of approaches:

  • qulacs.simulator is fast at forward evaluation, but its diff_method has to be parameter-shift. Therefore, it is not fast for gradient-descent-based learning.
    To date, lightning.qubit seems to be the fastest.

  • The fastest diff_method is adjoint. “Backprop” is not faster, at least while the number of parameters is small (a minimal timing sketch for comparing the methods follows this list).
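For reference, below is the kind of micro-benchmark I used to compare the methods. It is only a minimal sketch of my own (one gradient evaluation per method; timings depend on the machine, and backprop requires default.qubit rather than lightning.qubit):

import time

import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 2, 2
weights = np.random.uniform(high=2 * np.pi, size=(n_layers, n_qubits, 3))
x = np.random.uniform(high=1, size=(n_qubits,), requires_grad=False)

for diff_method in ("adjoint", "parameter-shift", "backprop"):
    # backprop is only supported by default.qubit; the others run on lightning.qubit
    dev_name = "default.qubit" if diff_method == "backprop" else "lightning.qubit"
    dev = qml.device(dev_name, wires=n_qubits, shots=None)

    @qml.qnode(dev, diff_method=diff_method)
    def circuit(var, x):
        qml.templates.AngleEmbedding(x, wires=range(n_qubits))
        qml.templates.StronglyEntanglingLayers(var, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))

    t0 = time.time()
    qml.grad(circuit, argnum=0)(weights, x)  # one full gradient evaluation
    print(f"{diff_method} on {dev_name}: {time.time() - t0:.4f} s")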

What can I do for further speedups?
Is “using a better machine, such as a GPU cluster” the only solution?

Here is the sample code.

import time

import pennylane as qml
from pennylane import numpy as np

num_of_data = 16384
X = np.random.uniform(high=1, size=(num_of_data, 2))  # 2-dimensional input features
Y = np.random.uniform(high=1, size=(num_of_data, 1))  # scalar targets

########  parameters  #############
n_qubits = 2  # number of qubits
n_layers = 2  # number of StronglyEntanglingLayers
dev = qml.device("lightning.qubit", wires=n_qubits, shots=None)  # analytic simulator (shots=None)

# Initial circuit parameters: shape (n_layers, n_qubits, 3) as required by StronglyEntanglingLayers
var_init = np.random.uniform(high=2 * np.pi, size=(n_layers, n_qubits, 3))

@qml.qnode(dev, diff_method='adjoint')
def quantum_neural_net(var, x):
    # angle-encode the 2D feature, then apply the entangling ansatz
    qml.templates.AngleEmbedding(x, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(var, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def square_loss(desired, predictions):
    # mean squared error over the dataset
    loss = 0
    for l, p in zip(desired, predictions):
        loss = loss + (l - p) ** 2
    loss = loss / len(desired)
    return loss

def cost(var, features, desired):
    # one circuit evaluation per data point: 16384 QNode calls per cost evaluation
    preds = [quantum_neural_net(var, x) for x in features]
    return square_loss(desired, preds)

opt = qml.AdamOptimizer(0.1, beta1=0.9, beta2=0.999)

hist_cost = []
var = var_init
for it in range(50):
    t1 = time.time()
    var, _cost = opt.step_and_cost(lambda v: cost(v, X, Y), var)
    t2 = time.time()
    elapsed_time = t2 - t1
    print(f"Iter:{it}, cost={_cost.numpy()}")
    print(f"Time:{elapsed_time} sec")
    hist_cost.append(_cost)

Iter:0, cost=[0.22944678]
Time:120.97000002861023 sec

Hi @Kuma-quant :slight_smile:

I’m running some profiling to check out what the bottlenecks are, but I have two ideas that might help.

First, try initializing the QNode with mutable=False. The structure of the circuit doesn’t change with your parameters, so you don’t need to rebuild the circuit on every evaluation.

@qml.qnode(dev, diff_method="adjoint", mutable=False)

Next, we have a release coming out tonight that should reduce adjoint runtimes by about 15%. So either grab that release once it gets uploaded, or install PennyLane from source right now.

Hope those help :slight_smile:

Dear christina-san,

Thank you for the kind advice!

I set the mutable option to False, and got approximately a 15% speedup.
Fantastic.

I also upgraded PennyLane to 0.16.0.
The speed seems to have improved a little as well.

I’ll look into the bottlenecks in more detail for further speedups.

Thanks.