Recently I found that Parameter Broadcasting is one of the most useful features when running QML methods on a large batch of data, and I wondered whether I can apply Parameter Broadcasting over two different dimensions at once (the batch dimension of input_feature and an extra weight dimension).
For instance, I have a simple Quantum Circuit:
import pennylane as qml
import numpy as np
input_qubit = 4
dev = qml.device("default.qubit", wires=input_qubit)
@qml.qnode(dev)
def circuit(input_feature, weights):
    qml.AmplitudeEmbedding(features=input_feature, wires=range(input_qubit), normalize=True, pad_with=0.)
    qml.RY(weights[0], wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(weights[1], wires=0)
    return qml.probs(wires=range(input_qubit))
And the data shape is given like this:
input_size = 2**input_qubit
input_batch = 500
ry_size = 2
ry_dim = 10
# input_feature: (input_batch, input_size)
input_feature = np.random.rand(input_batch,input_size)
# ry_weight: (ry_size, ry_dim)
ry_weight = np.random.rand(ry_size, ry_dim)
Next, I apply Parameter Broadcasting either over input_batch (the input dimension, in method_1() below) or over ry_dim (in method_2() below). Both methods compute the circuit output for each of the ry_dim weight columns and every input_feature in the batch, then sum the ry_dim results for each input, giving final_output with shape (input_batch, input_size).
Both methods return the same result. My question is: can I apply Parameter Broadcasting over the two dimensions at once, without using a for loop in the function?
def method_1():
    # Broadcast over the input batch; loop over ry_dim.
    for ry_index in range(ry_dim):
        # output_tmp: (input_batch, input_size)
        output_tmp = circuit(input_feature, ry_weight[:, ry_index])
        # final_output: (input_batch, input_size)
        if ry_index == 0:
            final_output = output_tmp  # first iteration
        else:
            final_output = final_output + output_tmp
    return final_output
def method_2():
    # Broadcast over ry_dim; loop over the input batch.
    for input_index in range(input_batch):
        # output_tmp_1: (ry_dim, input_size)
        output_tmp_1 = circuit(input_feature[input_index], ry_weight)
        # output_tmp_2: (input_size,)
        output_tmp_2 = np.sum(output_tmp_1, axis=0)
        # final_output: (input_batch, input_size)
        if input_index == 0:
            final_output = np.expand_dims(output_tmp_2, axis=0)  # first iteration
        else:
            output_tmp_2 = np.expand_dims(output_tmp_2, axis=0)
            final_output = np.concatenate((final_output, output_tmp_2))
    return final_output
method_1_output = method_1()
method_2_output = method_2()
print(method_1_output.shape)
print(method_2_output.shape)
# np.allclose rather than ==: the two methods sum in different orders,
# so the results agree only up to floating-point rounding.
print(np.allclose(method_1_output, method_2_output))
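In case it helps to frame the question: the only loop-free workaround I could think of is to flatten the two dimensions into a single broadcast dimension with np.repeat/np.tile and reshape afterwards. Below is a NumPy-only sketch of that indexing idea, using a hypothetical fake_circuit in place of the real QNode, so it only demonstrates the repeat/tile/reshape bookkeeping, not PennyLane's actual broadcasting behaviour:

```python
import numpy as np

# Small stand-in sizes (assumed, just for the sketch)
input_batch, input_size = 4, 8
ry_size, ry_dim = 2, 5

input_feature = np.random.rand(input_batch, input_size)
ry_weight = np.random.rand(ry_size, ry_dim)

# fake_circuit is a hypothetical stand-in for the broadcast circuit:
# it takes a batch of inputs (B, input_size) and a matching batch of
# weight pairs (ry_size, B), returning one output row per element.
def fake_circuit(feats, w):
    return feats * (w[0] + w[1])[:, None]

# Loop version, analogous to method_1: broadcast over the batch,
# loop over ry_dim, and accumulate the sum.
loop_out = np.zeros((input_batch, input_size))
for k in range(ry_dim):
    w_k = np.broadcast_to(ry_weight[:, k:k + 1], (ry_size, input_batch))
    loop_out += fake_circuit(input_feature, w_k)

# Single-broadcast version: repeat each input row ry_dim times and
# tile the weight columns input_batch times, so row i of the big
# batch pairs input i // ry_dim with weight column i % ry_dim.
feats_big = np.repeat(input_feature, ry_dim, axis=0)  # (input_batch * ry_dim, input_size)
w_big = np.tile(ry_weight, (1, input_batch))          # (ry_size, input_batch * ry_dim)
out_big = fake_circuit(feats_big, w_big)              # (input_batch * ry_dim, input_size)

# Un-flatten and sum out the ry_dim axis.
final = out_big.reshape(input_batch, ry_dim, input_size).sum(axis=1)

assert np.allclose(final, loop_out)
```

If the device accepts a batched AmplitudeEmbedding together with RY angles of the same batch size, the same repeat/tile trick might apply to circuit directly, at the cost of a single broadcast of size input_batch * ry_dim.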