# Normally distributed random numbers using quantum circuits

I understand that creating normally distributed random numbers using quantum circuits is not straightforward, since quantum measurement outcomes follow the Born rule rather than a normal distribution.

However, I used the well-known Box-Muller transformation, which generates normally distributed numbers from uniformly distributed ones. Here’s a simple example using PennyLane to generate uniformly distributed random numbers:

```python
import pennylane as qml
from pennylane import numpy as np
import matplotlib.pyplot as plt
from scipy.special import kl_div

n_qubits = 10
shots = 10000

dev = qml.device('default.qubit', wires=n_qubits, shots=shots)

@qml.qnode(dev)
def qnode():
    for i in range(n_qubits):
        qml.Hadamard(wires=i)  # put each qubit into superposition so outcomes are 50/50
    return [qml.sample(qml.PauliZ(wires=i)) for i in range(n_qubits)]

def generate_normal_distribution_quantum():
    samples = qnode()  # this gives you a list of numpy arrays
    samples = np.concatenate(samples)  # concatenate arrays into one
    samples = (samples + 1) / 2  # map the {-1, 1} outcomes to {0, 1}

    # the random number generated from each qubit is now uniformly distributed in {0, 1}
    uniform_random_numbers = samples.flatten()

    u1 = uniform_random_numbers[::2]
    u2 = uniform_random_numbers[1::2]

    # if u1 contains zeros, replace them with another random number
    u1 = np.where(u1 == 0, np.random.uniform(1e-9, 1.0, size=len(u1)), u1)

    # create normally distributed numbers from the uniform ones via the Box-Muller transformation
    normal_random_numbers_quantum = np.sqrt(-2 * np.log(u1)) * np.cos(2 * np.pi * u2)

    # remove NaN values, if any
    normal_random_numbers_quantum = normal_random_numbers_quantum[~np.isnan(normal_random_numbers_quantum)]

    return normal_random_numbers_quantum

def generate_normal_distribution_numpy(size):
    return np.random.normal(0, 1, size)

def calculate_kl_divergence(dist1, dist2, bins=50):
    hist1, bins1 = np.histogram(dist1, bins=bins, density=True)
    hist2, _ = np.histogram(dist2, bins=bins1, density=True)

    # calculate the KL divergence (the small offset avoids log(0))
    return kl_div(hist1 + 1e-8, hist2 + 1e-8).sum()

def plot_distributions(dist1, dist2, bins=50):
    plt.figure(figsize=(10, 6))
    plt.hist(dist1, bins=bins, density=True, alpha=0.5, label='Quantum generated')
    plt.hist(dist2, bins=bins, density=True, alpha=0.5, label='NumPy generated')
    plt.title('Comparison of normal distributions')
    plt.legend()
    plt.show()

def main():
    quantum_dist = generate_normal_distribution_quantum()
    numpy_dist = generate_normal_distribution_numpy(len(quantum_dist))

    kl_divergence = calculate_kl_divergence(numpy_dist, quantum_dist)
    print(f"KL Divergence: {kl_divergence}")

    plot_distributions(quantum_dist, numpy_dist)

if __name__ == "__main__":
    main()
```

However, the plots do not make sense. Can anyone take a look?

Hey @Solomon,

I’m not super familiar with this transformation, but a couple of things about your implementation stand out as slightly questionable to me:

1. `u1` and `u2` need to be uniformly sampled in the interval (0, 1). In your `generate_normal_distribution_quantum` function, `u1` and `u2` don’t appear to be that: each entry is exactly 0 or exactly 1, not a continuous value in between.

2. Related to the first point, but it makes more sense to me to have a quantum circuit that directly generates uniformly distributed samples in (0, 1) (or -1 to 1 with appropriate post-processing).

Best of luck!

Thanks Issac,
Yes … this was late-night programming.

My goal here is to implement a Quantum Generative Adversarial Network (QGAN). A critical component of this endeavor is the ability to sample from a normal distribution. However, since the available quantum random number generator only produces 1 and -1, I need a way to create continuous random numbers between -1 and 1 (or 0 and 1). The question at hand is whether it’s feasible to train and optimize a Parametrized Quantum Circuit (PQC) to generate a Gaussian distribution with zero mean. If this is possible, the corresponding code would be very beneficial. I saw this example: Quantum generative adversarial networks with Cirq + TensorFlow, but as far as I can see, it is NOT drawing samples from a true Gaussian distribution.

I found this paper, which was very informative: https://arxiv.org/pdf/1904.00043.pdf
But it still requires an involved training scheme.
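If a trained circuit turns out to be more machinery than you need, one training-free alternative is to pack the raw ±1 outcomes into uniform floats and push them through the inverse Gaussian CDF (the probability integral transform). This is a sketch: the `raw_bits` array below is a classical stand-in for the quantum RNG’s outputs, and the sizes are arbitrary:

```python
import numpy as np
from scipy.stats import norm

n_bits = 16       # bits of precision per sample
n_samples = 5000

# stand-in for the quantum RNG: i.i.d. outcomes in {-1, +1}
raw_bits = np.random.choice([-1, 1], size=(n_samples, n_bits))

# map {-1, +1} -> {0, 1}, then read each row as a binary fraction in [0, 1)
bits01 = (raw_bits + 1) // 2
u = bits01 @ (2.0 ** -(np.arange(n_bits) + 1))

# keep u strictly inside (0, 1) so the inverse CDF stays finite
u = np.clip(u, 1e-12, 1 - 1e-12)

# inverse-CDF transform: if u ~ Uniform(0, 1), then norm.ppf(u) ~ N(0, 1)
z = norm.ppf(u)
```

Unlike Box-Muller, this consumes one uniform per normal sample and needs no pairing of streams, at the cost of evaluating `norm.ppf` classically.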

> A critical component of this endeavor is the ability to sample from a normal distribution.

Are you talking about how the parameters are initialized? If so, I think `np.random.normal` should suffice. Maybe getting a quantum circuit to replicate this behaviour is a little overkill!

Hello Issac,
No, I’m referring to the distribution Z, from which the generator of the GAN pulls its input samples.

I found this in Julia, which does exactly what I want to do with pennylane:
https://docs.yaoquantum.org/dev/generated/examples/6.quantum-circuit-born-machine/index.html

Do you know of any successful attempt at this using PennyLane?
Thanks.

> No, I’m referring to the distribution Z, from which the generator of the GAN pulls its input samples.

I’m not sure I completely understand, but if it’s the samples that the generator generates and then feeds into the discriminator, then the underlying distribution that defines how the generator generates samples won’t be a normal distribution in general. The generator itself can be thought of as some complex, multi-dimensional probability distribution.

> I found this in Julia, which does exactly what I want to do with pennylane:
> Quantum Circuit Born Machine · Documentation | Yao

The circuit used here (screenshot) can most certainly be implemented in PennyLane. It was simply trained to learn a Gaussian distribution. You’d be able to follow pretty much exactly what’s outlined there in Yao, but in PennyLane!