How to use the generator in the QGAN example

Could someone teach me how to use the generator in the QGAN example to generate the state after it is trained? Thanks!

Hi @cubicgate! Are you referring to this tutorial here? https://pennylane.ai/qml/demos/tutorial_QGAN.html

Yes, @josh. I would like to know how to use the generator to generate the state. Thanks!

Hi @cubicgate!

Here is a general description of how to inspect the generator:

First we need to decide what information we need from the generator. This may be one or more expectation values, in which case the generator function should return qml.expval(<observable1>), qml.expval(<observable2>), .... For the QGAN tutorial, we might want to look directly at the state. Since PennyLane focuses on accessible information, i.e., expectation values of observables, getting the state is a bit less direct. In this case, we can simply return the expectation value of the identity observable, qml.expval(qml.Identity(0)), and query the state of the device after the QNode has been evaluated (see further down).

We then turn the generator function into a QNode by putting the decorator @qml.qnode(dev, ...) above the function definition.

The final step would be to evaluate the generator QNode using the weights we have trained: generator_output = generator(trained_weights). If we want the state of the generator, we can then do: state = device.state after evaluating the generator. Note that the state will be returned for all the qubits specified in the device.

Concretely for the QGAN tutorial

Instead of looking at the state of the first qubit for the generator and the real circuit, we can more easily evaluate their Bloch vectors. This can be done with the following lines of code at the end of the tutorial:

# create new functions with a modified signature
def generator_test(w, wires):
    qml.RX(w[0], wires=0)
    qml.RX(w[1], wires=1)
    qml.RY(w[2], wires=0)
    qml.RY(w[3], wires=1)
    qml.RZ(w[4], wires=0)
    qml.RZ(w[5], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RX(w[6], wires=0)
    qml.RY(w[7], wires=0)
    qml.RZ(w[8], wires=0)

def real_test(angles, wires):
    qml.Rot(*angles, wires=0)

# create QNodes that give us the Bloch vectors using qml.map
generator_test_qnode = qml.map(generator_test, [qml.PauliX(0), qml.PauliY(0), qml.PauliZ(0)], dev, interface="tf")
real_test_qnode = qml.map(real_test, [qml.PauliX(0), qml.PauliY(0), qml.PauliZ(0)], dev, interface="tf")

# evaluate the Bloch vector for both circuits
b_gen = generator_test_qnode(gen_weights)
b_real = real_test_qnode([phi, theta, omega])

print(b_gen)
print(b_real)

Note that the tutorial is mainly for demonstration purposes and the Bloch vectors might not be close - you can tweak the phi, theta and omega parameters as well as the learning rate and number of optimization steps to find improvements. Note that I also had to change:
qml.CNOT(wires=[1, 2])
to
qml.CNOT(wires=[0, 2])
in the discriminator.

Thanks @Tom_Bromley for your detailed help.

The real state parameters:
phi = 0.9
theta = -1.2
omega = -0.9

with qml.CNOT(wires=[1, 2]) in the discriminator, 50 steps of training,
we got:
predicted: [ 0.00105497 0.01029721 -0.99951743]
real: [-0.57936502 0.73009141 0.36235777]
with Prob(real classified as real): 0.9998971884888306

With qml.CNOT(wires=[0, 2]) in the discriminator and 50, 100, or 300 steps of training, all three runs gave something close to: Prob(real classified as real): 0.9117040708661079

from 50 steps:
predicted: [-0.4526622 0.42149708 0.78275646]
real: [-0.57936502 0.73009141 0.36235777]

from 100 steps:
predicted: [-0.35456184 0.44003057 0.82434738]

from 300 steps:
predicted: [-0.35340893 0.44040161 0.82456812]

In your tutorial, you used qml.expval(qml.PauliZ(2)) to measure the similarity of the two states (real and fake). Should you also use X and Y? Or are there other ways to improve the learning?

Using the real state parameters from your tutorial:
phi = np.pi / 6
theta = np.pi / 2
omega = np.pi / 7

300 steps of discriminator training:
Prob(real classified as real): 0.852622963488102
predicted: [0.63841249 0.30082658 0.70847203]
real: [0.90096882 0.43388376 0. ]

I then trained the discriminator and generator for 200 steps each, twice:
Prob(real classified as real): 0.8528591021895409
predicted: [0.63841484 0.3008326 0.7084675 ]

There seems to be no big change.

Hi @cubicgate!

We’ve had a look at this tutorial again and have added a pull request to improve it: Fix QGAN tutorial by trbromley · Pull Request #78 · PennyLaneAI/qml · GitHub. You can view the updated tutorial here: https://660-214003948-gh.circle-artifacts.com/0/_build/html/demos/tutorial_QGAN.html.

This update includes an analysis of the Bloch sphere representation and the two Bloch vectors are now similar.

To answer your question:

In your tutorial, you used qml.expval(qml.PauliZ(2)) to measure the similarity of the two states (real and fake). Should you also use X and Y? Or are there other ways to improve the learning?

The idea here is that the discriminator circuit has to make a decision on whether its input is real or fake. We use the qubit on wire 2 to encode the decision: the probability of it being in the 0 or 1 state determines our prediction. When training the GAN, we want to have two phases:

  1. updating the discriminator weights to maximize the probability of correctly classifying real data while minimizing the probability of classifying fake data as real;
  2. updating the generator weights to adversarially increase the probability that fake data is classified as real.

By doing this, we should ideally have encoded the fact that the real data circuit state and the generated state should train to be similar. On the other hand, there's no single correct approach to designing the GAN and it might be fun to try out different methods and see how they fare. We also made this tutorial primarily as a demonstration and did not focus too much on perfecting the training.

It would be interesting to try out different datasets: in this tutorial the generator and real data were quantum states - but it’s also possible to use classical real data and have the generator be a QNode outputting expectation values of observables, as I mentioned in the previous post.

Thanks @Tom_Bromley, your code works well. Could you help me understand why you made these changes? For example, what is the benefit of using the Hadamard in the following?
def real(angles, wires=None):
    qml.Hadamard(wires=0)
    qml.Rot(*angles, wires=0)

Thanks @cubicgate,

The most important change was updating the wires of the CNOT gate in the discriminator: from qml.CNOT(wires=[1, 2]) to qml.CNOT(wires=[0, 2]). This makes sense since we want the discriminator to access the first wire where the data is, and without the change the Bloch vectors would not agree.

With this change, the performance of the trained generator still depended upon the choice of angles phi, theta and omega in the real data circuit, i.e., for some choices the Bloch vectors were not very similar. It’s important to remember that this is really a prototype/toy model example QGAN and is not optimized.

With this in mind, I found that adding a Hadamard gate to the circuits allowed things to train nicely.


Thanks @Tom_Bromley for your explanation, which helped me to learn!


Hi @Tom_Bromley, can you share a link to the QGAN tutorial with your changes?
The previous link no longer works: https://660-214003948-gh.circle-artifacts.com/0/_build/html/demos/tutorial_QGAN.html
I am trying to generate good images using another, RGB, dataset. Thank you!

Hey @Eleonora_Panini, are you looking for this? Quantum generative adversarial networks with Cirq + TensorFlow | PennyLane Demos

Yes, thank you, but can the demo work with image generation?

Yep! You’ll need to tweak the purpose of the model to be for an image-learning task, though; the generator needs to generate a measurement that can be post-processed feasibly into an image.
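As a rough illustration of that post-processing idea, expectation values in [-1, 1] can be rescaled to pixel intensities. The helper below, expvals_to_image, is a hypothetical example and not part of the demo:

```python
import numpy as np

# Hypothetical post-processing: map generator expectation values in [-1, 1]
# to 8-bit pixel intensities in [0, 255] and reshape them into an image.
def expvals_to_image(expvals, shape):
    pixels = (np.asarray(expvals) + 1) / 2 * 255
    return pixels.reshape(shape).astype(np.uint8)

# Four toy expectation values reshaped into a 2x2 grayscale "image".
img = expvals_to_image([-1.0, 0.0, 1.0, 0.5], (2, 2))
```

For RGB you would need three such channels per pixel, which multiplies the number of measured values the generator must produce.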

Ok thank you, can you provide an example of the GAN for images integrated with the code from the link?

Hey @Eleonora_Panini,

Unfortunately I don’t have the time/resources to do that :sweat_smile:. This demo might help though! Quantum GANs | PennyLane Demos