Quantum GAN for RGB images

Hi, I trained the quantum GAN model for 800 epochs and the result is the same. How can we solve this problem? Did you obtain good images with your dataset?

Iteration: 10, Discriminator Loss: 1.313, Generator Loss: 0.624
Iteration: 20, Discriminator Loss: 1.154, Generator Loss: 0.634
Iteration: 30, Discriminator Loss: 0.906, Generator Loss: 0.650
Iteration: 40, Discriminator Loss: 0.761, Generator Loss: 0.677
Iteration: 50, Discriminator Loss: 0.732, Generator Loss: 0.717
Iteration: 60, Discriminator Loss: 0.660, Generator Loss: 0.753
Iteration: 70, Discriminator Loss: 0.630, Generator Loss: 0.786
Iteration: 80, Discriminator Loss: 0.612, Generator Loss: 0.819
Iteration: 90, Discriminator Loss: 0.564, Generator Loss: 0.855
Iteration: 100, Discriminator Loss: 0.535, Generator Loss: 0.891
Iteration: 110, Discriminator Loss: 0.516, Generator Loss: 0.923
Iteration: 120, Discriminator Loss: 0.494, Generator Loss: 0.959
Iteration: 130, Discriminator Loss: 0.465, Generator Loss: 0.996
Iteration: 140, Discriminator Loss: 0.444, Generator Loss: 1.037
Iteration: 150, Discriminator Loss: 0.421, Generator Loss: 1.081
Iteration: 160, Discriminator Loss: 0.400, Generator Loss: 1.121
Iteration: 170, Discriminator Loss: 0.381, Generator Loss: 1.157
Iteration: 180, Discriminator Loss: 0.364, Generator Loss: 1.210
Iteration: 190, Discriminator Loss: 0.340, Generator Loss: 1.261
Iteration: 200, Discriminator Loss: 0.321, Generator Loss: 1.301
Iteration: 210, Discriminator Loss: 0.305, Generator Loss: 1.348
Iteration: 220, Discriminator Loss: 0.288, Generator Loss: 1.395
Iteration: 230, Discriminator Loss: 0.269, Generator Loss: 1.459
Iteration: 240, Discriminator Loss: 0.255, Generator Loss: 1.510
Iteration: 250, Discriminator Loss: 0.234, Generator Loss: 1.575
Iteration: 260, Discriminator Loss: 0.223, Generator Loss: 1.633
Iteration: 270, Discriminator Loss: 0.204, Generator Loss: 1.700
Iteration: 280, Discriminator Loss: 0.193, Generator Loss: 1.756
Iteration: 290, Discriminator Loss: 0.179, Generator Loss: 1.824
Iteration: 300, Discriminator Loss: 0.191, Generator Loss: 1.869
Iteration: 310, Discriminator Loss: 0.160, Generator Loss: 1.925
Iteration: 320, Discriminator Loss: 0.167, Generator Loss: 2.029
Iteration: 330, Discriminator Loss: 0.145, Generator Loss: 2.033
Iteration: 340, Discriminator Loss: 0.132, Generator Loss: 2.104
Iteration: 350, Discriminator Loss: 0.128, Generator Loss: 2.137
Iteration: 360, Discriminator Loss: 0.118, Generator Loss: 2.209
Iteration: 370, Discriminator Loss: 0.109, Generator Loss: 2.289
Iteration: 380, Discriminator Loss: 0.105, Generator Loss: 2.323
Iteration: 390, Discriminator Loss: 0.097, Generator Loss: 2.406
Iteration: 400, Discriminator Loss: 0.093, Generator Loss: 2.446
Iteration: 410, Discriminator Loss: 0.089, Generator Loss: 2.483
Iteration: 420, Discriminator Loss: 0.084, Generator Loss: 2.530
Iteration: 430, Discriminator Loss: 0.077, Generator Loss: 2.623
Iteration: 440, Discriminator Loss: 0.080, Generator Loss: 2.577
Iteration: 450, Discriminator Loss: 0.077, Generator Loss: 2.654
Iteration: 460, Discriminator Loss: 0.074, Generator Loss: 2.664
Iteration: 470, Discriminator Loss: 0.067, Generator Loss: 2.754
Iteration: 480, Discriminator Loss: 0.068, Generator Loss: 2.759
Iteration: 490, Discriminator Loss: 0.065, Generator Loss: 2.776
Iteration: 500, Discriminator Loss: 0.062, Generator Loss: 2.821

<img src="upload://j93LBZHgBfMUf2o2yJ8J3GNDgn5.png" alt="image.png" width="475" height="505">

Hi @Eleonora_Panini, you mentioned that theoretically a quantum generator should be more performant than a standard one. However, this is not true. Quantum versions of classical algorithms perform worse most of the time; they only perform better on tasks that are actually well suited for quantum computers. I’m not saying that you cannot get a quantum GAN to work. As you can see in the PennyLane demo, you can indeed get it to work for one specific dataset, but there is no guarantee that it will work for all datasets or in any reasonable time.

This blog post by one of the best researchers in the field of Quantum Machine Learning can give you a very good perspective on this.

Let me know if you have any questions.

Thank you for the information. The training completes, so the code seems to work, but the resulting images are black. Maybe, as you say, a quantum GAN is not suitable for an RGB dataset. I can try to apply the same model to greyscale 64x64 images in order to verify whether it works.
Indeed, I read these articles:

https://arxiv.org/pdf/2212.11614.pdf
https://arxiv.org/pdf/2010.06201.pdf
And their datasets are composed of greyscale images, like handwritten digits. Maybe an RGB dataset is not suitable for a quantum GAN.
Although I found this paper where a CNN for RGB image classification (not generation) was implemented:

https://arxiv.org/pdf/2107.11099.pdf
Another problem may be that working with an RGB dataset requires an HPC system rather than a laptop, which is suitable only for greyscale images, because of the complexity of RGB.

I know that this model works for grayscale images.


@Eleonora_Panini @CatalinaAlbornoz, can we use a quantum GAN for RGB images with a different quantum circuit? Is there any general-purpose quantum GAN circuit that is useful for most datasets, or is that a limitation?

I didn’t find any model, paper, or code for a quantum GAN with RGB images posted online. Our model is the patch quantum GAN and it is a hybrid model; some papers that I linked in the previous message discuss this, but they are applied to greyscale images. I think our implementation for RGB is correct because the training works. The problem with the wrong output may be that the network with RGB requires more performant hardware, or that we need to change something in the quantum generator. I noticed that in the standard DCGAN in PyTorch the normal generator takes as input a 4-dimensional tensor (batch_size, latent_dim, 1, 1), whereas the quantum generator takes a 2-dimensional tensor (batch_size, n_qubits). Our images are packed as (batch_size, 64, 64, 3), so maybe we need to change something in the quantum generator. Can we work together to solve the problem and obtain the correct output? Is there someone on the PennyLane team who can help us?

Do you think we should have three generators for the three channels, i.e., RGB? Then we could concatenate each channel into one big image.

In the greyscale demo there were 4 sub-generators, and in our RGB model I increased them to 6. I think this is correct because I ran a lot of tests to adapt the model to 64x64 images; besides increasing the number of qubits from 5 to 13 and the ancillary qubits from 1 to 2, it was necessary to increase the number of sub-generators to 6. Do you mean 3 generators for each of the 3 channels, i.e. 3x3 = 9 generators? I can try that, but I think the issue may be the structure of the patch circuit and generator. Maybe it is structured only for greyscale images, so it does not consider the 3 channels, but I don’t know how to change it.

So there is a formula that relates the number of qubits to the image size. The patch method produces

image size = Ng * 2^(N - Na)

pixels, where Ng is the number of sub-generators, N the number of qubits, and Na the number of ancillary qubits.

So in our case Ng = 6, N = 13, and Na = 2, giving Ng * 2^(N - Na) = 6 * 2^(13-2) = 12288.
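As a quick sanity check, the formula can be evaluated in a few lines of Python (the function name is just for illustration):

```python
# Patch-method pixel count: each of the n_g sub-generators outputs
# 2^(n - n_a) pixels, where n is the qubit count and n_a the ancillas.
def total_pixels(n_g, n, n_a):
    return n_g * 2 ** (n - n_a)

print(total_pixels(6, 13, 2))  # 12288, the case above
print(total_pixels(4, 12, 2))  # 4096 = 64 * 64
```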

What I am proposing is to use the same quantum circuit 3 times, once for each channel, and then concatenate the results into one image.

So, we can use 4 sub-generators with 12 qubits and 2 ancillary qubits. We get 4 * 2^(12-2) = 4096 = 64 x 64 = image size. We can run this quantum circuit 3 times and then concatenate the results.

Ok, so in this way the result is always 12288 (4096 x 3). Should we replicate the @qnode section 3 times? Is it enough to rewrite the code 3 times, or is there another way to replicate it? I also don’t know how to concatenate the circuits; I can’t find any docs about this on PennyLane. Do you have an idea for the code implementation?

I was thinking more like this:

fake_red = generator(noise)
fake_green = generator(noise)
fake_blue = generator(noise)

fake_red = fake_red * 255/0.299
fake_green = fake_green * 255/0.587
fake_blue = fake_blue * 255/0.114 # 0.114 is the blue luma coefficient

fake = torch.cat((fake_red,fake_green,fake_blue),0)

This is just my assumption. Do you think this would be helpful?
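To see what this does to the shapes, here is a sketch with NumPy standing in for the quantum generator (the generator below is a random stand-in, not the real circuit):

```python
import numpy as np

batch_size, pixels = 32, 4096  # 4096 = 64 x 64 pixels per channel

def generator(noise):
    # Stand-in: the real quantum generator also returns (batch, pixels).
    return np.random.rand(noise.shape[0], pixels)

noise = np.random.rand(batch_size, 12)
fake_red, fake_green, fake_blue = (generator(noise) for _ in range(3))

# Concatenating along dim 0 stacks the batches: (96, 4096), i.e. three
# batches of single-channel images, not one batch of RGB images.
stacked = np.concatenate((fake_red, fake_green, fake_blue), 0)
# Concatenating along the last dim appends channels per sample: (32, 12288).
per_sample = np.concatenate((fake_red, fake_green, fake_blue), -1)
print(stacked.shape, per_sample.shape)
```

This is why the choice of concatenation dimension matters for building one RGB image per sample.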

I inserted this block into the training cell, with the parameters n_qubits=12, ancillary=2, depth=2, sub-generators=4:

noise = torch.rand(batch_size, n_qubits, device=device) * math.pi / 2 # noise shape: (32, 12)
fake_red = generator(noise)
fake_green = generator(noise)
fake_blue = generator(noise)

fake_red = fake_red * 255/0.299
fake_green = fake_green * 255/0.587
fake_blue = fake_blue * 255/0.114

fake_data = torch.cat((fake_red,fake_green,fake_blue),-1)

[image: generated output]

Okay, so now the white colors are coming. Maybe try increasing the num_iter and q_depth. See if it changes anything.

And also, what did you do with test_images? We may need test_images_red, test_images_green, and test_images_blue.

Yes, I changed the test images:

if counter % 10 == 0:
    print(f'Iteration: {counter}, Discriminator Loss: {errD:0.3f}, Generator Loss: {errG:0.3f}')

test_images_red = generator(fixed_noise)
test_images_green = generator(fixed_noise)
test_images_blue = generator(fixed_noise)
test_images_red = test_images_red * 255/0.299
test_images_green = test_images_green * 255/0.587
test_images_blue = test_images_blue * 255/0.114

test_images = torch.cat((test_images_red,test_images_green,test_images_blue),-1).view(batch_size,3,image_size,image_size).cpu().detach()
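The final reshape can be checked in isolation: because the red, green, and blue blocks are contiguous within each row after the cat along -1, viewing as (batch, 3, 64, 64) recovers channel-first images. A NumPy sketch with a dummy tensor:

```python
import numpy as np

batch_size, image_size = 32, 64
# Dummy stand-in for torch.cat((red, green, blue), -1): per sample,
# 4096 red values, then 4096 green, then 4096 blue.
flat = np.arange(batch_size * 3 * image_size * image_size).reshape(
    batch_size, 3 * image_size * image_size)

# Equivalent of .view(batch_size, 3, image_size, image_size):
imgs = flat.reshape(batch_size, 3, image_size, image_size)
print(imgs.shape)  # (32, 3, 64, 64)
# Channel 0 of sample 0 is exactly the first 4096 values, i.e. red.
print((imgs[0, 0].ravel() == flat[0, :image_size * image_size]).all())
```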

After 100 epochs with a learning rate of 0.01 for both the generator and the discriminator, the output is:
The generator loss in this case also tends to reach 100.000, so it is underfitting. Do you think the learning rate should be higher for the generator and lower for the discriminator, or equal?
I’m trying now with depth=4, if the RAM doesn’t crash.

So I looked at the output for the real images and the fake ones.

If you look at the matrices, you can see the differences:

real_batch = next(iter(dataloader))
a = real_batch[0]
a = a.numpy()
print(a[0][1][:][:])
print(test_images[0][1][:][:].numpy())

Output:

real_batch = 
[[-0.7882353  -0.5686275  -0.5529412  ... -0.5529412  -0.6156863
  -0.3333333 ]
 [-0.5137255   0.45882356  0.5529412  ...  0.5529412   0.32549024
  -0.21568626]
 [-0.49019605  0.58431375  0.6627451  ...  0.6784314   0.41960788
  -0.19215685]
 ...
 [-0.7647059  -0.4588235  -0.42745095 ...  0.5921569   0.41960788
  -0.41960782]
 [-0.79607844 -0.5137255  -0.4823529  ...  0.41176474  0.38823533
  -0.3490196 ]
 [-0.85882354 -0.84313726 -0.827451   ... -0.4352941  -0.4352941
  -0.7019608 ]]

test_images = 
[[4.09724344e-05 2.33332589e-08 8.59169575e-07 ... 5.23343006e-05
  4.02622973e-04 5.17608505e-03]
 [6.01778098e-04 2.94335138e-07 1.33410676e-05 ... 1.04329595e-03
  8.02774075e-03 1.03195146e-01]
 [7.78472167e-04 4.51466207e-07 1.62094675e-05 ... 1.60375261e-03
  1.23383729e-02 1.58619478e-01]
 ...
 [7.70972390e-03 7.33722979e-03 9.63792700e-05 ... 4.95565534e-02
  1.57917396e-03 3.70057487e+00]
 [4.66814981e-06 5.19347759e-06 8.13918533e-08 ... 3.49226434e-06
  1.05182934e-07 2.58500251e-04]
 [1.12271938e-03 1.06966053e-03 1.40698239e-05 ... 7.15183932e-03
  2.27871569e-04 5.34043968e-01]]

So I think we need to normalize the fake ones.

The values are really different; I agree with you about the normalization. We can try torchvision.transforms.Normalize(). Or do you have other solutions for the normalization? We could normalize the test images during training, for instance:
torchvision.transforms.Normalize(mean, std)(test_images).view(batch_size, 3, 64, 64).cpu().detach()

Try normalizing it, either with transforms.Normalize() or with another approach you find online.

I don’t understand which values I have to insert into the function as the mean and standard deviation.
This is an example from
https://www.geeksforgeeks.org/how-to-normalize-images-in-pytorch/amp/
Which values do I need to put in place of 1 and 2? The mean of all the values of the fake-images matrix?

mean, std = img_tr.mean([1, 2]), img_tr.std([1, 2])
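In that example, 1 and 2 are not values to replace; they are the dimension indices (height and width) of a channel-first image tensor, so the mean and std come out per channel. A NumPy sketch of the same computation, with img as a hypothetical (3, H, W) tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((3, 64, 64)) * 5.0  # hypothetical (channels, H, W) image

# Per-channel mean/std over the spatial dims (1, 2): these are the
# values transforms.Normalize expects for a 3-channel image.
mean = img.mean(axis=(1, 2))
std = img.std(axis=(1, 2))

# Normalize computes (x - mean) / std channel-wise; same via broadcasting:
normalized = (img - mean[:, None, None]) / std[:, None, None]
print(normalized.mean(axis=(1, 2)))  # ~0 for each channel
print(normalized.std(axis=(1, 2)))   # ~1 for each channel
```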

I tried with:
test_images_red = (test_images_red * 255/0.299).normal_()
test_images_green = (test_images_green * 255/0.587).normal_()
test_images_blue = (test_images_blue * 255/0.114).normal_()
This is test_images:

[[-1.1716092   2.6942506   0.54948026 ... -0.19643615 -0.12131502
   0.46801642]
 [ 0.00794261 -0.4683865  -1.4243741  ... -0.5513503  -2.0491612
   1.0240767 ]
 [-0.01530833  0.7667712  -2.8037696  ...  0.22845522  0.54489636
   0.89558566]
 ...
 [ 0.47998622 -0.8613797   0.37270096 ... -0.15087974 -0.26750377
   1.2982064 ]
 [-1.4559369  -0.02880584  0.48121873 ...  1.2115003  -0.5613345
  -0.31234595]
 [ 0.9679267   0.87197465  1.2925458  ...  1.2322869   0.91305363
  -0.02658005]]

But there is a problem with the loss (this is from epochs 400 to 500):

[image: loss plot, epochs 400 to 500]

Try increasing it to 1000 or more.