@CatalinaAlbarnoz,
I also tried the circuit-cutting approach, which works both in the demo and in my code when I change the number of qubits (wires on the device). However, if I run the code on the entire dataset with batch size 32, I hit the same problem: it runs indefinitely without producing any results. Could you try running the code I shared with you on GitHub?
The only way I have found to get results is to use a reduced dataset (30 images instead of 300) with batch size 1. I trained the model for 200 iterations, but when I try to plot the images with this code I get an error:
fig = plt.figure(figsize=(10, 5))
outer = gridspec.GridSpec(5, 2, wspace=0.1)
for i, images in enumerate(results):
    inner = gridspec.GridSpecFromSubplotSpec(1, images.size(0),
                                             subplot_spec=outer[i])
    images = torch.squeeze(images, dim=1)
    for j, im in enumerate(images):
        ax = plt.Subplot(fig, inner[j])
        ax.imshow(im.numpy())
        ax.set_xticks([])
        ax.set_yticks([])
        if j == 0:
            ax.set_title(f'Iteration {50+i*100}', loc='left')
        fig.add_subplot(ax)
plt.show()
Invalid shape (3, 64, 64) for image data
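For context, matplotlib's imshow expects color images channel-last as (H, W, 3), while PyTorch tensors are channel-first (C, H, W), which is what the error message is complaining about. A minimal sketch of the conversion, using a random tensor in place of the real images:

```python
import torch

# PyTorch stores color images channel-first: (C, H, W)
img = torch.rand(3, 64, 64)

# matplotlib's imshow wants channel-last: (H, W, 3)
hwc = img.permute(1, 2, 0).numpy()
print(hwc.shape)  # (64, 64, 3)
```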
I tried to transpose or permute the images:
for k in range(len(test_images)):
    fig, axs = plt.subplots(1, 1, sharey=False, tight_layout=True,
                            figsize=(2, 2), facecolor='white')
    # axs.matshow(np.squeeze(test_images[k].permute(1,2,0)))
    axs.matshow(test_images[k].T)
But the result is this:
Do you think I made a mistake in visualizing the images, or is there a problem during training?
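One detail I noticed while debugging: on a 3-D tensor, `.T` reverses all dimensions, turning (C, H, W) into (W, H, C), so even when the shape becomes valid the image comes out spatially transposed, whereas `permute(1, 2, 0)` only moves the channel axis. A quick check with a dummy tensor (H != W so the difference is visible):

```python
import torch

img = torch.zeros(3, 64, 32)  # (C, H, W)

# .T reverses ALL dims: (C, H, W) -> (W, H, C), i.e. transposed image
print(img.T.shape)                 # torch.Size([32, 64, 3])

# permute(1, 2, 0) moves channels last without swapping H and W
print(img.permute(1, 2, 0).shape)  # torch.Size([64, 32, 3])
```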