Making quanvolutional neural net weights trainable

Hi @glassnotes,

I updated the code in the post above.

It should now work out of the box. If not, check out my repo on GitHub:

Best regards,

@dymat thank you for posting the code! I will try to run it and get back to you ASAP :slight_smile:


Thanks @dymat, it’s very cool that you had some positive results training on MNIST (at least for the 1 quantum layer). If you’re interested, I think this would make a great demo! We have just launched a community space for demos. Since you already have a Jupyter Notebook, it should just be a case of creating an issue here (see instructions).

On to your question about the gradient for two layers: I had a quick look and unfortunately nothing jumped out as a problem. I might have a chance to dig a bit deeper next week, but for now I would suggest some debugging. For example, what if we swapped out the quantum circuit (self.circuit) with a simple torch function? If you still aren’t getting a gradient in the first layer, it’s probably an issue with the big for loop in forward(). Otherwise, we can look into whether it’s an issue on the PL side.
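To make the suggested debugging step concrete, here is a minimal sketch of the idea. The class and attribute names (`ToyQuanvLayer`, `self.circuit`) are hypothetical stand-ins for the layer in the original code; the point is only to show how replacing the quantum circuit with an ordinary differentiable torch function lets you test whether the patch-wise for loop in `forward()` preserves autograd:

```python
import torch
import torch.nn as nn

class ToyQuanvLayer(nn.Module):
    """Stand-in for a quanvolutional layer; names are illustrative only."""
    def __init__(self, n_weights=4):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(n_weights))
        # Swap-in for the quantum circuit: any differentiable torch function
        # of (patch, weights) will do for this gradient check.
        self.circuit = lambda patch, w: torch.sum(torch.sin(patch) * w)

    def forward(self, x):
        # Mimic the big for loop over 2x2 image patches.
        out = []
        for i in range(0, x.shape[-2] - 1, 2):
            for j in range(0, x.shape[-1] - 1, 2):
                patch = x[..., i:i + 2, j:j + 2].reshape(-1)
                out.append(self.circuit(patch, self.weights))
        return torch.stack(out)

layer = ToyQuanvLayer()
x = torch.randn(1, 1, 4, 4)
loss = layer(x).sum()
loss.backward()
print(layer.weights.grad)  # non-None: the loop itself preserves gradients
```

If `weights.grad` is populated here but not with the real circuit plugged in, the problem is on the quantum-circuit/interface side rather than in the loop.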

Hope this helps a bit!

@dymat I went through your code and ran it a few times with some changes. I haven’t found the source of the gradient problem yet, but I will keep looking and update you if I find something!

Hi @Tom_Bromley,

publishing our notebook as a demo sounds cool. I would use the one-layer example. Can we change the demo notebook later on if we find the source of the gradient problem in two-layer networks?

Best regards,

OK, I’ve checked the instructions and see that my question is obsolete, since the code lives in my repo.


Hi @dymat, awesome, we’re looking forward to seeing your demo! Let us know if you have any further questions while getting it ready.

Hi @glassnotes, the demo request is submitted.

Best regards,