Reduce preprocessing time

Hey,

I have 20000 images and was using the preprocessing mentioned in the quanvolutional blog post for MNIST grayscale images.

The problem is that it’s taking 20 seconds for 10 images, so it will take more than 10 hours just for preprocessing.

Any faster approaches or suggestions?

Hey @Jay_Timbadia! Just want to make sure that you’re talking about this demo?

This one.

Okay good we’re talking about the same one! :sweat_smile:

Is this the code block that’s specifically causing you long runtimes?

Yes. Maybe it’s made for QPUs and I am running on a CPU — is that the issue? But is there a way to make it faster?

Well, there are 14^2 = 196 circuit evaluations every time you call the function quanv. Then, in this preprocessing code block, quanv gets called once for every image in your dataset. Therefore, 196n circuit evaluations need to happen, where n is the number of images in your dataset. That’s a lot of circuit evaluations when n = 20,000, and that part of your problem is unavoidable. Is the circuit the issue?
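To make that count concrete, here’s a quick back-of-the-envelope sketch. It assumes 28×28 MNIST images scanned with 2×2 patches at stride 2 (as in the demo), and uses the ~2 s/image timing you reported — these are estimates, not measurements:

```python
# Rough evaluation count for the quanvolutional preprocessing step.
# Assumes 28x28 MNIST images scanned with 2x2 patches at stride 2.

image_size = 28
stride = 2  # each 2x2 patch feeds the 4-qubit circuit

patches_per_image = (image_size // stride) ** 2   # 14 * 14 = 196
n_images = 20_000
total_evals = patches_per_image * n_images        # 196 per image

seconds_per_image = 2.0  # ~20 s per 10 images, from the report above
est_hours = n_images * seconds_per_image / 3600

print(patches_per_image)    # 196
print(total_evals)          # 3920000
print(round(est_hours, 1))  # 11.1
```

So you’re looking at nearly 4 million circuit evaluations and roughly 11 hours at the current per-image rate, which matches your estimate.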

For the given circuit in the demo, there are 4 qubits and 2 layers (a qml.RY layer and then one layer of qml.templates.RandomLayers). I’m not entirely convinced that the circuit size is causing big runtime issues. Which leads me to the device!

Try using dev = qml.device("lightning.qubit", wires=4) in place of the "default.qubit" device. It’s a much faster simulator device that has a C++ backend :slight_smile:.

Other than that, I’m not sure there are other things one can do to expedite the preprocessing. But, let me know how "lightning.qubit" works for you!