I had been using lightning.qubit in my hybrid neural network with 10 qubits and 100 epochs, and execution took only 1.1561 hours, but when I switched the simulator to lightning.gpu, the same run took 18 hours. I am using a single NVIDIA GeForce RTX 3080 with 10 GB of VRAM, and while the program runs the utilization goes up to 60 percent.
lightning.gpu incurs significant time overheads for small numbers of qubits. At around 20 or more qubits you can start to see a performance improvement from using the GPU over the CPU.
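A rough back-of-envelope sketch (plain Python, not PennyLane) of why the crossover sits around 20 qubits: a dense statevector holds 2^n complex amplitudes, so at 10 qubits the whole state is only a few kilobytes and host-to-GPU transfer plus kernel-launch overhead swamps the actual work, while at 20+ qubits there is enough data for the GPU's parallelism to pay off. The helper function here is illustrative, not part of any library.

```python
# Back-of-envelope: memory footprint of a dense n-qubit statevector.
# Each of the 2**n amplitudes is stored as a complex128 (16 bytes).

def statevector_bytes(n_qubits: int) -> int:
    """Bytes needed to store a dense complex128 statevector."""
    return (2 ** n_qubits) * 16

# 10 qubits: ~16 KiB. The workload is tiny, so per-circuit GPU
# overheads (transfers, kernel launches) dominate the runtime.
print(statevector_bytes(10))  # -> 16384

# 20 qubits: ~16 MiB. Now there is enough parallel work per gate
# for the GPU to start outperforming the CPU simulator.
print(statevector_bytes(20))  # -> 16777216
```

This is only a memory-size heuristic; the actual crossover point also depends on circuit depth, gate types, and how often results are copied back to the host.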
I hope this helps!
Thank you for clarifying this! I also checked everything, especially cuQuantum and CUDA, and they work just fine. Your answer is really helpful!
I’m glad @aouie! Let me know if you have any further questions.