I have installed pennylane-lightning-gpu and cuQuantum. Do I still need to run these commands?

After I installed pennylane-lightning-gpu and cuQuantum and set dev = qml.device('lightning.gpu', wires=8), do I need to do anything else?
thanks

Hi @shangshang_shi,

If you use pip install "pennylane-lightning[gpu]", you will get the released version (currently v0.23.0). Unless you explicitly want to add features to the C++ backend and build the library manually, you don't need to run those commands.

Please let me know if this works for you!


Thank you for your help. I installed pennylane-gpu==0.23.0, but my code only runs on PennyLane v0.20.0. When the environment changes to PennyLane v0.23.0, the optimizer doesn't work: the parameters remain unchanged. Can you give me some suggestions? Thank you again.

Hi @shangshang_shi, you’re probably using functions that have been deprecated in the most recent versions of PennyLane.

My suggestion would be to try to update your code to work with our latest version. For this you can follow the following steps:
1 - Run your code and note down the functions that cause errors in your program
2 - Search for those functions in our Release Notes
3 - If the release notes show that the functions you're using have been deprecated, update them to the new recommended functions as stated there.

If you can't find what the new usage is, or if you have any trouble following these steps, then please copy-paste the following information here so that we can help you:
1 - Your full code
2 - The full error traceback
3 - The output of qml.about()

Thanks for your help.
I have implemented the operations on the GPU, and I have confirmed that they run there.
But it doesn't speed anything up; it's slower than running on the CPU.
What could be causing the slowness?
Thanks again.

Hi @shangshang_shi,

I’m glad that you made it work now!

Usually when using a GPU there is an overhead in setting up the GPU device, as well as internal overheads in the cuQuantum library, which makes it slower for small numbers of qubits. As you increase past roughly 20 qubits you will start to see an improvement over a CPU; GPU execution is optimal for workloads in the 21-30 qubit range. We should see improvements with future NVIDIA cuQuantum library releases though, so stay tuned for more performance!

OK, thanks for your answer. You're very kind.
I will test 24 qubits using the GPU device; after that I'd like to discuss the results with you, including the speed.
Thank you again.

Yes, please share your results when you have them @shangshang_shi!