Memory requirement for simulating several qubits

Hi,

I have been trying to use a fully quantum circuit (around 10 qubits) for my QML project on Colab with a GPU. It works well with a small number of qubits, but as I increase the number of qubits and trainable parameters, the memory requirement explodes. In my experiments, 7 qubits and roughly 30-40 parameters already require about 7-8 GB of memory. Since Colab only provides around 15 GB of GPU memory, this is a very large footprint compared to a traditional DL model.

My question is: is there any way to lower this, for example by using a complex64 datatype instead of complex128, or by some other means?
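For reference, here is a minimal sketch of the kind of change I mean, assuming a lightning device that accepts a `c_dtype` keyword to switch the statevector to single precision (I haven't verified this against every device or PennyLane version):

```python
import numpy as np
import pennylane as qml

n_qubits = 7

# Simulators typically store the statevector in complex128 (16 bytes
# per amplitude). Requesting complex64 halves the statevector memory.
# NOTE: c_dtype is an assumption based on the lightning docs; check
# the keyword for your device and version.
dev = qml.device("lightning.qubit", wires=n_qubits, c_dtype=np.complex64)

@qml.qnode(dev)
def circuit(params):
    for i in range(n_qubits):
        qml.RY(params[i], wires=i)
    return qml.expval(qml.PauliZ(0))

params = np.random.uniform(0, 2 * np.pi, size=n_qubits)
print(circuit(params))
```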

Thanks

Hi,

If you would like to have a look at the code, it is available here:

https://github.com/shutengW/QNeRF/blob/main/QNeRF_old.ipynb

I am running this on Colab with a GPU. The memory requirement stays very high even with few qubits.

Thank you for the help!

Hello @Daniel_Wang, an Nvidia A100 on Colab should provide up to 40 GB of RAM. Adding qubits increases RAM requirements exponentially; limiting two-qubit gates can help.

https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/03%20Medical%20Quantum%20Forecast%206-5-23.md
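To make the exponential scaling concrete, here is a quick back-of-the-envelope calculation (plain Python, no quantum library needed):

```python
# A dense n-qubit statevector holds 2**n complex amplitudes.
# complex128 = 16 bytes per amplitude, complex64 = 8 bytes.
for n in (7, 10, 20, 30):
    amps = 2 ** n
    print(f"{n:2d} qubits: {amps * 16 / 2**20:12.4f} MiB (complex128), "
          f"{amps * 8 / 2**20:12.4f} MiB (complex64)")
```

Note that a 10-qubit statevector itself is only kilobytes, so multi-GB usage at 7-10 qubits most likely comes from gradient bookkeeping (e.g., backpropagation storing intermediate states for every parametrized gate) rather than from the state itself.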


Hey @Daniel_Wang!

There's a blurb on the pennylane-lightning-gpu page that contains some tips and tricks for managing memory requirements.

https://docs.pennylane.ai/projects/lightning-gpu/en/latest/devices.html

Scroll down to "Parallel adjoint differentiation support".
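For example, something along these lines; a minimal sketch assuming `lightning.gpu` is installed and that `batch_obs` is still the keyword for the memory/runtime trade-off described on that page:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 10

# Adjoint differentiation avoids storing one intermediate state per
# trainable parameter, so gradient memory stays roughly flat as the
# parameter count grows. batch_obs=True (per the lightning.gpu docs;
# assumed unchanged here) further trades runtime for lower memory by
# batching observables during the adjoint pass.
dev = qml.device("lightning.gpu", wires=n_qubits, batch_obs=True)

@qml.qnode(dev, diff_method="adjoint")
def circuit(params):
    for i in range(n_qubits):
        qml.RX(params[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return qml.expval(qml.PauliZ(0))

params = np.random.uniform(0, np.pi, size=n_qubits, requires_grad=True)
print(qml.grad(circuit)(params))
```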

Let me know if this helps!