My tensor-network simulation (default.tensor) is reasonably fast for only 6 qubits, but increasing the qubit count makes it even slower than default.qubit. Here is what I tried:
Adding these environment variables at the very top of my script:
import os
os.environ["OPENBLAS_NUM_THREADS"] = "1" # also tried with "8" for all of them
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["NUMBA_NUM_THREADS"] = "8"
I am using OpenBLAS, but I added the others too just in case.
Setting max_bond_dim: 2 and cutoff: 1e-10 on the device dev (device construction sketched below).
Creating my QNode as qml.QNode(circuit, dev, interface=None) # no gradients; I plan to use a different kind of optimizer since I don't think default.tensor supports gradients.
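For context, this is roughly how the device and QNode are set up (a minimal sketch; the 6-wire count, placeholder circuit, and variable names are mine, with max_bond_dim and cutoff passed as keyword arguments to the MPS method of default.tensor):

import pennylane as qml

# tensor-network device with an MPS backend and a tight bond-dimension limit
dev = qml.device(
    "default.tensor",
    wires=6,
    method="mps",
    max_bond_dim=2,
    cutoff=1e-10,
)

def circuit():
    # placeholder body; the actual circuit is shown further down
    return qml.expval(qml.PauliZ(0))

# interface=None disables autodiff, since I use a gradient-free optimizer
qnode = qml.QNode(circuit, dev, interface=None)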
My circuit looks like this:
def circuit(feature, theta, gate_gens):
    qml.AmplitudeEmbedding(feature, wires=self.dev.wires, pad_with=0.0)
    for idx, gen in enumerate(gate_gens):
        # gen is a SparseHamiltonian
        qml.TrotterProduct(
            -1j * theta[idx] * gen, time=2, order=2, check_hermitian=False
        )
    return qml.state()
What else can I try so that it can simulate a circuit with up to about 25 wires?
Number of qubits: 100
Result: 1.0000000000000002
Execution time: 1.0385 seconds
Number of qubits: 125
Result: 1.0000000000000002
Execution time: 1.3499 seconds
Number of qubits: 150
Result: 1.0000000000000002
Execution time: 1.6048 seconds
Number of qubits: 175
Result: 1.0000000000000002
Execution time: 2.1320 seconds
Number of qubits: 200
Result: 1.0000000000000002
Execution time: 2.6462 seconds
I bumped the number of qubits up to these values myself.
I also noted that the gates used there are single gates, while I am Trotterizing a Hamiltonian that has ~40 terms.
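To put the Trotterization point in numbers, here is a quick illustration of my own (a toy 4-term Hamiltonian standing in for my ~40-term one): each second-order Trotter step expands into roughly 2 × n × (number of terms) exponential gates, so a 40-term Hamiltonian produces far more gates per step than the single-gate benchmark circuits.

import pennylane as qml

# toy 4-term Hamiltonian standing in for the real ~40-term one
coeffs = [0.5, -0.3, 0.8, 0.1]
ops = [
    qml.PauliZ(0) @ qml.PauliZ(1),
    qml.PauliX(0),
    qml.PauliX(1),
    qml.PauliY(0) @ qml.PauliY(1),
]
H = qml.dot(coeffs, ops)

# one second-order Trotter step; decomposition length ~ 2 * (number of terms)
trotter = qml.TrotterProduct(H, time=2.0, order=2)
print(len(trotter.decomposition()))  # 8 exponentials for 4 terms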
/python3.11/site-packages/cotengra/hyperoptimizers/hyper.py:54: UserWarning: Couldn't find `optuna`, `cmaes`, or `nevergrad` so will use completely random sampling in place of hyper-optimization.
The issue here seems to be caused by AmplitudeEmbedding using up all of your RAM. Loading a large classical dataset into a quantum state is a well-known bottleneck in quantum computing, especially when the state-preparation routine isn't optimized for the device.
AmplitudeEmbedding uses qml.StatePrep under the hood. StatePrep is handled natively by some devices, such as default.qubit, but not by default.tensor, so it falls back to the Mottonen state-preparation decomposition, whose deep circuit quickly uses up all of the RAM.
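A quick way to see why that blows up (a hedged check of my own, assuming the fallback is qml.MottonenStatePreparation): the decomposition's gate count grows exponentially with the number of wires.

import numpy as np
import pennylane as qml

for n in (6, 10, 12):
    state = np.random.rand(2**n)
    state /= np.linalg.norm(state)  # Mottonen expects a normalized state
    ops = qml.MottonenStatePreparation(state, wires=range(n)).decomposition()
    print(n, "wires ->", len(ops), "gates")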
So my recommendation would be to switch to default.qubit if you want to keep AmplitudeEmbedding, or to use less input data with a different embedding if you want to keep using default.tensor.
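As a rough sketch of the second option (the wire count, rotation axis, and measurement are my own illustrative choices): keep default.tensor but feed one feature per wire with a product-state embedding such as qml.AngleEmbedding, so no StatePrep decomposition is needed at all.

import numpy as np
import pennylane as qml

n_wires = 25
dev = qml.device("default.tensor", wires=n_wires, method="mps", max_bond_dim=2)

@qml.qnode(dev, interface=None)
def circuit(features):
    # one rotation per wire: the data enters as angles, not amplitudes
    qml.AngleEmbedding(features, wires=range(n_wires), rotation="Y")
    return qml.expval(qml.PauliZ(0))

print(circuit(np.random.rand(n_wires)))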