Amplitude Damping in Optimization

Hello!
I am trying to experiment with amplitude damping in a QNN setting. Below is the 4-qubit circuit I am using (imports and the sigmoid helper included so the snippet is self-contained):

import pennylane as qml
from pennylane import numpy as np  # PennyLane's wrapped NumPy (needed for requires_grad)

wires = 4
layers = 2
dev = qml.device("default.mixed", wires=wires)

def sigmoid(x):
    # squashes an angle into (0, 1) so it is a valid damping probability
    return 1 / (1 + np.exp(-x))

def quantum_node_1(rotations):
    for _ in range(layers):
        for i in range(wires):
            qml.RX(rotations[0][i], wires=i)
            qml.RY(rotations[1][i], wires=i)
        qml.broadcast(qml.CZ, wires=range(wires), pattern="chain")

        # the same damping strength, derived from one rotation angle, on every wire
        for i in range(wires):
            qml.AmplitudeDamping(sigmoid(rotations[1][1]), wires=i)

    # projector onto |0...0>, measured as a Hermitian observable
    H = np.zeros((2 ** wires, 2 ** wires))
    H[0, 0] = 1
    wirelist = list(range(wires))
    return qml.expval(qml.Hermitian(H, wirelist))

QNODE_1 = qml.QNode(quantum_node_1, dev)

rotations = [[np.random.uniform(low=-np.pi, high=np.pi) for i in range(wires)], 
             [np.random.uniform(low=-np.pi, high=np.pi) for i in range(wires)]]
rotations = np.array(rotations, requires_grad=True)

The cost function I am trying to optimize is 1 - (QNODE_1(rotations))**2. To give the optimizer a comparable starting point in the optimization landscape, both RX and RY are initialized with the same random values in the noisy and noise-free cases.
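
For context, a minimal training loop for this cost could look like the following (the optimizer choice and step size are illustrative placeholders, not necessarily what I use):

opt = qml.GradientDescentOptimizer(stepsize=0.1)

def cost(rotations):
    # minimized when the overlap with |0...0> approaches +/-1
    return 1 - QNODE_1(rotations) ** 2

for step in range(100):
    rotations, loss = opt.step_and_cost(cost, rotations)
    if step % 10 == 0:
        print(f"step {step:3d}: cost {loss:.6f}")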
My question is how to decide on an appropriate value of gamma here, since changing its value significantly affects optimization performance. For instance, using qml.AmplitudeDamping(sigmoid(rotations[0][1]), wires=0) on all wires yields better performance than the variant used in the code above. What would be an appropriate and logical way of defining this value?

Secondly, is there an upper bound on the number of qubits for the default.mixed device? I can't go beyond 10 qubits.
Here is the output of qml.about():
Name: PennyLane
Version: 0.32.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /Users/mk9430/anaconda3/lib/python3.11/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-qiskit

Platform info: macOS-13.6.2-arm64-arm-64bit
Python version: 3.11.4
Numpy version: 1.23.5
Scipy version: 1.10.1
Installed devices:

  • default.gaussian (PennyLane-0.32.0)
  • default.mixed (PennyLane-0.32.0)
  • default.qubit (PennyLane-0.32.0)
  • default.qubit.autograd (PennyLane-0.32.0)
  • default.qubit.jax (PennyLane-0.32.0)
  • default.qubit.tf (PennyLane-0.32.0)
  • default.qubit.torch (PennyLane-0.32.0)
  • default.qutrit (PennyLane-0.32.0)
  • null.qubit (PennyLane-0.32.0)
  • lightning.qubit (PennyLane-Lightning-0.32.0)
  • qiskit.aer (PennyLane-qiskit-0.31.0)
  • qiskit.basicaer (PennyLane-qiskit-0.31.0)
  • qiskit.ibmq (PennyLane-qiskit-0.31.0)
  • qiskit.ibmq.circuit_runner (PennyLane-qiskit-0.31.0)
  • qiskit.ibmq.sampler (PennyLane-qiskit-0.31.0)
  • qiskit.remote (PennyLane-qiskit-0.31.0)

Thanks.

Hey @Muhammad_Kashif,

By gamma do you mean the learning rate / step size of your optimizer? If you're using an adaptive method like Adam, then starting at 0.5 seems reasonable, but you can play around with values in and around that, down to, say, an order of magnitude less. It's trial and error :sweat_smile:
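
If it helps, here is a rough sketch of what that trial-and-error scan could look like (the step-size grid and the number of steps below are arbitrary, and cost is your cost function from above):

# hypothetical step-size scan; each run starts from the same initial parameters
for stepsize in (0.5, 0.1, 0.05):
    opt = qml.AdamOptimizer(stepsize=stepsize)
    params = np.array(rotations, requires_grad=True)  # fresh copy per run
    for _ in range(50):
        params = opt.step(cost, params)
    print(f"stepsize {stepsize}: final cost {cost(params):.6f}")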

Secondly, is there an upper bound on the number of qubits for the default.mixed device? I can't go beyond 10 qubits.

In PennyLane, there is a hard cap of 32 qubits; the source of the hard cap is NumPy. If you're unable to go above 10 qubits, your computer may not have enough RAM to support such a calculation, since default.mixed simulates a full density matrix, which grows much faster with qubit count than a state vector. Make sure you close other tabs, programs, etc., that are running in the background while you run your code. If that's not helping, you can look into running your code on something like Google Colab.
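
As a back-of-the-envelope check: the density matrix has 4**n complex128 entries for n qubits, so a rough lower bound on the memory it occupies is easy to compute (actual usage during optimization will be several times higher because of intermediate copies):

# rough lower bound on density-matrix memory (16 bytes per complex128 entry)
for n in (10, 13, 16):
    print(f"{n} qubits: {(4 ** n) * 16 / 1e9:.3f} GB")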

Also, I suggest upgrading to pennylane==0.33 so that you get all of the latest and greatest performance updates!
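
For example:

python -m pip install --upgrade pennylane==0.33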

Let me know if this helps :slight_smile: