# Amplitude Damping in Optimization

Hello!
I am trying to experiment with amplitude damping in a QNN setting. Below is the 4-qubit circuit I am using:

```python
import pennylane as qml
from pennylane import numpy as np

wires = 4
layers = 2
dev = qml.device("default.mixed", wires=wires)

def sigmoid(x):
    # squash a real-valued angle into (0, 1) so it is a valid damping probability
    return 1 / (1 + np.exp(-x))

def quantum_node_1(rotations):
    # variational layers of single-qubit rotations
    for _ in range(layers):
        for i in range(wires):
            qml.RX(rotations[0][i], wires=i)
            qml.RY(rotations[1][i], wires=i)

    # amplitude damping on every wire, parameterized by one of the rotation angles
    qml.AmplitudeDamping(sigmoid(rotations[1][1]), wires=0)
    qml.AmplitudeDamping(sigmoid(rotations[1][1]), wires=1)
    qml.AmplitudeDamping(sigmoid(rotations[1][1]), wires=2)
    qml.AmplitudeDamping(sigmoid(rotations[1][1]), wires=3)

    # projector onto |0...0>, measured as a Hermitian observable
    H = np.zeros((2 ** wires, 2 ** wires))
    H[0, 0] = 1
    wirelist = [i for i in range(wires)]
    return qml.expval(qml.Hermitian(H, wirelist))

QNODE_1 = qml.QNode(quantum_node_1, dev)

rotations = [[np.random.uniform(low=-np.pi, high=np.pi) for i in range(wires)],
             [np.random.uniform(low=-np.pi, high=np.pi) for i in range(wires)]]
rotations = np.array(rotations, requires_grad=True)
```

The cost function I am trying to optimize is `1 - (QNODE_1(rotations))**2`. To give the optimizer a comparable starting point in the optimization landscape, both RX and RY are initialized with the same random values in the noisy and noise-free cases.

My question is how to decide on an appropriate value of `gamma` here, since changing its value significantly affects optimization performance. For instance, using `sigmoid(rotations[0][1])` as the damping parameter on all wires yields better performance than the `sigmoid(rotations[1][1])` used in the code above. What would be an appropriate and logical way of defining this value?
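For context, the reason I wrap the angle in a sigmoid at all is that the damping parameter must be a probability in [0, 1], and the sigmoid guarantees that for any real-valued angle. A minimal standalone check (plain `math`, no PennyLane required):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# any real-valued rotation angle maps to a valid damping probability
for angle in (-math.pi, -1.0, 0.0, 1.0, math.pi):
    gamma = sigmoid(angle)
    assert 0.0 < gamma < 1.0

print(sigmoid(0.0))  # 0.5: an angle of zero gives 50% damping
```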

Secondly, is there an upper bound on the number of qubits for the `default.mixed` device? I can't go beyond 10 qubits.

Here is the output of `qml.about()`:
Name: PennyLane
Version: 0.32.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: GitHub - PennyLaneAI/pennylane: PennyLane is a cross-platform Python library for differentiable programming of quantum computers. Train a quantum computer the same way as a neural network.
Author:
Author-email:
Location: /Users/mk9430/anaconda3/lib/python3.11/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-qiskit

Platform info: macOS-13.6.2-arm64-arm-64bit
Python version: 3.11.4
Numpy version: 1.23.5
Scipy version: 1.10.1
Installed devices:

• default.gaussian (PennyLane-0.32.0)
• default.mixed (PennyLane-0.32.0)
• default.qubit (PennyLane-0.32.0)
• default.qubit.jax (PennyLane-0.32.0)
• default.qubit.tf (PennyLane-0.32.0)
• default.qubit.torch (PennyLane-0.32.0)
• default.qutrit (PennyLane-0.32.0)
• null.qubit (PennyLane-0.32.0)
• lightning.qubit (PennyLane-Lightning-0.32.0)
• qiskit.aer (PennyLane-qiskit-0.31.0)
• qiskit.basicaer (PennyLane-qiskit-0.31.0)
• qiskit.ibmq (PennyLane-qiskit-0.31.0)
• qiskit.ibmq.circuit_runner (PennyLane-qiskit-0.31.0)
• qiskit.ibmq.sampler (PennyLane-qiskit-0.31.0)
• qiskit.remote (PennyLane-qiskit-0.31.0)

Thanks.

By `gamma` do you mean the learning rate / step size of your optimizer? If you're using an adaptive method like Adam, then starting at 0.5 seems reasonable, but you can play around with values in and around that and, say, an order of magnitude less. It's trial and error.
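If instead `gamma` means the parameter of `qml.AmplitudeDamping`: physically it is the probability of decay from |1⟩ to |0⟩, so one logical way to set it is from hardware-style numbers, via gamma = 1 - exp(-t / T1) for gate duration t and relaxation time T1. A rough sketch, where t and T1 are purely illustrative values (not from any real device):

```python
import math

# illustrative (made-up) hardware numbers, in seconds
t = 50e-9    # single-gate duration, 50 ns
T1 = 100e-6  # qubit relaxation time, 100 us

# decay probability accumulated over one gate: gamma = 1 - exp(-t / T1)
gamma = 1.0 - math.exp(-t / T1)
print(gamma)  # small for these numbers, roughly 5e-4
```

Fixing `gamma` this way decouples the noise strength from your trainable rotation angles, which may be easier to interpret than reusing `rotations[1][1]`.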
Also, I suggest upgrading to `pennylane==0.33` so that you get all of the latest and greatest performance updates!
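On your second question: `default.mixed` simulates a full density matrix, which has 4^n complex entries for n qubits, so memory grows as 16 × 4^n bytes at complex128 precision. I'm not aware of a hard-coded qubit cap, but memory and runtime become the practical bound quickly, and gradient computations keep extra copies on top of the raw state. A quick back-of-the-envelope:

```python
# memory footprint of a complex128 density matrix on n qubits: (2**n)**2 * 16 bytes
for n in (10, 14, 20):
    nbytes = (2 ** n) ** 2 * 16
    print(f"{n} qubits: {nbytes / 2**30:.4f} GiB")
```

At 10 qubits the state itself is only about 16 MiB, so if you hit a wall there it is likely the per-gate operations and autodiff overhead rather than raw storage.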