Encountering 'Casting complex values to real discards the imaginary part' When Trying to Train a DiagonalQubitUnitary

Hello! I’m trying to train a diagonal unitary n-qubit operator of the form diag(exp(i\theta_1), exp(i\theta_2), …, exp(i\theta_{2^n})), where \theta_1, \theta_2, …, \theta_{2^n} are real parameters. I use a DiagonalQubitUnitary to implement it. However, I keep getting the warning numpy_wrapper.py:156: ComplexWarning: Casting complex values to real discards the imaginary part at the start of the training process, and the parameters seem to change randomly during training.

The code of the cost function is below. It evaluates the fidelity between the target state and the actual state.

import cmath
import random
import matplotlib.pyplot as plt
import pennylane as qml
from pennylane import numpy as np  # PennyLane's trainable NumPy

N = 3
dev = qml.device('lightning.gpu', wires=2*N + 1)  # N data qubits, N target qubits, 1 ancilla

@qml.qnode(dev)
def testDiag2(params, target_D):
    diags = qml.math.exp(params)
    ini_state = getstates(N, True)      # random complex initial state
    tar_state = target_D @ ini_state    # the state the target unitary would produce
    qml.StatePrep(ini_state, wires=range(N))
    qml.StatePrep(tar_state, wires=range(N + 1, 2*N + 1))
    qml.DiagonalQubitUnitary(diags, wires=range(N))
    return swaptest(range(N), range(N + 1, 2*N + 1), N)  # wire N is the ancilla

The getstates() function creates a random quantum state, and the swaptest() function performs the swap-test circuit to measure the fidelity between two states. There should be no problem with them, because I have used them dozens of times. Their code is below.

def getstates(n, use_complex=False):
    """Create a random normalized n-qubit state vector."""
    state_r = np.random.rand(2**n)
    while np.linalg.norm(state_r) < 1e-8:  # reject a (near-)zero vector
        state_r = np.random.rand(2**n)
    if use_complex:
        state_i = np.random.rand(2**n)
        while np.linalg.norm(state_i) < 1e-8:
            state_i = np.random.rand(2**n)
        state = state_r + 1j*state_i
    else:
        state = state_r
    return state / np.linalg.norm(state)

def swaptest(wiresA, wiresB, ancillary: int):
    """Swap test: <Z> on the ancilla equals the fidelity |<psi|phi>|^2 of the two registers."""
    if len(wiresA) != len(wiresB):
        raise ValueError('Cannot evaluate the fidelity of two registers with different numbers of qubits.')
    # Compose the pairwise SWAPs into a single product so they can be controlled at once
    for i in range(len(wiresA)):
        if i == 0:
            a = qml.SWAP(wires=[wiresA[0], wiresB[0]])
        else:
            a = a @ qml.SWAP(wires=[wiresA[i], wiresB[i]])
    qml.Hadamard(ancillary)
    qml.ctrl(a, ancillary)
    qml.Hadamard(ancillary)
    return qml.expval(qml.Z(ancillary))
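
As a sanity check, the ancilla expectation value <Z> from the swap test should equal the fidelity |<psi|phi>|^2 of the two registers. A minimal sketch that compares swaptest against a direct overlap computation (check_dev and check_swaptest are just illustrative helpers, and default.qubit is used here only for the check):

check_dev = qml.device('default.qubit', wires=2*N + 1)

@qml.qnode(check_dev)
def check_swaptest(stateA, stateB):
    qml.StatePrep(stateA, wires=range(N))
    qml.StatePrep(stateB, wires=range(N + 1, 2*N + 1))
    return swaptest(range(N), range(N + 1, 2*N + 1), N)

a = getstates(N, True)
b = getstates(N, True)
print(check_swaptest(a, b))    # swap-test estimate of the fidelity
print(abs(np.vdot(a, b))**2)   # direct |<a|b>|^2 for comparison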

The code of the training process is below. I try to make the DiagonalQubitUnitary learn a given diagonal unitary of the same form. You may wonder why I run two rounds of training before entering the while loop: the only aim is to create the variables prev_cost and cur_cost and to print the initial fidelity. I put a comment at the end of the line where the warning message appears.

params = np.random.rand(2**N) * 2 * np.pi  # initial angles in [0, 2*pi)
tar_param = []
for i in range(2**N):
    tar_param.append(cmath.exp(1j * random.uniform(0, 2*cmath.pi)))
target_D = np.diag(tar_param)  # The diagonal matrix to learn
opt = qml.AdamOptimizer(-0.02, beta2=0.95)  # negative step size: maximize the fidelity
params, prev_cost = opt.step_and_cost(lambda p: testDiag2(p, target_D), params)  # The warning appears immediately after this line
print(f'Initial fidelity is {prev_cost}')
params, cur_cost = opt.step_and_cost(lambda p: testDiag2(p, target_D), params)
number_it = [1]
cost = [cur_cost]
converge_count = 0
i = 1  # iteration counter (otherwise the loop variable above leaks in here)
while converge_count < 5 and i < 1000:
    # Count consecutive small improvements once the fidelity is above 0.5
    if 0 < cur_cost - prev_cost < 0.01 and cur_cost > 0.5:
        converge_count += 1
    else:
        converge_count = 0
    prev_cost = cur_cost
    params, cur_cost = opt.step_and_cost(lambda p: testDiag2(p, target_D), params)
    number_it.append(i + 1)
    cost.append(cur_cost)
    i += 1
    if not (i % 100):
        print(f'Training round {i}, current fidelity is {cur_cost}')
print(f'Training iteration number is {i}, the final fidelity is {cur_cost}')
plt.plot(number_it, cost)
plt.show()
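
Side note on the optimizer: the negative step size in AdamOptimizer is just a way to maximize the fidelity, since the optimizer minimizes by default. An equivalent sketch that minimizes the infidelity 1 - F with a positive step size would be:

opt = qml.AdamOptimizer(0.02, beta2=0.95)
infidelity = lambda p: 1.0 - testDiag2(p, target_D)  # minimize 1 - F instead of maximizing F
params, cur_inf = opt.step_and_cost(infidelity, params)
print(f'Current fidelity is {1.0 - cur_inf}')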

The output (including the warning message) is below.

/home/timmy/.local/lib/python3.12/site-packages/autograd/numpy/numpy_wrapper.py:156: ComplexWarning: Casting complex values to real discards the imaginary part
  return A.astype(dtype, order, casting, subok, copy)
Initial fidelity is 0.0025128937783195455
Training round 100, current fidelity is 0.1642507348423722
Training round 200, current fidelity is 0.0001596214698097449
Training round 300, current fidelity is 0.049613509573268336
Training round 400, current fidelity is 0.12555744236143396
...

The output of qml.about():

Name: PennyLane
Version: 0.40.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /home/timmy/.local/lib/python3.12/site-packages
Requires: appdirs, autograd, autoray, cachetools, diastatic-malt, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, tomlkit, typing-extensions
Required-by: PennyLane_Lightning, PennyLane_Lightning_GPU

Platform info:           Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Python version:          3.12.3
Numpy version:           2.2.2
Scipy version:           1.14.1
Installed devices:
- lightning.gpu (PennyLane_Lightning_GPU-0.40.0)
- lightning.qubit (PennyLane_Lightning-0.40.0)
- default.clifford (PennyLane-0.40.0)
- default.gaussian (PennyLane-0.40.0)
- default.mixed (PennyLane-0.40.0)
- default.qubit (PennyLane-0.40.0)
- default.qutrit (PennyLane-0.40.0)
- default.qutrit.mixed (PennyLane-0.40.0)
- default.tensor (PennyLane-0.40.0)
- null.qubit (PennyLane-0.40.0)
- reference.qubit (PennyLane-0.40.0)

I’m new to PennyLane. If you also have some suggestions about my code, please tell me.

Hi @Tz_19 ,

That ComplexWarning can sometimes arise, but there shouldn’t be any reason to worry. It often happens when you have real + 0j kinds of numbers. In this case I would suggest checking with a small example, or printing one output, just to be sure that everything is as expected. But in general I wouldn’t worry too much about that warning.
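
For instance, a plain NumPy cast like this (independent of PennyLane) is enough to trigger the same warning:

import numpy as np

z = np.array([1.0 + 0.0j, 2.0 + 0.5j])
r = z.astype(np.float64)  # ComplexWarning: casting complex values to real
print(r)                  # [1. 2.] -- the imaginary parts are silently dropped

If the values being cast really are real + 0j, the cast is harmless; if they carry a genuine imaginary part, something upstream is producing the wrong numbers.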

I found the problem. The first line of my cost function, diags=qml.math.exp(params), is wrong. It should be diags=qml.math.exp(params*1j). Sorry for taking up your time. @CatalinaAlbornoz
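
For completeness, the corrected cost function is below; only the first line changes. With real exponents, qml.math.exp(params) produces real, non-unit-modulus diagonal entries instead of the intended phases exp(i\theta):

@qml.qnode(dev)
def testDiag2(params, target_D):
    diags = qml.math.exp(params * 1j)  # exp(i*theta): unit-modulus phases, as intended
    ini_state = getstates(N, True)
    tar_state = target_D @ ini_state
    qml.StatePrep(ini_state, wires=range(N))
    qml.StatePrep(tar_state, wires=range(N + 1, 2*N + 1))
    qml.DiagonalQubitUnitary(diags, wires=range(N))
    return swaptest(range(N), range(N + 1, 2*N + 1), N)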


It’s great to see you found the issue @Tz_19 !
Thanks for posting the question and solution here. It may help others who run into similar issues in the future!