Phase angle optimization for the pseudoinverse in QSVT

Hello there,

Let me begin by saying I am really a rookie at quantum machine learning.

Nonetheless, I thought I could get some help from the experts here about what I am doing wrong. I have been trying to solve a system of linear equations with QSVT, based on the demo “QSVT in practice”. But I am stuck on finding the optimal phase angles to represent the pseudoinverse of an 8x8 information matrix (three Kronecker copies of a 2x2 block) that is neither Hermitian nor has norm <= 1.
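From what I understand, the usual preprocessing for a matrix that is neither Hermitian nor norm-bounded is to rescale it by its largest singular value before block-encoding, so every singular value lands in [0, 1], and then undo the scale on the solution at the end. Here is my own rough numpy sketch of that step (`scale` and `A_scaled` are just my throwaway names, not from the demo):

```python
import functools as ft
import numpy as np

# Same 8x8 information matrix as in the code below.
a = np.array([[1.0, 0.0], [1.0, 1.0]])
A = ft.reduce(np.kron, [a] * 3)

# Rescale so every singular value lies in [0, 1], as QSVT requires.
scale = np.linalg.norm(A, ord=2)   # largest singular value of A
A_scaled = A / scale

# pinv(A / scale) = scale * pinv(A), so the true pseudoinverse is
# recovered by dividing the QSVT result by `scale` at the end.
assert np.linalg.norm(A_scaled, ord=2) <= 1 + 1e-12
assert np.allclose(np.linalg.pinv(A), np.linalg.pinv(A_scaled) / scale)
```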

# Functions used:

def sum_even_odd_circ(x, phi, ancilla_wire, wires):
    phi1, phi2 = phi[: len(phi) // 2], phi[len(phi) // 2:]
    qml.Hadamard(wires=ancilla_wire)  # equal superposition

    # apply even and odd polynomial approx
    qml.ctrl(qml.qsvt, control=(ancilla_wire,), control_values=(0,))(x, phi1, wires=wires)
    qml.ctrl(qml.qsvt, control=(ancilla_wire,), control_values=(1,))(x, phi2, wires=wires)

    qml.Hadamard(wires=ancilla_wire) 

def real_u(A, phi):
    qml.Hadamard(wires="ancilla2")
    qml.ctrl(sum_even_odd_circ, control=("ancilla2",), control_values=(0,))(A, phi, "ancilla3", [0, 1, 2, 3])
    qml.ctrl(qml.adjoint(sum_even_odd_circ), control=("ancilla2",), control_values=(1,))(A.T, phi, "ancilla3", [0, 1, 2, 3])
    qml.Hadamard(wires="ancilla2")


def threeD_U(A, phi):
    qml.Hadamard(wires="ancilla1")
    qml.ctrl(real_u, control=("ancilla1",), control_values=(0,))(A, phi)
    qml.ctrl(qml.adjoint(real_u), control=("ancilla1",), control_values=(1,))(A.T, phi)
    qml.Hadamard(wires="ancilla1")

# Optimization:
dim = 3
a = np.array([[1, 0], [1, 1]], requires_grad=True)
A = ft.reduce(np.kron, [a] * dim)

b = np.array([-8.17, -7.58, -6.13, -6.24, -5.96, -7.70, -7.67, -8.45])
norm_b = np.linalg.norm(b)
normalized_b = b / norm_b
np.random.seed(50)
phi = np.random.rand(64)
phi.requires_grad = True

def target_fun(A, b):
    x = np.linalg.inv(A) @ b
    return np.real(x / np.linalg.norm(x))

def loss_func(phi):
    qsvt_matrix = qml.matrix(sum_even_odd_circ, wire_order=["ancilla1", 0, 1, 2, 3])(
        A, phi, ancilla_wire="ancilla1", wires=[0, 1, 2, 3]
    )[:8, :8]
    sum_square_error = np.real(np.sum((np.real(qsvt_matrix) @ normalized_b - target_fun(A, b)) ** 2))
    return sum_square_error / len(b)

cost = 1
iter = 0
opt = qml.AdagradOptimizer(0.1)
max_err = 0.5e-4
while cost > max_err:
    iter += 1
    phi, cost = opt.step_and_cost(loss_func, phi)
    if iter % 20 == 0 or iter == 1:
        print(f"iter: {iter}, cost: {cost}")

    if iter > 500:
        print("Iteration limit reached!")
        break

print(f"Completed Optimization! (final cost: {cost})")
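(Side note on my `target_fun` above: since the goal is a pseudoinverse, `np.linalg.pinv` is the more general classical reference than `np.linalg.inv`, which only exists for invertible square matrices. For this particular A the two coincide — a quick check, with `x_inv` / `x_pinv` being my own throwaway names:)

```python
import functools as ft
import numpy as np

a = np.array([[1.0, 0.0], [1.0, 1.0]])
A = ft.reduce(np.kron, [a] * 3)
b = np.array([-8.17, -7.58, -6.13, -6.24, -5.96, -7.70, -7.67, -8.45])

# For this invertible A the pseudoinverse equals the inverse, but
# pinv also covers singular or rectangular information matrices.
x_inv = np.linalg.inv(A) @ b
x_pinv = np.linalg.pinv(A) @ b
assert np.allclose(x_inv, x_pinv)
```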

# Information Matrix:
dim = 3
a = np.array([[1, 0], [1, 1]], requires_grad=True)
A = ft.reduce(np.kron, [a] * dim)

# Input Vector:
b = np.array([-8.17, -7.58, -6.13, -6.24, -5.96, -7.70, -7.67, -8.45], requires_grad=False)


# Normalize states:

norm_b = np.linalg.norm(b)
normalized_b = b / norm_b

# True Solution
target_x = np.linalg.inv(A) @ b  
norm_x = np.linalg.norm(target_x)
normalized_x = target_x / norm_x

# Quantum Solution:

@qml.qnode(qml.device("default.qubit", wires=["ancilla1", "ancilla2", "ancilla3", 0, 1, 2, 3]))
def circuit(phi):
    qml.StatePrep(normalized_b, wires=[1, 2, 3])
    threeD_U(A.T, phi)
    return qml.state()


transformed_state = circuit(phi)[:8]  # first 8 entries of the state
rescaled_computed_x = transformed_state * norm_b
normalized_computed_x = rescaled_computed_x / np.linalg.norm(rescaled_computed_x)

print("target x:", np.round(normalized_x, 3))
print("computed x:", np.round(normalized_computed_x, 3))
# Output:
target x: [-0.822  0.059  0.205 -0.07   0.222 -0.234 -0.377  0.167]
computed x: [-0.463+0.j -0.319+0.j -0.298+0.j -0.298-0.j -0.306+0.j -0.318-0.j
 -0.307-0.j -0.465-0.j]

Hey @Muhammad_Daud! Welcome to the forum :sunglasses:

It looks like your code example is incomplete. I don’t have what I need to calculate

A=ft.reduce(np.kron,[a]*dim)

If you could copy-paste a complete example of your code that would be great! I’ll copy-paste and try to see what’s going on.

Oh sorry, that is:
import functools as ft
Other than that I only used pennylane and numpy.

Hey @Muhammad_Daud,

I think the first order of business is to figure out why optimizing loss_func isn’t working:

Completed Optimization! (final cost: Autograd ArrayBox with value (0.13409210561908366+0.020936141459504162j))
Completed Optimization! (final cost: Autograd ArrayBox with value (0.13409210561908366+0.020936141459504162j))
Completed Optimization! (final cost: Autograd ArrayBox with value (0.13409210561908366+0.020936141459504162j))
Completed Optimization! (final cost: Autograd ArrayBox with value (0.13409210561908366+0.020936141459504162j))

The loss isn’t changing! I’ll have to dig into why that is, but that’s most likely why your code is giving you an incorrect answer. I’ll get back to you!
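In the meantime, one sanity check you can run on your side is a finite-difference probe of the loss: if these numbers come out numerically zero, the optimizer has no gradient signal to follow. Here's a sketch with a stand-in quadratic `toy_loss` so it runs without a device — swap in your `loss_func` and your `phi`:

```python
import numpy as np

def finite_diff_grad(loss, phi, eps=1e-6):
    """Central-difference gradient of a scalar loss, as a check
    when an optimizer keeps reporting the same cost."""
    grad = np.zeros_like(phi)
    for i in range(len(phi)):
        shift = np.zeros_like(phi)
        shift[i] = eps
        grad[i] = (loss(phi + shift) - loss(phi - shift)) / (2 * eps)
    return grad

# Stand-in for loss_func (replace with your real loss):
toy_loss = lambda phi: np.sum(phi ** 2)
g = finite_diff_grad(toy_loss, np.array([0.5, -1.0]))
# gradient of sum(phi^2) is 2*phi
assert np.allclose(g, [1.0, -2.0], atol=1e-4)
```

If the probe of your real loss is non-zero but the optimizer still stalls, the problem is more likely in how autograd traces the cost (e.g. a complex-valued loss) than in the landscape itself.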