PennyLane Challenge: Universality of single-qubit gates

Use this topic to ask your questions about the PennyLane Challenge: Universality of single-qubit gates.

I have implemented the error function as

def error(U, params):
    matrix = get_matrix(params)
    z = np.ravel(np.square(np.abs(U - matrix)))
    z = np.sum(qml.math.toarray(z))
    return z.item()

which is \displaystyle\sum_i\sum_j|U_{ij}-M_{ij}|^2.
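For reference, that squared-Frobenius-distance error can be checked with plain NumPy on a small example (the matrices below are made up purely for illustration):

```python
import numpy as np

# Hypothetical target and candidate matrices, for illustration only.
U = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
M = np.array([[0.99, 0.01], [0.0, -1.0]], dtype=complex)

# sum_i sum_j |U_ij - M_ij|^2, i.e. the squared Frobenius distance.
err = np.sum(np.abs(U - M) ** 2)
print(err)  # 0.0002 for these matrices
```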

With the given seed, the optimization passes the [[1, 0], [0, -1]] case rather easily, but for the test case [[0.70710678, 0.70710678], [0.70710678, -0.70710678]] this error function does not converge fast enough.

I have also tried taking the max instead of the sum, but even though both versions pass the [[1, 0], [0, -1]] case, the other case still does not converge.

def error(U, params):
    matrix = get_matrix(list(params))
    z = qml.math.toarray(np.max(np.abs(np.ravel(U - matrix))))
    return z.item()

Any hints would be appreciated.

Hi @LdBeth ,

It’s much simpler than that. You don’t need to use np.ravel or many of the other things you’re using. You want U and matrix to be the same. What’s the simplest way that you can calculate a single number that tells you how different these two matrices are? The solution isn’t fancy but it works!

I hope this helps.

I had to hand-roll these because the most obvious choice to me, np.linalg.norm, does not work here:

Runtime Error: Failed to execute run() function: loop of ufunc does not support argument 0 of type ArrayBox which has no callable sqrt method

If norm doesn’t work, I guess I should go for the trace of the difference instead?

I think I know it is using the autograd package to compute the derivative, but unfortunately I have no clue where to learn which functions are supported; apparently not all NumPy functions are included, for example sqrt.

Thanks! It turns out the fault is not caused by my error function; in fact \sum_i\sum_j|U_{ij}-M_{ij}|^2 is the correct way to go, and my first error function is almost correct. (And it is certainly fine to use np.ravel.)

The issue is with my get_matrix implementation: for some strange reason I wrote np.matmul(rz2, rx, rz1), which, although it gives a (2, 2)-shaped array, is not the same as rz2 @ rx @ rz1. After fixing this, all tests pass.
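To spell out the pitfall: np.matmul multiplies only two matrices, and a third positional argument is interpreted as its `out` buffer, so np.matmul(a, b, c) silently computes a @ b and overwrites c. A small demonstration with made-up matrices:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[0.0, 1.0], [1.0, 0.0]])
c = np.array([[2.0, 0.0], [0.0, 2.0]])

triple = a @ b @ c  # the intended three-factor product

# The third positional argument of np.matmul is `out`, not another factor:
# this computes a @ b and overwrites c with the result.
two = np.matmul(a, b, c)
```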

As a suggestion, maybe it would be better if additional checks on the get_matrix result were added.

I’m glad you solved it @LdBeth ! And thanks for the suggestion, I will share it with our team.

Oh, I forgot to check the private test, which failed without revealing why, and now I really have no clue. My implementation is attached below.

def get_matrix(params):
    alpha, beta, gamma, phi = params

    a, b = np.cos(beta / 2), -1j * np.sin(beta / 2)
    rx = np.matrix([[a, b], [b, a]])
    rz1 = np.matrix([[np.exp(-1j * alpha / 2), 0], [0, np.exp(1j * alpha / 2)]])
    rz2 = np.matrix([[np.exp(-1j * gamma / 2), 0], [0, np.exp(1j * gamma / 2)]])
    return np.exp(1j * phi) * (rz2 @ rx @ rz1)

def error(U, params):
    # Put your code here #
    matrix = get_matrix(params)
    # Return the error
    return sum(sum(qml.math.toarray(np.square(np.abs(U - matrix)))))

Hi @LdBeth ,

For the get_matrix function you can use qml.RZ() and qml.RX() to create a unitary. Then you can use qml.matrix() to turn that unitary into a matrix.

For the error you don’t need to square it or turn it into an array.


I see, thanks! So it was all because the matrices I computed manually caused much worse autograd performance, which gave more than a 0.01 difference in the optimized result and failed to converge for the other tests.

That could be it @LdBeth !