Autograd ArrayBox type intermediate output from optimizer

Hi, not sure if this has been addressed before, but when I run qml.GradientDescentOptimizer on a cost function that stores intermediate results in (global) variables, I noticed that the type of those variables is ‘autograd.numpy.numpy_boxes.ArrayBox’.

Is there a way I can output these variables with the usual ‘tensor’ or ‘numpy.ndarray’ types instead? Thank you!

Hi @leongfy!

Could you post a minimal code example, showing the behaviour you are describing? That would be a big help in terms of diagnosing what is happening :slight_smile:

Hi!

Thank you for your prompt reply. I was trying to export the ratio of the probability outputs of two circuits between cost function evaluations, but later found that I could not work with the resulting Autograd ArrayBoxes. Something like:

#--------------------------------------------------------
from pennylane import numpy as np
import pennylane as qml

dev0 = qml.device('default.qubit', wires=1)
dev1 = qml.device('default.qubit', wires=1)

@qml.qnode(dev0)
def circuit_0(params, wires=0):
    qml.RX(params[0], wires=wires)
    return qml.probs(wires=0)

@qml.qnode(dev1)
def circuit_1(params, wires=0):
    qml.RX(params[0], wires=wires)
    qml.Hadamard(wires=0)
    return qml.probs(wires=0)

def cost_fn(params):
    cost_0 = sum(circuit_0(params, wires=0))
    cost_1 = sum(circuit_1(params, wires=0))
    cost_n.append(cost_0 / cost_1)  # <<<<<<<<<<<<<<<<<<
    return -cost_0**2 / cost_1

params = [0.0]
max_iterations = 1
step_size = 0.01
cost_n = []  # <<<<<<<<<<<<<<<<<<

opt = qml.GradientDescentOptimizer(stepsize=step_size)

for n in range(max_iterations):
    params = opt.step(cost_fn, params)

print('cost_n =', cost_n, type(cost_n))  # <<<<<<<<<<<<<<<<<<
#--------------------------------------------------------

cost_n = [<autograd.numpy.numpy_boxes.ArrayBox object at 0x000002A12B335840>] <class 'list'>

Thanks @leongfy, I see the issue now :slight_smile: This arises because cost_n is computed via a side effect of your cost function. That is, your cost function is updating a global variable, rather than returning the cost.

When using Autograd with PennyLane, cost functions must be pure — they cannot perform side effects (such as updating external variables), otherwise:

  • You will see ArrayBox objects rather than NumPy arrays, and
  • The values stored via the side-effect will no longer be differentiable.

In this particular case, since you are not computing the gradient of the side effect, it should be okay. You can use the qml.math.toarray() function to convert any ArrayBox to a NumPy array:

def cost_fn(params):
    cost_0 = sum(circuit_0(params, wires=0))
    cost_1 = sum(circuit_1(params, wires=0))
    cost_n.append(qml.math.toarray(cost_0 / cost_1))
    return -(cost_0 ** 2) / cost_1

This should now show you numeric values when you print cost_n :slight_smile:

Note that the PennyLane optimizers also have the method step_and_cost(), which allows you to extract both the new parameter values and the cost value:

params = np.array([0.0], requires_grad=True)
max_iterations = 10
step_size = 0.01
cost_n = []

opt = qml.GradientDescentOptimizer(stepsize=step_size)

for n in range(max_iterations):
    params, cost = opt.step_and_cost(cost_fn, params)
    print(f"Step: {n}  Cost: {cost}")


print("cost_n =", cost_n, type(cost_n))

Hi @leongfy,

You can also grab the values from an ArrayBox by using its “_value” attribute [Source].

For example,

print('cost_n =', cost_n[0]._value, type(cost_n[0]._value))

Hope that helps! :smile:

Hi @vishwa,

What you propose can have undesired side effects, especially when you’re calculating gradients: _value is a private attribute, and reading it mid-trace bypasses Autograd’s bookkeeping, so gradients will not flow through any value extracted this way. I’m not sure whether it will cause problems in this particular case, but in general it’s important to be careful when doing this :grinning:.