PennyLane numpy tensor not returned properly with a Dask future

Hello! Here is a complete, self-contained code example that reproduces the behaviour I am observing:

from pennylane import numpy as np
from distributed import Client, LocalCluster

def get_init_params(init_params):
    # Wrap the input in a trainable PennyLane (autograd) tensor
    return np.array(init_params, requires_grad=True)

Calling the function directly returns the expected tensor:

>>> k = get_init_params([0.1, 0.1])
>>> k
tensor([0.1, 0.1], requires_grad=True)

But when the same function is executed via Dask:

# Create a Dask local cluster
cluster = LocalCluster()
client = Client(cluster)

# Submit the get_init_params function to the Dask cluster
init_params = [0.1, 0.1]
m = client.submit(get_init_params, init_params)

>>> m
<Future: finished, type: pennylane.numpy.tensor.tensor, key: get_init_params-0b432885461f19cb0e6fafddb426338a>
>>> m.result()
array([0.1, 0.1])

The result is a plain NumPy array, not a PennyLane tensor, even though the future itself reports the type as pennylane.numpy.tensor.tensor.

Any thoughts on why this could happen, and how I can fix it?

Hi @QuantumMan

In this case, I believe Dask's serializer/deserializer rules are causing the discrepancy you see. When you run from pennylane import numpy as np, you are using the autograd-extended NumPy representation. Since autograd extends NumPy operations, as far as Dask is concerned these objects may look like bare NumPy arrays, without anything extra on top. As a result, Dask uses its built-in NumPy serialiser and deserialiser to wrap up and unpack the data, which likely strips the parts added by autograd and sends the message without the pieces you require.

You can try updating the Dask serialisers by playing with the options mentioned here, which could allow you to preserve the data you are interested in. Once Dask is convinced this is not a bare NumPy array, it should (hopefully) preserve the type.

You could also try using the JAX or PyTorch interfaces, as those may have better support in Dask. Feel free to let us know how this goes.
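For instance, a sketch of the PyTorch route (assuming torch and dask.distributed are installed): torch tensors carry requires_grad natively and are not ndarray subclasses, so Dask's NumPy serializer does not claim them and they go through pickle intact.

```python
import torch
from distributed import Client, LocalCluster

def get_init_params(init_params):
    # torch tensors carry requires_grad natively and pickle cleanly
    return torch.tensor(init_params, requires_grad=True)

if __name__ == "__main__":
    client = Client(LocalCluster(n_workers=1))
    m = client.submit(get_init_params, [0.1, 0.1])
    result = m.result()
    # should be <class 'torch.Tensor'> with requires_grad preserved
    print(type(result), result.requires_grad)
    client.close()
```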

Thanks! I could run it locally and this is clearly the fix. Against a LocalCluster it just worked, but when I tried the same with an SSHCluster, passing the serialization and deserialization keyword arguments during the Dask client initialization, I didn't see any effect. Any thoughts on how this works on remote clusters? I understand it's a question for Dask, but a section in the PennyLane docs might be a good idea, as Dask is an engine pretty heavily used for distributed quantum computations.

Thanks for the suggestion @QuantumMan!

Let me check with the team to see if we can give you any pointers on this.

Hi @QuantumMan, are you able to provide your working script and your non-working script so that we can explore the issue?

As an alternative, you can also try using Ray instead, as the issue probably won't be present there.