ImportError: cannot import name 'shape'

I’m trying to import PennyLane for the first time and I keep getting this error on `import pennylane as qml`:


ImportError                               Traceback (most recent call last)
<ipython-input> in <module>
----> 1 import pennylane as qml

/usr/local/lib/python3.6/dist-packages/pennylane/__init__.py in <module>
     30 import pennylane.operation
     31 import pennylane.qnn
---> 32 import pennylane.templates
     33 from pennylane._device import Device, DeviceError
     34 from pennylane._grad import grad, jacobian, finite_diff

/usr/local/lib/python3.6/dist-packages/pennylane/templates/__init__.py in <module>
     18 from .broadcast import *
     19 from .decorator import *
---> 20 from .layer import *
     21 from .layers import *
     22 from .embeddings import *

/usr/local/lib/python3.6/dist-packages/pennylane/templates/layer.py in <module>
     17 # pylint: disable-msg=too-many-branches,too-many-arguments,protected-access
     18 from pennylane.templates.decorator import template as temp
---> 19 from pennylane.math import shape
     20
     21

ImportError: cannot import name 'shape'


EDIT: I just found a solution. For some reason the later package versions all have import errors. If anyone else gets this, run:

pip uninstall pennylane --yes

pip install pennylane==0.12.0

Best of luck!

Hi @J_Sheppard, welcome to the forum!

Thank you very much for adding the solution. We will look into why this is happening.

Don’t hesitate to post any new questions that you may have!

Hi @J_Sheppard, the problem is happening because you have Python version 3.6.

Try installing a newer Python version; the latest PennyLane releases should then work.
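If you’re not sure which Python version your current environment uses, a quick check (just an illustrative snippet) is:

import sys

# Newer PennyLane releases no longer support Python 3.6,
# so it's worth confirming the interpreter version first
print(sys.version)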

Please let me know if this helps!

Got the same error. So I created a new environment with Python 3.8 and it imported fine. Sharing the commands used:
conda create -n penny python=3.8
conda activate penny
pip install pennylane --upgrade
Thank you

Hi @raghavv, I’m glad you were able to solve it!

I do recommend that you always use a virtual environment with your projects.

Let us know if you have any further questions and enjoy using PennyLane!

Thank you @CatalinaAlbornoz for the kind information. I have been trying to run some quantum chemistry calculations; I will let you know how it goes. Thank you.

Just an FYI, on the installation webpage https://pennylane.ai/install.html?version=stable the banner still says “PennyLane supports Python 3.6 or newer.” – this might be confusing for some people doing the install

oops, nice catch @EvanPeters, I’ll correct this.

Hello @CatalinaAlbornoz! I am facing the same issue that @J_Sheppard mentioned. I have Python version 3.10.8 and PennyLane 0.28.0. I also tried downgrading to version 0.19.0, but then I get a new error:

File "C:\Users\vaishnavi.chandilkar\Anaconda3\envs\maus\lib\site-packages\pennylane\math\single_dispatch.py", line 293, in <module>
    del ar.autoray._FUNCS["torch", "linalg.eigh"]
KeyError: ('torch', 'linalg.eigh')

How can I fix this error?

Hi @Nina,

Usually having the latest Python and PennyLane versions is best. Using Python 3.10.8 and PennyLane 0.28 shouldn’t cause any problems. I would recommend that you follow these steps:

1. Create a new conda environment:
conda create --name name_of_your_environment python=3.10.8
2. Activate the environment:
conda activate name_of_your_environment
3. Install the needed packages:
python -m pip install pennylane
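Once the installation finishes, one way to double-check the setup (an optional sanity check, not part of the steps above) is:

import pennylane as qml

# qml.about() prints the installed PennyLane version, the Python version,
# and the available devices/plugins in one go
qml.about()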

Please let me know if this works for you!

Ok. I will try to upgrade and see how it works!!

I am facing another problem, with respect to the shape of the features:
I am passing features of size (9344,) to AmplitudeEmbedding with pad_with = 7040. When I perform a reshape, it throws an error:

Attempt to convert a value (AmplitudeEmbedding(<tf.Tensor: shape=(16384,), dtype=complex128, numpy=
array([0.        +0.j, 0.        +0.j, 0.        +0.j, ...,
       0.01191828+0.j, 0.01191828+0.j, 0.01191828+0.j])>, wires=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13])) with an unsupported type (<class 'pennylane.templates.embeddings.amplitude.AmplitudeEmbedding'>) to a Tensor.

My code is

def circuit(inputs):
    output1 = qml.AmplitudeEmbedding(features=inputs, wires=range(14), pad_with=7040, normalize=True)
    output = tf.reshape(output1, (-1, 73, 128))
    return qml.expval(qml.PauliZ(0))

My input features have shape (9344,), and pad_with = 2^14 - 9344 = 7040. Using NumPy also throws the same error. Can you tell me how to reshape my features into (None, 73, 128) for batch_size = 8?

Hi @Nina,

Notice that qml.AmplitudeEmbedding embeds your data into the circuit. It’s not something that returns an output that you would want to use. You can learn more about this and other templates in the PennyLane docs!

If you want to reshape the measurement output of your circuit, you will need to do it outside of the circuit.

Here’s a simple example with a reshape:

# We import our favourite libraries
import pennylane as qml
from pennylane import numpy as np

# We create a device
dev = qml.device('lightning.qubit', wires=4)

# We create a qnode
@qml.qnode(dev)
def circuit(params):
    # We add some gates
    # Notice that the params argument can be a tensor
    qml.RX(params[0, 0], wires=0)
    qml.RX(params[0, 1], wires=1)
    qml.RX(params[1, 0], wires=2)
    qml.RX(params[1, 1], wires=3)
    # We return a single measurement or a list of measurements
    return [qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1)), qml.expval(qml.PauliX(2)), qml.expval(qml.PauliX(3))]

# We define a value for our parameters (floats, so they can be differentiated)
params = np.array([[0.0, 1.0], [2.0, 3.0]], requires_grad=True)

# We run the circuit and store the output in a variable
measurement = circuit(params)
# We can reshape the measurement output
reshaped_measurement = measurement.reshape(2, 2)

print('Measurement: ', measurement)
print('reshaped_measurement: ', reshaped_measurement)

As you can see, the circuit returns a list of measurements, and the reshape happens outside of the circuit.
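And here’s a second sketch, with made-up sizes, showing the same idea with qml.AmplitudeEmbedding: the embedding is applied inside the QNode, and the reshape still happens outside:

import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(features):
    # The embedding prepares the circuit's state; it does not return data to reshape
    qml.AmplitudeEmbedding(features=features, wires=range(n_qubits), pad_with=0.0, normalize=True)
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

# 10 features, padded with zeros up to 2**4 = 16 amplitudes
features = np.arange(1.0, 11.0)
measurement = np.array(circuit(features))

# The reshape happens outside of the QNode
print(measurement.reshape(2, 2))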

Please let me know if this helps!

But for batched tensors, the method mentioned above does not work. For example:

import pennylane as qml
import tensorflow as tf
from tensorflow.keras.layers import Flatten

n_qubits = 14
n_layers = 3
wires = range(n_qubits)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(inputs):
    qml.AmplitudeEmbedding(features=inputs, wires=wires, pad_with=7040, normalize=True)
    return (qml.expal(qml.PauliZ(wires=i)) for i in range(n_qubits))

q = tf.ones(shape=(2, 73, 128))
a = Flatten()(q)
d = circuit(a)

When I print the value of d, it gives the following error:

~\Anaconda3\envs\maus\lib\site-packages\pennylane\templates\embeddings\amplitude.py in _preprocess(features, wires, pad_with, normalize)
    181 
    182         if batched and qml.math.get_interface(features) == "tensorflow":
--> 183             raise ValueError("AmplitudeEmbedding does not support batched Tensorflow features.")
    184 
    185         features_batch = features if batched else [features]

ValueError: AmplitudeEmbedding does not support batched Tensorflow features.

If I print the shape of d, it gives me (2,). Why does it only keep the batch size? Is there an alternative method where I can use batched features as input?

Hi @Nina,

Thank you for adding these details. As the error message mentions, AmplitudeEmbedding does not support batched Tensorflow features. This means that you should change one of the following:

  • Option 1: Change the embedding from AmplitudeEmbedding to something else such as AngleEmbedding
  • Option 2: Change from using TensorFlow to using PyTorch. Do you absolutely need to use TensorFlow?
  • Option 3: Keep using AmplitudeEmbedding and TensorFlow but stop using batches.

Here’s a code example for option 1:

import pennylane as qml
import tensorflow as tf

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="tf")
def circuit(inputs):
    qml.AngleEmbedding(features=inputs, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

q = tf.ones(shape=(2,73,128))
a = tf.reshape(q,[73*128,2])
d = circuit(a)

Note that the last dimension in ‘a’ needs to be smaller than or equal to the number of qubits; in this case, 2 qubits.

Also note that I changed some other details such as a typo you had in ‘expval’ and changing the return statement to be a list instead of a tuple.
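For option 2, a rough sketch of the PyTorch route could look like this (untested here; note also that pad_with is the constant value used for padding, not the number of padded entries):

import pennylane as qml
import torch

n_qubits = 14
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs):
    # pad_with is the fill value (here 0.0), not the pad length;
    # the input is padded up to 2**14 = 16384 amplitudes automatically
    qml.AmplitudeEmbedding(features=inputs, wires=range(n_qubits), pad_with=0.0, normalize=True)
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

# A batch of 2 flattened feature vectors of length 73 * 128 = 9344
q = torch.ones((2, 73 * 128))
d = circuit(q)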

Please let me know if this helps!

Thank you @CatalinaAlbornoz. I will try out these options and see how it works!

Hello, I have also encountered this issue in recent experiments. I am disappointed and confused that AmplitudeEmbedding cannot be used with batches in TensorFlow, as I am more accustomed to using model.fit in Keras. Although I can use a for loop as a workaround for training, I have found that this significantly increases the training time of the model.
Out of curiosity, why did PennyLane choose not to support batch mode for AmplitudeEmbedding in TensorFlow? Is this a feature that is planned for future updates?
(I noticed that AmplitudeEmbedding seems to be able to do batched computations in general by adding @qml.batch_params, as sketched below, so is there some additional implementation issue involved?)
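For reference, the pattern I mean looks roughly like this (a sketch of my understanding, not verified code):

import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

# qml.batch_params splits a batched argument into separate circuit executions,
# which sidesteps the restriction on batched TensorFlow features
@qml.batch_params
@qml.qnode(dev)
def circuit(features):
    qml.AmplitudeEmbedding(features=features, wires=range(n_qubits), normalize=True)
    return qml.probs(wires=range(n_qubits))

# A batch of 3 feature vectors, each of length 2**2 = 4
batch = np.ones((3, 4))
print(circuit(batch))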

Hey @jracle! Welcome to the forum :sun_with_face:

Thanks for your feedback! We have a feature request open and on our radar for this exact issue: Allow parameter broadcasting in `AmplitudeEmbedding` with TensorFlow · Issue #2976 · PennyLaneAI/pennylane · GitHub

I’ll make sure to put a comment there that we got another request for it. I’ll make sure to update this thread when anything major happens :slight_smile:

Thank you @isaacdevlugt. I will also subscribe to this issue. Thank you once again for the team’s hard work!

I opened a pull request implementing batching with TensorFlow (including JIT compilation in TensorFlow): #4818