Error with KerasLayer and Amplitude Embedding

Hi there! I’m getting an error when trying to create a Keras layer with AmplitudeEmbedding. It has to do with casting the input to complex128. Here’s my code:

import pennylane as qml
from pennylane import numpy as qnp
from pennylane.transforms import richardson_extrapolate, fold_global
import numpy as np
import tensorflow as tf

# Hyperparameters of the circuit
nqbits=4
depth=1

# Device definition
dev_ideal = qml.device('default.mixed', wires=nqbits)
dev_mixed = qml.transforms.insert(dev_ideal, qml.DepolarizingChannel, 0.1) # Adding noise

@qml.transforms.mitigate_with_zne([1, 2, 3], fold_global, richardson_extrapolate) # Adding error mitigation
@qml.qnode(dev_mixed)
def mitigated_qnode(inputs, weights):
    qml.AmplitudeEmbedding(features=inputs, wires=range(nqbits),normalize=True)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(nqbits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(nqbits)]

# Create keras model
weight_shapes = {"weights": (depth, nqbits,3)}
qlayer = qml.qnn.KerasLayer(mitigated_qnode, weight_shapes, output_dim=nqbits)

# Make a prediction
inputs = qnp.random.normal(2, 4,(1, 2**nqbits), requires_grad=False)
qlayer(inputs)

And here’s the full error message:

ValueError                                Traceback (most recent call last)
<ipython-input-32-37ce10b2c2b7> in <cell line: 29>()
     27 # Make a prediction
     28 inputs = qnp.random.normal(2, 4,(1, 2**nqbits), requires_grad=False)
---> 29 qlayer(inputs)

15 frames
/usr/local/lib/python3.10/dist-packages/pennylane/math/single_dispatch.py in <lambda>(x, **kwargs)
    266 
    267 ar.register_function(
--> 268     "tensorflow", "asarray", lambda x, **kwargs: _i("tf").convert_to_tensor(x, **kwargs)
    269 )
    270 ar.register_function(

ValueError: Exception encountered when calling layer "keras_layer_7" (type KerasLayer).

Tensor conversion requested dtype complex128 for Tensor with dtype float32: <tf.Tensor: shape=(16,), dtype=float32, numpy=
array([ 0.32765773,  0.04465814, -0.05177495, -0.19328678,  0.28668392,
        0.40516296, -0.09485355, -0.10624942,  0.05358212,  0.16164567,
       -0.00984542,  0.45717812,  0.21692467,  0.3813239 , -0.3816654 ,
        0.08793645], dtype=float32)>

Call arguments received:
  • inputs=tf.Tensor(shape=(1, 16), dtype=float32)

I’m using Google Colab to test this; here’s the output of qml.about():

Name: PennyLane
Version: 0.34.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /usr/local/lib/python3.10/dist-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning

Platform info:           Linux-6.1.58+-x86_64-with-glibc2.35
Python version:          3.10.12
Numpy version:           1.23.5
Scipy version:           1.11.4
Installed devices:
- lightning.qubit (PennyLane-Lightning-0.34.0)
- default.gaussian (PennyLane-0.34.0)
- default.mixed (PennyLane-0.34.0)
- default.qubit (PennyLane-0.34.0)
- default.qubit.autograd (PennyLane-0.34.0)
- default.qubit.jax (PennyLane-0.34.0)
- default.qubit.legacy (PennyLane-0.34.0)
- default.qubit.tf (PennyLane-0.34.0)
- default.qubit.torch (PennyLane-0.34.0)
- default.qutrit (PennyLane-0.34.0)
- null.qubit (PennyLane-0.34.0)

Thanks!!

Hey @Laia_Domingo,

Thanks! I was able to replicate your error and was a little perplexed. I tried running your code without the ZNE stuff on regular default.qubit and it worked, but simply switching back to default.mixed causes problems. You’ve found a bug! :bug:

I made an issue on our repository: [BUG] Tensorflow type promotion error with `default.mixed` · Issue #5085 · PennyLaneAI/pennylane · GitHub. We will update you here when there’s a fix!
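
For reference, here’s a minimal sketch of what I mean (hyperparameters copied from your snippet; no ZNE and no noise insertion, just a plain KerasLayer on default.mixed):

import pennylane as qml
import tensorflow as tf

nqbits = 4
depth = 1

# Plain default.mixed device: no noise channel, no error mitigation
dev = qml.device('default.mixed', wires=nqbits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AmplitudeEmbedding(features=inputs, wires=range(nqbits), normalize=True)
    qml.StronglyEntanglingLayers(weights, wires=range(nqbits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(nqbits)]

weight_shapes = {"weights": (depth, nqbits, 3)}
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=nqbits)

# float32 inputs should hit the same complex128 promotion error here
qlayer(tf.random.normal((1, 2**nqbits)))

Swapping the device for default.qubit makes the same snippet run fine, so the dtype promotion issue seems specific to default.mixed with TensorFlow.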

Hi @isaacdevlugt,

Thank you for your help! :slight_smile:

Hi again!

Since I couldn’t create a Keras layer with ZNE, I tried creating a Torch layer instead, but I’ve run into a new issue. The code works correctly when the batch size is set to 1, but an error appears as soon as I increase the batch size beyond 1. Errors like this are usually associated with QNodes that don’t use their input parameters, but that isn’t the cause here. Interestingly, removing the ZNE component resolves the error. Here’s my code:

import pennylane as qml
from pennylane import numpy as qnp
from pennylane.transforms import richardson_extrapolate, fold_global
import numpy as np
import torch

# Hyperparameters of the circuit
nqbits=4
depth=1
batch_size = 2

# Device definition
dev_ideal = qml.device('default.mixed', wires=nqbits)
dev_mixed = qml.transforms.insert(dev_ideal, qml.DepolarizingChannel, 0.1) # Adding noise


@qml.transforms.mitigate_with_zne([1, 2, 3], fold_global, richardson_extrapolate) # Adding error mitigation
@qml.qnode(dev_mixed)
def mitigated_qnode(inputs, weights):
    qml.AmplitudeEmbedding(features=inputs, wires=range(nqbits),normalize=True)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(nqbits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(nqbits)]

# Creating pytorch quantum layer
weight_shapes = {"weights": (depth, nqbits,3)}
qlayer = qml.qnn.TorchLayer(mitigated_qnode, weight_shapes)

# Running layer with batch_size >1
inputs = torch.tensor(qnp.random.normal(2, 4,(batch_size, 2**nqbits)))
qlayer(inputs)

And here’s the error I get:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[46], line 30
     28 # Running layer with batch_size >1
     29 inputs = torch.tensor(qnp.random.normal(2, 4,(batch_size, 2**nqbits)))
---> 30 qlayer(inputs)

File c:\Users\laiad\anaconda3\envs\Quantum4\Lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File c:\Users\laiad\anaconda3\envs\Quantum4\Lib\site-packages\pennylane\qnn\torch.py:402, in TorchLayer.forward(self, inputs)
    399     inputs = torch.reshape(inputs, (-1, inputs.shape[-1]))
    401 # calculate the forward pass as usual
--> 402 results = self._evaluate_qnode(inputs)
    404 # reshape to the correct number of batch dims
    405 if has_batch_dim:

File c:\Users\laiad\anaconda3\envs\Quantum4\Lib\site-packages\pennylane\qnn\torch.py:423, in TorchLayer._evaluate_qnode(self, x)
...
-> 1100     return _VF.tensordot(a, b, dims_a, dims_b)  # type: ignore[attr-defined]
   1101 else:
   1102     return _VF.tensordot(a, b, dims_a, dims_b, out=out)

RuntimeError: contracted dimensions need to match, but first has size 3 in dim -1 and second has size 4 in dim -2

Any idea of what is happening here? Thanks!!

Hey @Laia_Domingo,

Everything works fine without the error mitigation transform, but once it’s introduced we get problems. I’ll get back to you on this :thinking:
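
For reference, this is the kind of comparison I mean: the same layer without the mitigation transform handles the batched inputs without complaint (a sketch, assuming the hyperparameters from your snippet):

import pennylane as qml
from pennylane import numpy as qnp
import torch

nqbits = 4
depth = 1
batch_size = 2

dev_ideal = qml.device('default.mixed', wires=nqbits)
dev_mixed = qml.transforms.insert(dev_ideal, qml.DepolarizingChannel, 0.1)  # Adding noise

# Same QNode as yours, just without the mitigate_with_zne transform
@qml.qnode(dev_mixed)
def qnode(inputs, weights):
    qml.AmplitudeEmbedding(features=inputs, wires=range(nqbits), normalize=True)
    qml.StronglyEntanglingLayers(weights, wires=range(nqbits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(nqbits)]

weight_shapes = {"weights": (depth, nqbits, 3)}
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

# Batched inputs go through as expected here
inputs = torch.tensor(qnp.random.normal(2, 4, (batch_size, 2**nqbits)))
print(qlayer(inputs).shape)  # expected: torch.Size([2, 4])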

Hey @Laia_Domingo,

If you add the qml.transforms.broadcast_expand transform, it should work:

# Hyperparameters of the circuit
nqbits=4
depth=1
batch_size = 2

# Device definition
dev_ideal = qml.device('default.mixed', wires=nqbits)
dev_mixed = qml.transforms.insert(dev_ideal, qml.DepolarizingChannel, 0.1) # Adding noise


@qml.transforms.mitigate_with_zne([1, 2, 3], fold_global, richardson_extrapolate) # Adding error mitigation
@qml.transforms.broadcast_expand
@qml.qnode(dev_mixed)
def mitigated_qnode(inputs, weights):

    qml.AmplitudeEmbedding(features=inputs, wires=range(nqbits),normalize=True)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(nqbits))

    return [qml.expval(qml.PauliZ(wires=i)) for i in range(nqbits)]

# Creating pytorch quantum layer
weight_shapes = {"weights": (depth, nqbits, 3)}
qlayer = qml.qnn.TorchLayer(mitigated_qnode, weight_shapes)

# Running layer with batch_size >1
inputs = torch.tensor(qnp.random.normal(2, 4,(batch_size, 2**nqbits)))
qlayer(inputs)

The problem is that error mitigation doesn’t support broadcasting — a bug! 2 bugs in one forum post… excellent work :man_detective:! I made a bug report here and we’ll make sure to update you with progress when it happens.
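
If you’d rather not touch the QNode decorators, manually splitting the batch before the layer should also sidestep the broadcasting issue. A rough sketch (not an official workaround), using the qlayer and inputs from your original snippet without broadcast_expand:

# Rough alternative: run the mitigated layer one sample at a time and stack
# the results, so the ZNE transform never sees a broadcasted tape.
outputs = torch.stack([qlayer(x) for x in inputs])
print(outputs.shape)  # expected: torch.Size([2, 4])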

This PR should fix it:
