Batching both parameters and input

Hi, with PennyLane we can perform a circuit forward pass with a batch of input states with shape (batch_size, 2**number_of_qubits).

I am trying to batch the parameters as well using @qml.batch_params, and I need the parameter batch to correspond to the state batch (the first state processed with the first parameter set, and so on). Right now I am using StronglyEntanglingLayers, so the parameters have shape (batch_size, number_of_layers, number_of_qubits, 3).

It seems this doesn’t work, as the output gains an extra batch dimension (the output shape is (batch_size, batch_size, 2**number_of_qubits)).

Am I doing something wrong, or is this not intended to work this way? Thanks!
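If I understand the mechanics correctly, here is a plain-Python sketch (no PennyLane, all names illustrative) of what I think is happening: @qml.batch_params seems to split the parameter batch into one execution per parameter set, while each execution still receives the entire broadcast state batch, so every parameter set gets paired with every state instead of one-to-one:

```python
# Plain-Python sketch (no PennyLane; purely illustrative) of how the extra
# batch dimension can appear: if @qml.batch_params turns the parameter batch
# into one execution per parameter set, and each execution still receives the
# *entire* broadcast state batch, every parameter set is paired with every
# state instead of one-to-one.

batch_size = 2

params_batch = [f"params_{i}" for i in range(batch_size)]
states_batch = [f"state_{j}" for j in range(batch_size)]

# All-pairs behaviour: output has leading shape (batch_size, batch_size)
all_pairs = [[(p, s) for s in states_batch] for p in params_batch]
print(len(all_pairs), len(all_pairs[0]))  # 2 2

# Desired behaviour: one-to-one pairing, leading shape (batch_size,)
one_to_one = list(zip(params_batch, states_batch))
print(len(one_to_one))  # 2
```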

Script below to reproduce the issue.

import torch
import pennylane as qml

number_of_layers = 3
number_of_qubits = 3
batch_size = 2

# define device (the original script referenced an undefined `Sdev`)
dev = qml.device("default.qubit", wires=number_of_qubits)

@qml.batch_params
@qml.qnode(dev, interface="torch")
def circuit(params, state):

    # load initial state
    qml.QubitStateVector(state, wires=range(number_of_qubits))

    # apply circuit
    qml.StronglyEntanglingLayers(params, wires=range(number_of_qubits), ranges = [1]*params.shape[-3])

    # return state vector
    return qml.state()

# define a batch of random states and normalise
state = torch.rand(batch_size, 2**number_of_qubits)
state = state/torch.linalg.vector_norm(state, dim = -1).view(-1, 1)

# define a batch of trainable parameters
S1params = torch.rand(batch_size, number_of_layers, number_of_qubits, 3).requires_grad_()

# apply circuit
output = circuit(S1params, state)

print(output.shape)

Hey @Andrea! Welcome to the forum :rocket:

If I’m understanding your desired output correctly, then I don’t think qml.batch_params is necessary:

import torch
import pennylane as qml

number_of_layers = 3
number_of_qubits = 3
batch_size = 2

dev = qml.device("default.qubit.legacy", wires=number_of_qubits)

@qml.qnode(dev)
def circuit(params, state):

    # load initial state
    qml.QubitStateVector(state, wires=range(number_of_qubits))

    # apply circuit
    qml.StronglyEntanglingLayers(params, wires=range(number_of_qubits), ranges = [1]*params.shape[-3])

    # return state vector
    return qml.state()

# define a batch of random states and normalise
state = torch.rand(batch_size, 2**number_of_qubits)
state = state/torch.linalg.vector_norm(state, dim = -1).view(-1, 1)

# define a batch of trainable parameters
S1params = torch.rand(batch_size, number_of_layers, number_of_qubits, 3).requires_grad_()

# apply circuit
output = circuit(S1params, state)

print(output.shape)
print(dev.num_executions)
torch.Size([2, 8])
1

Let me know if this helps!

Thank you very much, this is exactly what I needed. However, if I increase the batch size (e.g. to 32), I get the following error:

Traceback (most recent call last):
  File "/raid/home/username/ConditionalDiffusionTimeEmbedding/Modules/pennylane_functions.py", line 90, in <module>
    output = circuit(S1params, state)
  File "/raid/home/username/.local/lib/python3.10/site-packages/pennylane/qnode.py", line 1039, in __call__
    res = qml.execute(
  File "/raid/home/username/.local/lib/python3.10/site-packages/pennylane/interfaces/execution.py", line 648, in execute
    results = inner_execute(tapes)
  File "/raid/home/username/.local/lib/python3.10/site-packages/pennylane/interfaces/execution.py", line 258, in inner_execute
    tapes = tuple(expand_fn(t) for t in tapes)
  File "/raid/home/username/.local/lib/python3.10/site-packages/pennylane/interfaces/execution.py", line 258, in <genexpr>
    tapes = tuple(expand_fn(t) for t in tapes)
  File "/raid/home/username/.local/lib/python3.10/site-packages/pennylane/interfaces/execution.py", line 218, in device_expansion_function
    return device.expand_fn(tape, max_expansion=max_expansion)
  File "/raid/home/username/.local/lib/python3.10/site-packages/pennylane/_device.py", line 718, in expand_fn
    return self.default_expand_fn(circuit, max_expansion=max_expansion)
  File "/raid/home/username/.local/lib/python3.10/site-packages/pennylane/_device.py", line 689, in default_expand_fn
    circuit = _local_tape_expand(
  File "/raid/home/username/.local/lib/python3.10/site-packages/pennylane/_device.py", line 85, in _local_tape_expand
    obj = QuantumScript(obj.decomposition(), _update=False)
  File "/raid/home/username/.local/lib/python3.10/site-packages/pennylane/operation.py", line 1261, in decomposition
    return self.compute_decomposition(
  File "/raid/home/username/.local/lib/python3.10/site-packages/pennylane/templates/layers/strongly_entangling.py", line 210, in compute_decomposition
    weights[..., l, i, 0],
IndexError: index 3 is out of bounds for dimension 1 with size 3

Ah! Maybe I’m missing something, but you’re passing one too many dimensions in the parameters to your StronglyEntanglingLayers operator :slight_smile:. This should be the parameters’ shape: (batch_size, num_layers, num_qubits).

import torch
import pennylane as qml

number_of_layers = 3
number_of_qubits = 3
batch_size = 10

dev = qml.device("default.qubit", wires=number_of_qubits)

@qml.qnode(dev)
def circuit(params, state):

    # load initial state
    qml.QubitStateVector(state, wires=range(number_of_qubits))

    # apply circuit
    qml.StronglyEntanglingLayers(params, wires=range(number_of_qubits), ranges = [1]*params.shape[-3])

    # return state vector
    return qml.state()

# define a batch of random states and normalise
state = torch.rand(batch_size, 2**number_of_qubits)
state = state/torch.linalg.vector_norm(state, dim = -1).view(-1, 1)

# define a batch of trainable parameters
S1params = torch.rand(batch_size, number_of_layers, number_of_qubits).requires_grad_()

# apply circuit
with qml.Tracker(dev) as t:
    output = circuit(S1params, state)

print(output.shape)
print(t.latest)
torch.Size([10, 8])
{'simulations': 1, ...

Note: in my previous post I used default.qubit.legacy, which is our old device API. Our new device API (accessible via default.qubit) also supports broadcasting. The 'simulations' value shows how many times the device actually ran (once, meaning broadcasting happened :slight_smile:).

Let me know if this helps!