QLSTM List out of Index

I don’t think we came to a resolution w.r.t. @lingling36109’s original post, so I can’t say for certain that QNode returns were the culprit :slight_smile:. But, based on the error you shared with me, this doesn’t seem like a QNode returns error.

The source of the error is in your VQC function. The index i loops over values from 0 all the way to num_qubits - 1. Does ry_params have a shorter length than num_qubits? Same question for wires_type. You can check this by printing len(ry_params) and len(wires_type).

        def VQC(features, weights, wires_type):
            # Preprocess input data to encode the initial state.
            # qml.templates.AngleEmbedding(features, wires=wires_type)
            ry_params = [torch.arctan(feature) for feature in features]
            rz_params = [torch.arctan(feature**2) for feature in features]
            for i in range(self.n_qubits):
                qml.RY(ry_params[i], wires=wires_type[i])
                qml.RZ(rz_params[i], wires=wires_type[i])
            # Variational block.
            qml.layer(ansatz, self.n_qlayers, weights, wires_type=wires_type)
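To illustrate the diagnosis above, here is a minimal sketch (with hypothetical values for n_qubits and features, not the thread's actual data): if the parameter list built from features has fewer entries than n_qubits, indexing it inside the loop raises an IndexError.

```python
import torch

n_qubits = 4
# Hypothetical features tensor with fewer entries than n_qubits
features = torch.tensor([0.1, 0.2, 0.3])

ry_params = [torch.arctan(f) for f in features]
print(len(ry_params))  # 3 -- shorter than n_qubits

# range(n_qubits) runs i = 0..3, but ry_params[3] does not exist
try:
    for i in range(n_qubits):
        _ = ry_params[i]
except IndexError:
    print("list index out of range")
```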

You are right. I see that the length of the features array being passed into the VQC function from the circuit_forget function is 1, which is causing the error. Now I'm trying to trace back where these inputs originate or where they are declared, based on the error message. It directs me to the qnode.py file, which is quite confusing.

def forward(self, inputs):  # pylint: disable=arguments-differ
    """Evaluates a forward pass through the QNode based upon input data and the initialized
    weights.

    Args:
        inputs (tensor): data to be processed

    Returns:
        tensor: output data
    """
    has_batch_dim = len(inputs.shape) > 1

    # in case the input has more than one batch dimension
    if has_batch_dim:
        batch_dims = inputs.shape[:-1]
        inputs = torch.reshape(inputs, (-1, inputs.shape[-1]))

    # calculate the forward pass as usual
    results = self._evaluate_qnode(inputs)

    if isinstance(results, tuple):
        if has_batch_dim:
            results = [torch.reshape(r, (*batch_dims, *r.shape[1:])) for r in results]
        return torch.stack(results, dim=0)

    # reshape to the correct number of batch dims
    if has_batch_dim:
        results = torch.reshape(results, (*batch_dims, *results.shape[1:]))
    return results
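For context, the reshaping in that forward method only flattens extra leading batch dimensions before evaluation and restores them afterwards; it does not change the size of the last (feature) dimension. A minimal sketch of that logic, with a plain sum standing in for the `_evaluate_qnode` call:

```python
import torch

inputs = torch.rand(2, 3, 4)                 # two leading batch dims, 4 features
has_batch_dim = len(inputs.shape) > 1

if has_batch_dim:
    batch_dims = inputs.shape[:-1]           # (2, 3)
    inputs = torch.reshape(inputs, (-1, inputs.shape[-1]))  # -> (6, 4)

# stand-in for self._evaluate_qnode: one scalar result per flattened row
results = inputs.sum(dim=-1)                 # shape (6,)

if has_batch_dim:
    results = torch.reshape(results, (*batch_dims, *results.shape[1:]))

print(results.shape)  # torch.Size([2, 3])
```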

I found this code in the torch.py file. I tried printing the inputs in this code and I get a tensor of length 4. Is something happening here that changes the length of the inputs tensor?

Hi @Siva_Karthikeya,

I don’t think the issue is in qnode.py. I would encourage you to print out the shape of the inputs/features that are being passed into your VQC function at different steps. Maybe all you need to do is pre-process your data so that it’s sent in the right shape. Try testing this with a small dataset where you know the sizes for sure. It’s possible that your current dataset doesn’t look like you expected.

I hope this helps.

Hello @CatalinaAlbornoz ,

I found out the issue:

def VQC(features, weights, wires_type):
    # Preprocess input data to encode the initial state.
    # qml.templates.AngleEmbedding(features, wires=wires_type)
    print("Features length in VQC", len(features))
    ry_params = [torch.arctan(feature) for feature in features]
    print("ry_params length in VQC", len(ry_params))
    print("wires_type length in VQC", len(wires_type))
    rz_params = [torch.arctan(feature**2) for feature in features]

In the above code, features is a tensor with shape (1, 4). However, when we want to access each element of the tensor in the loop, we can't just use "for feature in features", because iterating over a 2D tensor yields rows, so the loop perceives features as an array of size 1. To access the 4 elements in the tensor, I had to use features.squeeze(). The code worked after that change. Thanks for the support @isaacdevlugt @CatalinaAlbornoz :slight_smile:
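The shape issue can be reproduced in isolation, independent of the QLSTM code (a minimal sketch): iterating over a (1, 4) tensor yields its rows, so the loop sees one element, while squeeze() drops the size-1 dimension so the loop sees all four.

```python
import torch

features = torch.tensor([[0.1, 0.2, 0.3, 0.4]])  # shape (1, 4)

# Iterating a 2D tensor yields its rows, so this list has length 1
print(len([torch.arctan(f) for f in features]))            # 1

# squeeze() drops the size-1 batch dimension -> shape (4,)
print(len([torch.arctan(f) for f in features.squeeze()]))  # 4
```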


@Siva_Karthikeya Thank you for pointing this out. Can you also please share the run time? How long does the QLSTM section run for you?

It’s great to see that you solved it @Siva_Karthikeya! Thanks for posting the solution here!