PennyLane TorchLayer outputs NaN after a few iterations

Hello! I would like to try a hybrid QNN model using a Torch qlayer. However, after I add the qlayer to the model, it outputs NaN and the loss cannot be computed, because the prediction is not between 0 and 1. Is there a problem with how I add the qlayer to the model?


import torch
import torch.nn as nn
import torch.optim as optim
import pennylane as qml


class Classical_CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 32, kernel_size=5, stride=1)
        self.act1 = nn.ReLU()
        self.pool1 = nn.MaxPool1d(kernel_size=3)
        self.conv2 = nn.Conv1d(32, 16, kernel_size=3, stride=1)
        self.pool2 = nn.MaxPool1d(kernel_size=3)
        self.act2 = nn.ReLU()
        self.flat = nn.Flatten()
        # self.fc2 = nn.Linear(32, 5)
        self.qlayer = qml.qnn.TorchLayer(qnode, weight_shapes, init_method)
        self.fc3 = nn.Linear(5, 1)
        self.act3 = nn.ReLU()
        self.act4 = nn.Sigmoid()

    def forward(self, x):
        # input 1x28x1, output 32x24x1
        x = self.act1(self.conv1(x))
        print(x)
        # input 32x24x1, output 32x8x1
        x = self.pool1(x)
        print(x)
        # input 32x8x1, output 16x6x1
        x = self.act2(self.conv2(x))
        print(x)
        # input 16x6x1, output 16x2x1
        x = self.pool2(x)
        print(x)
        # input 16x2x1, output 32
        x = self.flat(x)
        print(x)
        # input 32, output 5 (one Pauli-Z expectation value per qubit)
        x = self.qlayer(x)
        print(x)
        # input 5, output 1
        x = self.fc3(x)
        print(x)
        x = self.act4(x)
        return x


model = Classical_CNN().double()

loss_fn = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.0001)
r = 0
n_epochs = 2
print('training started:')
for epoch in range(n_epochs):
    acct = 0
    countt = 0
    for inputs, labels in trainloader:
        # forward, backward, and then weight update

        y_pred = model(inputs)
        loss = loss_fn(y_pred, labels)
        acct += (torch.round(y_pred) == labels).float().sum()
        countt += len(labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if r % 10 == 0:
            print('training', r, loss)
        r += 1
    acct /= countt
    print("Epoch %d: model training accuracy %.2f%%" % (epoch, acct * 100))

    acc = 0
    count = 0
    for inputs, labels in testloader:
        y_pred = model(inputs)
        acc += (torch.round(y_pred) == labels).float().sum()
        count += len(labels)
    acc /= count
    print("Epoch %d: model accuracy %.2f%%" % (epoch, acc*100))

The full error message is below:

training 2800 tensor(0.3866, dtype=torch.float64, grad_fn=<BinaryCrossEntropyBackward0>)
training 2810 tensor(0.3632, dtype=torch.float64, grad_fn=<BinaryCrossEntropyBackward0>)
training 2820 tensor(0.3815, dtype=torch.float64, grad_fn=<BinaryCrossEntropyBackward0>)
training 2830 tensor(0.3988, dtype=torch.float64, grad_fn=<BinaryCrossEntropyBackward0>)
training 2840 tensor(0.3832, dtype=torch.float64, grad_fn=<BinaryCrossEntropyBackward0>)
training 2850 tensor(0.3528, dtype=torch.float64, grad_fn=<BinaryCrossEntropyBackward0>)
Traceback (most recent call last):
  ..........
RuntimeError: all elements of input should be between 0 and 1

To find out where the NaN value comes from, I first printed the parameters of each layer:
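(For reference, a minimal loop like the following reproduces this kind of dump, assuming the model object defined above:)

# Print every trainable parameter of the model by name.
for name, param in model.named_parameters():
    print(name, param)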

conv1.weight Parameter containing:
tensor([[[ 1.1959e-01, -1.1707e-01,  9.8770e-02,  2.5084e-01, -2.8019e-01]],
        [[-1.7897e-01, -3.3980e-03,  3.9406e-01,  4.1249e-01,  3.3862e-01]],
        [[ 1.2769e-01, -2.2120e-01,  9.7287e-02,  2.1307e-03, -1.6581e-01]],
        [[-2.1863e-01,  5.1576e-02, -9.1628e-02,  4.3487e-01, -2.5362e-01]],
        [[-4.3861e-01, -2.4873e-01,  3.6974e-01, -2.1052e-01, -1.8482e-01]],
        [[ 4.6734e-01,  3.2042e-01,  3.8113e-02,  2.5410e-01,  1.7549e-01]],
        [[-3.9386e-01,  3.5078e-01, -3.5662e-01, -3.5321e-01, -2.2325e-01]],
        [[-1.0859e-01, -2.2519e-01,  1.0696e-01, -1.7628e-01,  2.5859e-01]],
        [[ 1.7423e-01, -4.3598e-01,  4.4256e-01,  3.1433e-01,  2.2158e-01]],
        [[ 3.5199e-01,  3.1536e-01, -2.6921e-01, -1.8752e-01, -8.8039e-02]],
        [[ 1.0377e-01, -1.1039e-01, -1.5969e-02,  1.1568e-01, -3.1932e-02]],
        [[ 2.5751e-01, -2.1263e-01, -3.2091e-01, -2.4839e-01, -3.3312e-01]],
        [[ 3.6777e-02, -5.3736e-02,  3.5256e-01,  3.4794e-01, -1.6691e-01]],
        [[-1.1993e-01,  3.8151e-01, -1.3974e-01, -3.4712e-01,  1.8626e-01]],
        [[-5.3700e-05, -2.6366e-01,  1.7001e-01, -5.0707e-01,  1.1185e-01]],
        [[ 3.6781e-01,  1.5033e-01, -7.9783e-02,  3.5219e-01,  2.8895e-01]],
        [[ 4.0043e-02, -4.9959e-01, -1.0740e-01,  4.5318e-01, -3.9710e-01]],
        [[-1.9448e-01, -1.7198e-01, -1.3765e-01,  1.3688e-01, -3.9559e-01]],
        [[-3.3677e-01,  1.3965e-01, -4.3636e-01,  2.1804e-01, -3.3938e-01]],
        [[ 4.1383e-01, -3.2703e-01, -4.3176e-02, -1.7361e-01,  3.3480e-01]],
        [[ 4.5252e-01, -1.9154e-01, -4.3298e-02,  3.0962e-01, -1.1270e-01]],
        [[ 1.1082e-01, -2.2350e-01,  2.1835e-01, -3.1955e-01,  1.6168e-02]],
        [[ 2.7081e-01, -1.8546e-01, -3.8254e-01,  9.5735e-02,  2.8437e-01]],
        [[ 4.7618e-01,  4.1306e-01,  8.9177e-02,  3.8023e-01,  3.4306e-01]],
        [[ 3.3943e-01,  2.3654e-02,  5.1846e-01, -5.9250e-02, -4.5219e-01]],
        [[ 4.7741e-02, -1.3838e-01, -5.3054e-02,  1.7674e-01,  1.0100e-01]],
        [[-1.0009e-01, -2.2408e-01, -3.4868e-01, -3.2703e-01,  2.1457e-01]],
        [[-2.1881e-01, -6.3516e-01,  1.2710e-01,  2.8336e-01,  2.5406e-01]],
        [[-2.4988e-02,  1.9098e-02,  1.0663e-01, -3.3569e-01,  1.2058e-02]],
        [[-2.4313e-01,  4.7830e-01, -2.4853e-01,  2.1232e-01,  4.7025e-01]],
        [[-2.9979e-01,  2.8048e-01, -1.3527e-01, -5.0742e-01, -2.6945e-01]],
        [[-5.4324e-02,  3.8939e-01,  3.6261e-01, -2.5221e-01, -3.2833e-02]]],
       dtype=torch.float64, requires_grad=True)
conv1.bias Parameter containing:
tensor([ 0.0068,  0.3959,  0.3953,  0.0229,  0.0552,  0.3996, -0.3502, -0.0875,
        -0.2537,  0.3266,  0.0748,  0.2942,  0.1539,  0.1286,  0.3247, -0.2999,
        -0.0301, -0.2183, -0.3197,  0.3059, -0.3824, -0.0218,  0.1027, -0.3561,
        -0.2119,  0.3064, -0.2915,  0.1883, -0.1988, -0.3409, -0.0301, -0.2166],
       dtype=torch.float64, requires_grad=True)
conv2.weight Parameter containing:
tensor([[[-0.0206, -0.0952, -0.0080],
         [ 0.0363,  0.0759,  0.0800],
         [-0.0833, -0.0193, -0.0112],
         ...,
         [ 0.0572, -0.0110,  0.0775],
         [-0.0673,  0.1004,  0.0040],
         [-0.0135, -0.0313,  0.0963]],
        [[-0.0383, -0.0312,  0.0857],
         [-0.0847, -0.0771,  0.0096],
         [ 0.0707, -0.0727, -0.0746],
         ...,
         [ 0.0969,  0.0359, -0.0485],
         [-0.0083, -0.0138,  0.0131],
         [ 0.0307, -0.0160, -0.0160]],
        [[ 0.1186, -0.0574,  0.0178],
         [ 0.0922,  0.0824,  0.0126],
         [-0.0416, -0.0692,  0.0162],
         ...,
         [-0.0304,  0.0931,  0.1863],
         [-0.0886, -0.0185,  0.0766],
         [ 0.0891,  0.1193, -0.0166]],
        ...,
        [[-0.0817, -0.0124,  0.0342],
         [ 0.0420, -0.0424,  0.0645],
         [-0.0905,  0.0552,  0.0643],
         ...,
         [-0.0623,  0.0513, -0.0606],
         [ 0.0305, -0.0100,  0.0882],
         [-0.0629, -0.0805,  0.0784]],
        [[ 0.0872,  0.0286, -0.0537],
         [-0.0557,  0.0919, -0.0402],
         [ 0.0517, -0.0768, -0.0614],
         ...,
         [ 0.0245,  0.0096, -0.0715],
         [-0.0854,  0.0182,  0.0069],
         [-0.0749, -0.0112,  0.0048]],
        [[ 0.0156,  0.0373,  0.0182],
         [-0.0868,  0.0469, -0.0632],
         [-0.0108,  0.0218,  0.0782],
         ...,
         [ 0.1401, -0.1966,  0.0885],
         [-0.0172, -0.0378,  0.0175],
         [ 0.0157,  0.1584, -0.0054]]], dtype=torch.float64,
       requires_grad=True)
conv2.bias Parameter containing:
tensor([-0.0763, -0.0457, -0.0216, -0.0296,  0.0279, -0.0197,  0.0224, -0.0979,
         0.0708, -0.0530, -0.0545,  0.0302, -0.0705,  0.0475, -0.0746,  0.0137],
       dtype=torch.float64, requires_grad=True)
qlayer.weights Parameter containing:
tensor([[-0.5297, -0.5809,  0.3629, -0.2268, -0.2238],
        [-0.0574,  1.0602, -1.7817, -0.9681, -0.5763],
        [-0.6387,  1.1633, -0.7941,  0.2476,  0.6859],
        [-2.0219, -2.0591, -0.7811, -1.9829,  0.6555],
        [ 0.6729, -1.0345, -0.4255, -0.8693, -0.7379]], dtype=torch.float64,
       requires_grad=True)
fc3.weight Parameter containing:
tensor([[ 0.4251, -0.4292,  0.6884,  0.3252,  0.3439]], dtype=torch.float64,
       requires_grad=True)
fc3.bias Parameter containing:
tensor([0.2829], dtype=torch.float64, requires_grad=True)

Then I fed the batch that produces the NaN prediction through the model to check the output of each layer:
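(The print() calls inside forward() produce the dump below. A less intrusive alternative would be forward hooks; the following is just a sketch, where bad_inputs is a placeholder name for the offending batch:)

# Register a forward hook on every leaf module so each output is printed
# without editing forward().
def show_output(module, inputs, output):
    print(module.__class__.__name__, output)

handles = [m.register_forward_hook(show_output)
           for m in model.modules() if len(list(m.children())) == 0]
y = model(bad_inputs)  # bad_inputs: placeholder for the batch that triggers NaN
for h in handles:
    h.remove()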

tensor([[[0.0181, 0.0105, 0.0349, 0.0064, 0.0075, 0.0000, 0.0579, 0.0461,
          0.0000, 0.0374, 0.0120, 0.0000, 0.0000, 0.0473, 0.0116, 0.0000,
          0.0228, 0.0063, 0.0062, 0.0024, 0.0000, 0.0702, 0.0410, 0.0000],
         [0.4610, 0.3538, 0.3568, 0.3961, 0.3952, 0.4988, 0.5515, 0.5684,
          0.4496, 0.3599, 0.3797, 0.4116, 0.4705, 0.4790, 0.4482, 0.3826,
          0.3720, 0.3965, 0.3980, 0.4050, 0.5014, 0.5314, 0.5208, 0.4482],
         [0.3820, 0.3758, 0.4250, 0.3944, 0.3959, 0.3451, 0.3811, 0.4173,
          0.3369, 0.4190, 0.3963, 0.3895, 0.3734, 0.4026, 0.3925, 0.3750,
          0.4124, 0.3950, 0.3947, 0.3922, 0.3484, 0.3917, 0.4188, 0.3174],
         [0.0000, 0.0000, 0.0000, 0.0227, 0.0220, 0.0000, 0.1323, 0.0219,
          0.0510, 0.0000, 0.0058, 0.0000, 0.0202, 0.0744, 0.0144, 0.0142,
          0.0000, 0.0224, 0.0228, 0.0194, 0.0000, 0.1354, 0.0114, 0.0000],
         [0.0105, 0.0000, 0.0000, 0.0534, 0.0528, 0.0000, 0.0000, 0.1397,
          0.0027, 0.0000, 0.0042, 0.0212, 0.0155, 0.0536, 0.0868, 0.0000,
          0.0000, 0.0549, 0.0541, 0.0514, 0.0000, 0.0000, 0.1459, 0.0000],
         [0.5608, 0.5906, 0.5086, 0.4021, 0.4023, 0.4533, 0.4927, 0.4424,
          0.5124, 0.5720, 0.4566, 0.4343, 0.4411, 0.4362, 0.4276, 0.4757,
          0.4619, 0.3999, 0.4008, 0.4041, 0.4551, 0.4802, 0.4273, 0.5235],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.3957, 0.4851, 0.4070, 0.3284, 0.3284, 0.2999, 0.2619, 0.2240,
          0.3894, 0.4490, 0.3730, 0.3369, 0.3014, 0.2823, 0.3132, 0.3938,
          0.3735, 0.3264, 0.3259, 0.3236, 0.2975, 0.2671, 0.2517, 0.4024],
         [0.0598, 0.0754, 0.0986, 0.0744, 0.0753, 0.0651, 0.1070, 0.0788,
          0.0453, 0.0957, 0.0788, 0.0774, 0.0788, 0.0890, 0.0648, 0.0675,
          0.0886, 0.0747, 0.0749, 0.0745, 0.0678, 0.1059, 0.0723, 0.0411],
         [0.2051, 0.3067, 0.3517, 0.2931, 0.2952, 0.1930, 0.1889, 0.1589,
          0.1892, 0.3383, 0.3072, 0.2826, 0.2321, 0.2382, 0.2363, 0.2843,
          0.3286, 0.2936, 0.2925, 0.2860, 0.1934, 0.2061, 0.1915, 0.1857],
         [0.2275, 0.1527, 0.1643, 0.1538, 0.1542, 0.1033, 0.2446, 0.2842,
          0.1855, 0.1769, 0.1546, 0.1437, 0.1564, 0.2254, 0.1970, 0.1493,
          0.1588, 0.1535, 0.1539, 0.1525, 0.1148, 0.2536, 0.2641, 0.1302],
         [0.1725, 0.1854, 0.1006, 0.1303, 0.1281, 0.1851, 0.0400, 0.0639,
          0.2156, 0.1197, 0.1360, 0.1363, 0.1287, 0.0724, 0.1373, 0.1709,
          0.1127, 0.1290, 0.1287, 0.1307, 0.1746, 0.0351, 0.0848, 0.2529],
         [0.2987, 0.2614, 0.3248, 0.3234, 0.3243, 0.3587, 0.1805, 0.3363,
          0.2358, 0.3091, 0.3122, 0.3327, 0.3034, 0.2692, 0.3285, 0.2895,
          0.3247, 0.3249, 0.3242, 0.3253, 0.3464, 0.1873, 0.3507, 0.2695],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0718, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0227, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0809, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.2803, 0.3317, 0.4009, 0.3049, 0.3077, 0.4078, 0.2830, 0.2933,
          0.1944, 0.4003, 0.3274, 0.3494, 0.3381, 0.2796, 0.2768, 0.2918,
          0.3610, 0.3065, 0.3068, 0.3118, 0.3963, 0.2666, 0.2833, 0.2627],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0044, 0.0000, 0.0000, 0.0000, 0.0000, 0.0168,
          0.0000, 0.0022, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0253, 0.0000],
         [0.0117, 0.1268, 0.1632, 0.1021, 0.1039, 0.1890, 0.1571, 0.0086,
          0.0167, 0.1503, 0.1181, 0.1358, 0.1474, 0.0881, 0.0384, 0.0973,
          0.1388, 0.1032, 0.1040, 0.1080, 0.1832, 0.1315, 0.0000, 0.0792],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.2680, 0.2870, 0.3171, 0.3058, 0.3066, 0.3371, 0.3690, 0.3109,
          0.2680, 0.3060, 0.3041, 0.3159, 0.3325, 0.3262, 0.2894, 0.2913,
          0.3127, 0.3066, 0.3071, 0.3090, 0.3380, 0.3583, 0.2948, 0.2795],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0288, 0.0000, 0.1376, 0.1850, 0.1869, 0.2653, 0.2967, 0.2641,
          0.0201, 0.0710, 0.1386, 0.1961, 0.2424, 0.2351, 0.1599, 0.0880,
          0.1591, 0.1888, 0.1898, 0.1947, 0.2651, 0.2766, 0.2205, 0.0412],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0190, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0039],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]],
       dtype=torch.float64, grad_fn=<ReluBackward0>)
tensor([[[0.0349, 0.0075, 0.0579, 0.0374, 0.0473, 0.0228, 0.0062, 0.0702],
         [0.4610, 0.4988, 0.5684, 0.4116, 0.4790, 0.3965, 0.5014, 0.5314],
         [0.4250, 0.3959, 0.4173, 0.4190, 0.4026, 0.4124, 0.3947, 0.4188],
         [0.0000, 0.0227, 0.1323, 0.0058, 0.0744, 0.0224, 0.0228, 0.1354],
         [0.0105, 0.0534, 0.1397, 0.0212, 0.0868, 0.0549, 0.0541, 0.1459],
         [0.5906, 0.4533, 0.5124, 0.5720, 0.4411, 0.4757, 0.4551, 0.5235],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.4851, 0.3284, 0.3894, 0.4490, 0.3132, 0.3938, 0.3259, 0.4024],
         [0.0986, 0.0753, 0.1070, 0.0957, 0.0890, 0.0886, 0.0749, 0.1059],
         [0.3517, 0.2952, 0.1892, 0.3383, 0.2382, 0.3286, 0.2925, 0.2061],
         [0.2275, 0.1542, 0.2842, 0.1769, 0.2254, 0.1588, 0.1539, 0.2641],
         [0.1854, 0.1851, 0.2156, 0.1363, 0.1373, 0.1709, 0.1746, 0.2529],
         [0.3248, 0.3587, 0.3363, 0.3327, 0.3285, 0.3249, 0.3464, 0.3507],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0718, 0.0000, 0.0227, 0.0000, 0.0000, 0.0809],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.4009, 0.4078, 0.2933, 0.4003, 0.3381, 0.3610, 0.3963, 0.2833],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0044, 0.0000, 0.0168, 0.0022, 0.0000, 0.0000, 0.0000, 0.0253],
         [0.1632, 0.1890, 0.1571, 0.1503, 0.1474, 0.1388, 0.1832, 0.1315],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.3171, 0.3371, 0.3690, 0.3159, 0.3325, 0.3127, 0.3380, 0.3583],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.1376, 0.2653, 0.2967, 0.1961, 0.2424, 0.1888, 0.2651, 0.2766],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0190, 0.0000, 0.0000, 0.0000, 0.0000, 0.0039],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]],
       dtype=torch.float64, grad_fn=<SqueezeBackward1>)
tensor([[[0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.]]], dtype=torch.float64,
       grad_fn=<ReluBackward0>)
tensor([[[0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.],
         [0., 0.]]], dtype=torch.float64, grad_fn=<SqueezeBackward1>)
tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
         0., 0., 0., 0., 0., 0., 0., 0.]], dtype=torch.float64,
       grad_fn=<ReshapeAliasBackward0>)
tensor([[nan, nan, nan, nan, nan]], dtype=torch.float64,
       grad_fn=<ReshapeAliasBackward0>)
tensor([[nan]], dtype=torch.float64, grad_fn=<AddmmBackward0>)

We can see that after the data pass through the qlayer, the output becomes NaN. Note that in this case the flattened vector going into the qlayer is all zeros. Please help me figure out how to fix this. The dataset itself should be fine, because once I remove the qlayer the model works correctly.

The qnode I used is below:

import pennylane as qml

n_qubits = 5
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AmplitudeEmbedding(inputs, wires=range(n_qubits), pad_with=0., normalize=True)
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 5
weight_shapes = {"weights": (n_layers, n_qubits)}
init_method = torch.nn.init.normal_
# qlayer = qml.qnn.TorchLayer(qnode, weight_shapes, init_method)
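Since the flattened vector that reaches the qlayer is all zeros in the dump above, it may be worth testing the qnode on a zero vector in isolation: AmplitudeEmbedding with normalize=True divides the features by their norm, and an all-zero vector has zero norm. A minimal sketch using the qnode as defined here:

# Feed an all-zero 32-dimensional vector (the qlayer input observed above)
# directly into the qnode; if the normalization divides by the zero norm,
# every expectation value should come back NaN.
zero_input = torch.zeros(32, dtype=torch.float64)
zero_weights = torch.zeros((n_layers, n_qubits), dtype=torch.float64)
print(qnode(zero_input, zero_weights))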

Name: PennyLane
Version: 0.33.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: c:\users\admin\anaconda3_new\lib\site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-qiskit
Platform info: Windows-10-10.0.19045-SP0
Python version: 3.9.13
Numpy version: 1.23.5
Scipy version: 1.10.0
Installed devices:

  • default.gaussian (PennyLane-0.33.0)
  • default.mixed (PennyLane-0.33.0)
  • default.qubit (PennyLane-0.33.0)
  • default.qubit.autograd (PennyLane-0.33.0)
  • default.qubit.jax (PennyLane-0.33.0)
  • default.qubit.legacy (PennyLane-0.33.0)
  • default.qubit.tf (PennyLane-0.33.0)
  • default.qubit.torch (PennyLane-0.33.0)
  • default.qutrit (PennyLane-0.33.0)
  • null.qubit (PennyLane-0.33.0)
  • lightning.qubit (PennyLane-Lightning-0.33.0)
  • qiskit.aer (PennyLane-qiskit-0.33.0)
  • qiskit.basicaer (PennyLane-qiskit-0.33.0)
  • qiskit.ibmq (PennyLane-qiskit-0.33.0)
  • qiskit.ibmq.circuit_runner (PennyLane-qiskit-0.33.0)
  • qiskit.ibmq.sampler (PennyLane-qiskit-0.33.0)
  • qiskit.remote (PennyLane-qiskit-0.33.0)

Hello @Zihao_Wang, I tried running your code but am having issues with training. The only thing I could find that differs from the demo format is self.qlayer = qml.qnn.TorchLayer(qnode, weight_shapes).

Hey @Zihao_Wang! Welcome to the forum :rocket:

I tried running your code example but I’m missing what qnode is when you create a TorchLayer:

        self.qlayer = qml.qnn.TorchLayer(qnode, weight_shapes,init_method)

I think trainloader and testloader are also missing. If you could provide the full code to me, I’ll run your example and let you know what I find :grin:
