Dimension reduction and combination with PyTorch

Hi, I’m writing code that deals with a dataset of 32 features, and its main goal is to work as a binary classifier with quantum operations in it. I implemented the class using PyTorch interfaces, and here’s the definition of a class derived from torch.nn.Module:

import numpy as np
import torch
import pennylane as qml
from pennylane import CNOT
from pennylane.templates import AmplitudeEmbedding, AngleEmbedding, StronglyEntanglingLayers


class HybridNNV1(torch.nn.Module):
    def __init__(self, input_dim, output_dim, weight=None):
        super(HybridNNV1, self).__init__()
        num_qubits = int(np.ceil(np.log2(input_dim)))  # 5 qubits for 32 features
        dev = qml.device("default.qubit", wires=num_qubits)

        @qml.qnode(dev, interface='torch')
        def circuit_amp_emb(inputs):
            # normalize=True so arbitrary feature vectors form a valid quantum state
            AmplitudeEmbedding(features=inputs, wires=range(num_qubits), normalize=True)
            return [qml.expval(qml.PauliZ(i)) for i in range(num_qubits)]

        @qml.qnode(dev)
        def circuit_emb_ent(inputs, w):
            AngleEmbedding(features=inputs, wires=range(num_qubits), rotation='Y')
            StronglyEntanglingLayers(weights=w, wires=range(num_qubits), ranges=None, imprimitive=CNOT)
            return qml.probs(wires=0)

        weight_shapes = {"w": (2, num_qubits, 3)}
        self.__circuit_amp_emb = circuit_amp_emb
        self.__qlayer = qml.qnn.TorchLayer(circuit_emb_ent, weight_shapes)

    def forward(self, x):
        # Amplitude-embed each sample and read out one Pauli-Z expectation per qubit
        a = []
        for elem in x:
            embedding_res = self.__circuit_amp_emb(elem)
            a.append(embedding_res)

        b = torch.stack(a)
        return self.__qlayer(b)

Since I heard that amplitude embedding is non-differentiable, I used the AmplitudeEmbedding template on the input tensor (the 32 classical features), then gathered the Pauli-Z expectation value of each of the 5 qubits. I stacked those into a new tensor and fed it into a second QNode built from AngleEmbedding and StronglyEntanglingLayers. Finally, I measured the probabilities of qubit 0 and fed those values into torch.nn.CrossEntropyLoss.
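For reference, here is a rough sketch of the training loop I have in mind (X and y are placeholder names for my feature tensor and integer-label tensor):

# Sketch only: X is a (N, 32) float tensor, y a (N,) long tensor of class
# indices -- both placeholder names
model = HybridNNV1(input_dim=32, output_dim=2)
loss_fn = torch.nn.CrossEntropyLoss()  # applies its own log-softmax to its input
opt = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(30):
    opt.zero_grad()
    probs = model(X)  # shape (N, 2), from qml.probs(wires=0)
    loss = loss_fn(probs, y)
    loss.backward()
    opt.step()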

Due to the hardware resource restrictions of the simulator, I think this is the best approach I can come up with. Do you have a better approach for building a class derived from torch.nn.Module that successfully classifies a 32-feature dataset into binary classes?

Thanks

Hey @akawarren,

It’s an interesting approach! Did you manage to get it to work?

One option could be to use classical layers to perform dimensionality reduction before feeding into a QNode of simulatable width. For example, we can use two classical layers to shrink from 32 -> 16 -> 4 dimensions and then pass through a 4-dimensional QNode. We can then use an output classical layer to go from 4 -> 2 and (optionally) apply a softmax. I got the following to work:

import pennylane as qml
import torch
import numpy as np

wires = 4
n_strong_layers = 3

dev = qml.device("default.qubit", wires=wires)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(wires))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(wires))
    return [qml.expval(qml.PauliZ(i)) for i in range(wires)]

weight_shapes = {"weights": (n_strong_layers, wires, 3)}

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
clayer1 = torch.nn.Linear(32, 16)
clayer2 = torch.nn.Linear(16, wires)
clayer3 = torch.nn.Linear(wires, 2)
sm = torch.nn.Softmax(dim=1)

model = torch.nn.Sequential(clayer1, clayer2, qlayer, clayer3, sm)

dat = torch.tensor(np.random.random((10, 32))).float()
model(dat)
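To train it, since the model ends in a softmax, one option is to take the log of the output and use NLLLoss. A sketch of a single training step, continuing from the snippet above (X and y are placeholder tensors):

# X: (N, 32) float tensor of features, y: (N,) long tensor of 0/1 labels
loss_fn = torch.nn.NLLLoss()  # expects log-probabilities
opt = torch.optim.SGD(model.parameters(), lr=0.1)

opt.zero_grad()
out = model(X)                     # softmax probabilities, shape (N, 2)
loss = loss_fn(torch.log(out), y)
loss.backward()
opt.step()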

Off the top of my head, a more ambitious idea might be to use multiple embedding layers to input the higher-dimensional data. Suppose we want to encode 32-dimensional data in a 4-qubit QNode. We could do something like:
(AngleEmbed + StronglyEntanglingLayers) * 8
where each AngleEmbed steps through the input data with a stride of 4. I also managed to get this to work:

import pennylane as qml
import torch
import numpy as np

wires = 4
input_dim = 32
n_strong_layers = int(input_dim / wires)

dev = qml.device("default.qubit", wires=wires)

@qml.qnode(dev)
def qnode(inputs, weights):
    # Step through the 32 inputs in chunks of 4, alternating embedding and
    # entangling layers; plain slicing keeps the computation differentiable
    for i in range(n_strong_layers):
        qml.templates.AngleEmbedding(inputs[i * wires:(i + 1) * wires], wires=range(wires))
        qml.templates.StronglyEntanglingLayers(weights[i:i + 1], wires=range(wires))
    return [qml.expval(qml.PauliZ(i)) for i in range(2)]

weight_shapes = {"weights": (n_strong_layers, wires, 3)}

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
softmax = torch.nn.Softmax(dim=1)
model = torch.nn.Sequential(qlayer, softmax)

dat = torch.tensor(np.random.random((10, 32))).float()
model(dat)
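If it helps to sanity-check the stride pattern, recent PennyLane versions can print the circuit with qml.draw (NumPy arrays are fine for drawing):

print(qml.draw(qnode)(np.random.random(input_dim),
                      np.random.random((n_strong_layers, wires, 3))))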

I’m not sure if this is a good idea or not though!

Thanks!
Tom


Hi, @Tom_Bromley

I’ve tested the original code with my dataset for 30 epochs, and it seemed to be heading toward minimizing the loss. However, it showed somewhat poorer results than using Amplitude Embedding and Strongly Entangling Layers with the NumPy optimizers provided by PennyLane.

Since I’m behind a company proxy, I’m sorry that I’m not able to upload images of the test results. However, what I found interesting was that when I combined the QNode measurement results with the classical data into a new tensor and fed it to another classical layer, the overall accuracy and loss showed a clear improvement over a neural network made only of classical linear layers.
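Roughly, the combination I tried looks like the sketch below (layer sizes are placeholders, and qlayer stands for a 4-qubit TorchLayer like the one in your first example):

class CombinedNN(torch.nn.Module):
    # Sketch: concatenate QNode measurements with the raw classical
    # features before a final classical layer
    def __init__(self, qlayer):
        super().__init__()
        self.reduce = torch.nn.Linear(32, 4)   # classical reduction to QNode width
        self.qlayer = qlayer                   # 4-qubit TorchLayer returning 4 expvals
        self.out = torch.nn.Linear(4 + 32, 2)  # quantum outputs + original features

    def forward(self, x):
        q = self.qlayer(self.reduce(x))      # quantum features, shape (N, 4)
        combined = torch.cat([q, x], dim=1)  # shape (N, 36)
        return self.out(combined)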

Maybe I can also share the results of the code you’ve shared in the near future.

Thanks for the kind advice.
