Quantum Transfer Learning Circuit Visualization

Hello! If applicable, put your complete code example down below. Make sure that your code:

  • is 100% self-contained — someone can copy-paste exactly what is here and run it to reproduce the behaviour you are observing
  • includes comments

Greetings,
I have been trying to visualize the circuit in the Quantum Transfer Learning example at:
Quantum transfer learning

I would highly appreciate any help on how to visualize this circuit, or pointers to where such a visualization may already be available.

Thanks

# Put code here
def H_layer(nqubits):
    """Layer of single-qubit Hadamard gates.
    """
    for idx in range(nqubits):
        qml.Hadamard(wires=idx)


def RY_layer(w):
    """Layer of parametrized qubit rotations around the y axis.
    """
    for idx, element in enumerate(w):
        qml.RY(element, wires=idx)


def entangling_layer(nqubits):
    """Layer of CNOTs followed by another shifted layer of CNOT.
    """
    # In other words it should apply something like :
    # CNOT  CNOT  CNOT  CNOT...  CNOT
    #   CNOT  CNOT  CNOT...  CNOT
    for i in range(0, nqubits - 1, 2):  # Loop over even indices: i=0,2,...N-2
        qml.CNOT(wires=[i, i + 1])
    for i in range(1, nqubits - 1, 2):  # Loop over odd indices:  i=1,3,...N-3
        qml.CNOT(wires=[i, i + 1])

@qml.qnode(dev, interface="torch")
def quantum_net(q_input_features, q_weights_flat):
    """
    The variational quantum circuit.
    """

    # Reshape weights
    q_weights = q_weights_flat.reshape(q_depth, n_qubits)

    # Start from state |+> , unbiased w.r.t. |0> and |1>
    H_layer(n_qubits)

    # Embed features in the quantum node
    RY_layer(q_input_features)

    # Sequence of trainable variational layers
    for k in range(q_depth):
        entangling_layer(n_qubits)
        RY_layer(q_weights[k])

    # Expectation values in the Z basis
    exp_vals = [qml.expval(qml.PauliZ(position)) for position in range(n_qubits)]
    return tuple(exp_vals)
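For completeness, the code above relies on a few globals from the tutorial. Reproduced here so the snippet is self-contained (the values are the tutorial's defaults):

import pennylane as qml
import torch

n_qubits = 4    # Number of qubits
q_depth = 6     # Depth of the quantum circuit (number of variational layers)
q_delta = 0.01  # Initial spread of the random quantum weights
dev = qml.device("default.qubit", wires=n_qubits)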

If you want help with diagnosing an error, please put the full error message below:

# Put full error message here

And, finally, make sure to include the versions of your packages. Specifically, show us the output of qml.about().
Name: PennyLane
Version: 0.30.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /usr/local/lib/python3.10/dist-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml
Required-by: PennyLane-Lightning

Platform info: Linux-5.15.107+-x86_64-with-glibc2.31
Python version: 3.10.12
Numpy version: 1.22.4
Scipy version: 1.10.1
Installed devices:

fig, ax = qml.draw_mpl(circuit, expansion_strategy='device')(your parameters)

OR

print(qml.draw(circuit, expansion_strategy='device')(your parameters))

Hey @mas0047! Welcome to the forum :rocket:

Thanks @wing_chen for the answer — those two options work just fine!


Many thanks!

I have tried these solutions. What I was not able to resolve is: 1. what to pass as the 'parameters' argument, and 2. exactly where in the code to make this call.
The example is based on the pretrained ResNet18 model. Further on in the code:

class DressedQuantumNet(nn.Module):
    """
    Torch module implementing the *dressed* quantum net.
    """

    def __init__(self):
        """
        Definition of the *dressed* layout.
        """

        super().__init__()
        self.pre_net = nn.Linear(512, n_qubits)
        self.q_params = nn.Parameter(q_delta * torch.randn(q_depth * n_qubits))
        self.post_net = nn.Linear(n_qubits, 2)

    def forward(self, input_features):
        """
        Defining how tensors are supposed to move through the *dressed* quantum
        net.
        """

        # obtain the input features for the quantum circuit
        # by reducing the feature dimension from 512 to 4
        pre_out = self.pre_net(input_features)
        q_in = torch.tanh(pre_out) * np.pi / 2.0

        # Apply the quantum circuit to each element of the batch and append to q_out
        q_out = torch.Tensor(0, n_qubits)
        q_out = q_out.to(device)
        for elem in q_in:
            q_out_elem = torch.hstack(quantum_net(elem, self.q_params)).float().unsqueeze(0)
            q_out = torch.cat((q_out, q_out_elem))

        # return the two-dimensional prediction from the postprocessing layer
        return self.post_net(q_out)

model_hybrid = torchvision.models.resnet18(pretrained=True)

for param in model_hybrid.parameters():
    param.requires_grad = False


# Notice that model_hybrid.fc is the last layer of ResNet18
model_hybrid.fc = DressedQuantumNet()

# Use CUDA or CPU according to the "device" object.
model_hybrid = model_hybrid.to(device)

criterion = nn.CrossEntropyLoss()

optimizer_hybrid = optim.Adam(model_hybrid.fc.parameters(), lr=step)

exp_lr_scheduler = lr_scheduler.StepLR(
    optimizer_hybrid, step_size=10, gamma=gamma_lr_scheduler
)

def train_model(model, criterion, optimizer, scheduler, num_epochs):
    since = time.time()
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0
    best_loss = 10000.0  # Large arbitrary number
    best_acc_train = 0.0
    best_loss_train = 10000.0  # Large arbitrary number
    print("Training started:")

    for epoch in range(num_epochs):

        # Each epoch has a training and validation phase
        for phase in ["train", "validation"]:
            if phase == "train":
                # Set model to training mode
                model.train()
            else:
                # Set model to evaluate mode
                model.eval()
            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            n_batches = dataset_sizes[phase] // batch_size
            it = 0
            for inputs, labels in dataloaders[phase]:
                since_batch = time.time()
                batch_size_ = len(inputs)
                inputs = inputs.to(device)
                labels = labels.to(device)
                optimizer.zero_grad()

                # Track/compute gradient and make an optimization step only when training
                with torch.set_grad_enabled(phase == "train"):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)
                    if phase == "train":
                        loss.backward()
                        optimizer.step()

                # Print iteration results
                running_loss += loss.item() * batch_size_
                batch_corrects = torch.sum(preds == labels.data).item()
                running_corrects += batch_corrects
                print(
                    "Phase: {} Epoch: {}/{} Iter: {}/{} Batch time: {:.4f}".format(
                        phase,
                        epoch + 1,
                        num_epochs,
                        it + 1,
                        n_batches + 1,
                        time.time() - since_batch,
                    ),
                    end="\r",
                    flush=True,
                )
                it += 1

            # Print epoch results
            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects / dataset_sizes[phase]
            print(
                "Phase: {} Epoch: {}/{} Loss: {:.4f} Acc: {:.4f}        ".format(
                    "train" if phase == "train" else "validation  ",
                    epoch + 1,
                    num_epochs,
                    epoch_loss,
                    epoch_acc,
                )
            )

            # Check if this is the best model wrt previous epochs
            if phase == "validation" and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
            if phase == "validation" and epoch_loss < best_loss:
                best_loss = epoch_loss
            if phase == "train" and epoch_acc > best_acc_train:
                best_acc_train = epoch_acc
            if phase == "train" and epoch_loss < best_loss_train:
                best_loss_train = epoch_loss

            # Update learning rate
            if phase == "train":
                scheduler.step()

    # Print final results
    model.load_state_dict(best_model_wts)
    time_elapsed = time.time() - since
    print(
        "Training completed in {:.0f}m {:.0f}s".format(time_elapsed // 60, time_elapsed % 60)
    )
    print("Best test loss: {:.4f} | Best test accuracy: {:.4f}".format(best_loss, best_acc))
    return model

model_hybrid = train_model(
    model_hybrid, criterion, optimizer_hybrid, exp_lr_scheduler, num_epochs=num_epochs
)

DressedQuantumNet seems to set up the input parameters here:

for elem in q_in:
    q_out_elem = torch.hstack(quantum_net(elem, self.q_params)).float().unsqueeze(0)
    q_out = torch.cat((q_out, q_out_elem))

Follow-up questions: 1. At what point in the code should the circuit-drawing call be made, and 2. what should I use as input parameters?

These parameters seem to be set at runtime based on ResNet18, but I may be mistaken.

Many thanks!

One option is to use random inputs, for example inp_arr = torch.tensor(np.pi * np.random.randn(n_qubits)).
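A minimal sketch of that call, assuming quantum_net, n_qubits, and q_depth are defined as above (the weights here are random placeholders, only there to render the gates):

import numpy as np
import torch

inp_arr = torch.tensor(np.pi * np.random.randn(n_qubits))
rand_weights = torch.tensor(np.random.randn(q_depth * n_qubits))  # placeholder weights
print(qml.draw(quantum_net, expansion_strategy="device")(inp_arr, rand_weights))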

Hey @mas0047,

What I was not able to resolve is: 1. the ‘parameters’ parameter 2. Exactly where to make this call.

I’m not sure I understand :thinking:… If you want to draw your quantum circuit, using qml.draw should suffice. Check the documentation for more details: qml.draw — PennyLane 0.33.0 documentation

That said, it looks like you’re creating a hybrid model with torch. I highly recommend using qml.qnn.TorchLayer — PennyLane has built-in interfacing with all of the big machine learning libraries :slight_smile:. Check out this demo: Turning quantum nodes into Torch Layers | PennyLane Demos
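A minimal sketch of that approach (the ansatz below is illustrative, not the demo's exact circuit; note that TorchLayer requires the QNode's input argument to be named inputs):

import torch
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    # Encode the classical features, then apply trainable entangling layers
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (6, n_qubits)}  # 6 variational layers
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

# Classical -> quantum -> classical, like the dressed net
model = torch.nn.Sequential(
    torch.nn.Linear(512, n_qubits),
    qlayer,
    torch.nn.Linear(n_qubits, 2),
)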

  1. At what point in the code should the circuit draw call be made (and) 2. what should I use as input parameters?

You can draw the circuit anytime you’d like! For input parameters, this isn’t a crucial choice to make. If you just want to look at your circuit to see the gates, then any random parameters will do the trick.

Please note that if you try to visualize/draw an instance of qml.qnn.TorchLayer, this will be possible in v0.31 — we are currently on v0.30 :slight_smile:. There was a PR made for adding this feature: Allow circuit drawing with `KerasLayer` and `TorchLayer` by eddddddy · Pull Request #4197 · PennyLaneAI/pennylane · GitHub



Many thanks for your help!
Please find attached a visualization of the circuit in question. It is based upon:

def H_layer(nqubits):
    """Layer of single-qubit Hadamard gates.
    """
    for idx in range(nqubits):
        qml.Hadamard(wires=idx)


def RY_layer(w):
    """Layer of parametrized qubit rotations around the y axis.
    """
    for idx, element in enumerate(w):
        qml.RY(element, wires=idx)


def entangling_layer(nqubits):
    """Layer of CNOTs followed by another shifted layer of CNOT.
    """
    # In other words it should apply something like :
    # CNOT  CNOT  CNOT  CNOT...  CNOT
    #   CNOT  CNOT  CNOT...  CNOT
    for i in range(0, nqubits - 1, 2):  # Loop over even indices: i=0,2,...N-2
        qml.CNOT(wires=[i, i + 1])
    for i in range(1, nqubits - 1, 2):  # Loop over odd indices:  i=1,3,...N-3
        qml.CNOT(wires=[i, i + 1])

I understand the initial H and RY layers. A question:
There are six (6) more RYs on each qubit, in addition to the first RY layer. May I ask why there are six additional RYs on each qubit?
Thanks!

Can you show me the exact code you’re using to generate that plot? My first guess is that somewhere you’re looping in extra RY_layer calls.

Many thanks!

Code where the circuit draw is being called, in DressedQuantumNet:

class DressedQuantumNet(nn.Module):
    """
    Torch module implementing the *dressed* quantum net.
    """

    def __init__(self):
        """
        Definition of the *dressed* layout.
        """

        super().__init__()
        self.pre_net = nn.Linear(512, n_qubits)
        self.q_params = nn.Parameter(q_delta * torch.randn(q_depth * n_qubits))
        self.post_net = nn.Linear(n_qubits, 2)

    def forward(self, input_features):
        """
        Defining how tensors are supposed to move through the *dressed* quantum
        net.
        """

        # obtain the input features for the quantum circuit
        # by reducing the feature dimension from 512 to 4
        pre_out = self.pre_net(input_features)
        q_in = torch.tanh(pre_out) * np.pi / 2.0

        # Apply the quantum circuit to each element of the batch and append to q_out
        q_out = torch.Tensor(0, n_qubits)
        q_out = q_out.to(device)
        fig, ax = qml.draw_mpl(quantum_net, expansion_strategy='device')(q_in, self.q_params)
        plt.show()
        fig.show()
        for elem in q_in:
            q_out_elem = torch.hstack(quantum_net(elem, self.q_params)).float().unsqueeze(0)
            q_out = torch.cat((q_out, q_out_elem))
            fig, ax = qml.draw_mpl(quantum_net, expansion_strategy='device')(elem, self.q_params)
            plt.show()
            fig.show()

        # return the two-dimensional prediction from the postprocessing layer
        return self.post_net(q_out)

Lines that create the diagram:
fig, ax = qml.draw_mpl(quantum_net, expansion_strategy='device')(q_in, self.q_params)
plt.show()
fig.show()

Would highly appreciate your feedback!

Another question:

I have tried running this code from a Jupyter notebook on Google Cloud, connecting to IBM Quantum through an access token, but it took a very long time to run a single epoch. I had to disconnect without completing the first epoch.

Is there a process for running PennyLane code from a Jupyter notebook on IBM's hardware or another actual quantum environment?

Thanks!

I must warn you that the quantum transfer learning example is a bit misleading. Why? If you change the feature extractor to ResNet152 (or any other extractor, like EfficientNet-B1) that is a bit better than the original ResNet18, then irrespective of the circuit you choose, the training process will be highly successful. The quantum circuit in this experiment has almost no effect on the end result; it's the other classical layers that do the learning. Just change the original quantum circuit to something simpler and test it. This specific experiment does not demonstrate any quantum advantage.

Many thanks!
I have tried a couple of other models, but not ResNet152. I will give it a try as well.
Side question: I have been running this example as a Jupyter notebook on Google Colab with GPUs. I tried connecting the notebook to the IBM Quantum environment using an access token but wasn't successful: epoch 1 was taking a very long time, so I had to disconnect.
Would you be able to recommend a process (or steps to follow) for running this or similar examples in an actual quantum environment? I would highly appreciate it.

Thanks again!

Hey @mas0047,

I strongly recommend against putting circuit drawing calls in your forward function. Circuit drawing in PennyLane is really just meant to visualize the circuit, not look at the parameters in the circuit. If you want to see the circuit parameters, I recommend just printing them off separately from circuit drawing. That, and this is going to slow down your forward passes, which is a large problem!
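If you want the diagram, a sketch of drawing once, outside the model (assuming the demo's quantum_net and globals; the zero-valued tensors are placeholders just to fix the shapes):

import torch

sample_in = torch.zeros(n_qubits)
sample_weights = torch.zeros(q_depth * n_qubits)
fig, ax = qml.draw_mpl(quantum_net, expansion_strategy="device")(sample_in, sample_weights)
fig.savefig("circuit.png")  # or plt.show()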

It could be that the frequency with which you're drawing your circuit, without flushing out your plot, is adding extra parts to your circuit drawing that aren't actually "there". You can try adding:

plt.close()  # close the current figure
plt.cla()    # clear the current axes
plt.clf()    # clear the current figure

There are still some variables missing in the code you provided (e.g., q_depth and n_qubits), so I can't run it to replicate the behaviour. It's likely that your circuit drawing is accurate, i.e., your code is purposefully putting those extra gates in: with q_depth = 6, as in the demo, the for k in range(q_depth) loop applies one RY layer per variational layer, which accounts for exactly six extra RYs on each qubit. Another way you can check this is by looking at the operations in your circuit. Here's an example of what I mean:

import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit():
    qml.Hadamard(0)
    qml.RX(0.1, 1)
    return qml.state()

circuit()
tape = circuit.tape
operations = tape.operations

for op in operations:
    print(op)

'''
Hadamard(wires=[0])
RX(0.1, wires=[1])
'''

Hope this helps!
