Quantum Transfer Learning

Hi! I am building a model with 5 different classes of data, using the same code given in the PennyLane notebook, but it does not seem to work correctly for me.

In this notebook only 2 classes of data are used: ants and bees.

But I want to apply it to 5 classes of data. Do I need to make some modifications to my code? You can check my code at the link below:
Here is my code:

Hi @M_Umer_Yasin — Welcome to the forum! Certainly, if you need to classify objects into more bins you will have to make at least some modifications to the tutorial.

Can you send me something from which I can get an idea?

Hi @M_Umer_Yasin,

There are a number of ways you could do multi-class classification. One approach, taken in the Multiclass margin classifier demo (it is not the only way), is to use multiple binary classifiers, where the binary choice to be made is "is it in class X or not in class X?"
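To make the one-vs-rest idea concrete, here is a minimal sketch (not the demo's actual code): assuming you have already trained one binary classifier per class, you combine their scores by predicting the class whose classifier is most confident:

```python
import numpy as np

def one_vs_rest_predict(scores):
    """scores: (n_samples, n_classes) array of per-class binary-classifier
    outputs; the prediction is the class with the highest score."""
    return np.argmax(scores, axis=1)

# Example: 2 samples, 3 hypothetical binary classifiers (one per class)
scores = np.array([[0.1, 0.9, 0.2],
                   [0.8, 0.3, 0.4]])
print(one_vs_rest_predict(scores))  # → [1 0]
```

The training of each binary classifier can follow the demo as-is; only this combination step is new.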

Hello @M_Umer_Yasin, the final layer must be set to your number of classes, with the correct directory structure. For example, with 44 classes: self.post_net = nn.Linear(n_qubits, 44). You will get the best results if n_qubits = 2 * classes. GitHub: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/PennyLane/Quantum%20TL%20vs.%203%20Models/QTL%20FE%2044%20Class%2064.0%25%20kkawchak.ipynb
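For the original question of 5 classes, the same change looks like this (a sketch of just the final classical layer, with n_qubits = 4 as in the demo; the heuristic above would instead suggest 10 qubits):

```python
import torch
import torch.nn as nn

n_qubits = 4      # number of qubits in the quantum layer, as in the demo
num_classes = 5   # your number of classes

# Replace the demo's nn.Linear(n_qubits, 2) so the network outputs one
# logit per class; nn.CrossEntropyLoss then handles the multi-class case.
post_net = nn.Linear(n_qubits, num_classes)
logits = post_net(torch.zeros(1, n_qubits))
print(logits.shape)  # torch.Size([1, 5])
```

Remember that the dataset directories (one folder per class) must match num_classes as well.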

For the QTL demo, I am now receiving a GPU error using the original notebook in Colab:

Training started:

RuntimeError                              Traceback (most recent call last)
in <cell line: 1>()
----> 1 model_hybrid = train_model(
      2     model_hybrid, criterion, optimizer_hybrid, exp_lr_scheduler, num_epochs=num_epochs
      3 )

22 frames
/usr/local/lib/python3.10/dist-packages/pennylane/math/single_dispatch.py in _coerce_types_torch(tensors)
    603     # GPU specific case
    604     device_names = ", ".join(str(d) for d in device_set)
--> 605     raise RuntimeError(
    606         f"Expected all tensors to be on the same device, but found at least two devices, {device_names}!"
    607     )

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0, cpu!

Thank you.

Hey @kevinkawchak,

It looks like somewhere along the line you’re creating a torch tensor on a cpu device instead of a gpu device (or the other way around). You’ll need to do something like this:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
input_tensor = input_tensor.to(device)
output_tensor = output_tensor.to(device)

or you can set the default device like this (see torch.set_default_device — PyTorch 2.1 documentation):

>>> torch.tensor([1.2, 3]).device
device(type='cpu')
>>> torch.set_default_device('cuda')  # current device is 0
>>> torch.tensor([1.2, 3]).device
device(type='cuda', index=0)
>>> torch.set_default_device('cuda:1')
>>> torch.tensor([1.2, 3]).device
device(type='cuda', index=1)

Let me know if this helps!


Hello, I tried both methods. Could you please try running the QTL model in Colab with a GPU?

Hey @kevinkawchak,

I spoke with someone internally, and it looks like this issue is due to the fact that putting the torch tensor on a GPU is problematic when lightning-gpu is being used as well. Lightning-gpu and torch's GPU pipeline are entirely different, and lightning-gpu currently expects the data to be on the host, so keeping your torch tensors on the CPU should fix it!
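In other words (a torch-only sketch with hypothetical layer names; lightning.gpu itself is not exercised here): when the QNode runs on lightning.gpu, leave the classical tensors on the host rather than calling .to("cuda") on them — the quantum device manages its own GPU memory internally.

```python
import torch
import torch.nn as nn

# Classical pre-processing stays on the CPU ("host"); do NOT move it to cuda.
pre_net = nn.Linear(512, 4)           # hypothetical pre-net from the demo
features = torch.randn(1, 512)        # input batch also stays on the host

q_in = torch.tanh(pre_net(features))  # fed to the lightning.gpu QNode from host memory
print(q_in.device)                    # device(type='cpu')
```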