I’m encountering a runtime error while trying to integrate PennyLane with a PyTorch model for a hybrid quantum-classical neural network project. The error occurs during the forward pass of my model, specifically when executing a quantum circuit defined with PennyLane and integrating its output with PyTorch tensors.
Here’s the error message:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu, cuda:0!
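For context, this appears to be the generic PyTorch error for combining tensors that live on different devices; a two-line snippet reproduces the same message (assuming a CUDA device is available):
import torch
a = torch.zeros(2, device="cuda:0")  # GPU tensor
b = torch.zeros(2)                   # CPU tensor
torch.cat((a, b))                    # raises: Expected all tensors to be on the same device ...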
This error arises when I attempt to pass tensors from my PyTorch model, which is on a GPU, to a PennyLane quantum node, which operates on the CPU. I’ve made sure to move all relevant tensors to the CPU before executing the quantum node and back to the GPU afterwards, yet the error persists.
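Roughly, the round-trip I attempted inside the model's forward pass looks like this (a sketch of what I tried; the simplified listing below omits it):
for elem in q_in:
    # run the QNode on CPU inputs, then move the result back to the GPU
    q_out_elem = q_net(elem.cpu(), self.q_params.cpu()).float().unsqueeze(0).to(device)
    q_out = torch.cat((q_out, q_out_elem))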
Below is a simplified version of the code that leads to this error:
import torch
import pennylane as qml
from torch import nn
from pennylane import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from transformers import AutoModel, BertTokenizerFast

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
#%% md
## Load Dataset
There are two columns in the dataset: label and text. The column "text" holds the message content, whereas "label" is a binary variable where 1 indicates that the message is spam and 0 indicates that it is not spam.
#%%
df = pd.read_csv("data/spamdata_v2.csv")
df.head()
#%%
df.shape
#%%
# check class distribution
df['label'].value_counts(normalize = True)
#%% md
This dataset will now be divided into three sets: train, validation, and test.
We will fine-tune the model using the train set and the validation set, and make predictions for the test set.
#%%
train_text, temp_text, train_labels, temp_labels = train_test_split(
    df['text'], df['label'],
    random_state=2018,
    test_size=0.3,
    stratify=df['label'])

# we will use temp_text and temp_labels to create the validation and test sets
val_text, test_text, val_labels, test_labels = train_test_split(
    temp_text, temp_labels,
    random_state=2018,
    test_size=0.5,
    stratify=temp_labels)
#%% md
## Import BERT Model and BERT Tokenizer
#%%
# import BERT-base pretrained model
bert = AutoModel.from_pretrained('bert-base-uncased')
# Load the BERT tokenizer
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
#%% md
## Tokenize the Sentences
Since the messages (text) in the dataset are of variable lengths, we will employ padding so that every message has the same length. We pad to a fixed maximum sequence length; to choose a sensible value, we first examine the distribution of sequence lengths in the train set.
#%%
# get length of all the messages in the train set
seq_len = [len(i.split()) for i in train_text]
pd.Series(seq_len).hist(bins = 30)
#%% md
It is evident that the majority of the messages contain 25 words or fewer, whereas the maximum length is 175. If we choose 175 as the padding length, then all input sequences will have a length of 175 and the majority of tokens in those sequences will be padding tokens, which will not help the model learn anything useful and will also slow down training.
We will therefore set the padding length to 25.
#%%
max_seq_len = 25
#%%
# tokenize and encode sequences in the training set
tokens_train = tokenizer.batch_encode_plus(
    train_text.tolist(),
    max_length=max_seq_len,
    padding='max_length',
    truncation=True,
    return_token_type_ids=False
)
# tokenize and encode sequences in the validation set
tokens_val = tokenizer.batch_encode_plus(
    val_text.tolist(),
    max_length=max_seq_len,
    padding='max_length',
    truncation=True,
    return_token_type_ids=False
)
# tokenize and encode sequences in the test set
tokens_test = tokenizer.batch_encode_plus(
    test_text.tolist(),
    max_length=max_seq_len,
    padding='max_length',
    truncation=True,
    return_token_type_ids=False
)
#%% md
## Convert the Integer Sequences to Tensors
#%%
# for train set
train_seq = torch.tensor(tokens_train['input_ids'])
train_mask = torch.tensor(tokens_train['attention_mask'])
train_y = torch.tensor(train_labels.tolist())
# for validation set
val_seq = torch.tensor(tokens_val['input_ids'])
val_mask = torch.tensor(tokens_val['attention_mask'])
val_y = torch.tensor(val_labels.tolist())
# for test set
test_seq = torch.tensor(tokens_test['input_ids'])
test_mask = torch.tensor(tokens_test['attention_mask'])
test_y = torch.tensor(test_labels.tolist())
#%% md
## Create DataLoaders
Now, dataloaders will be created for both the train and validation sets. During the training phase, these dataloaders will send batches of train data and validation data to the model as input.
#%%
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
#define a batch size
batch_size = 32
# wrap tensors
train_data = TensorDataset(train_seq, train_mask, train_y)
# sampler for sampling the data during training
train_sampler = RandomSampler(train_data)
# dataLoader for train set
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
# wrap tensors
val_data = TensorDataset(val_seq, val_mask, val_y)
# sampler for sampling the data during training
val_sampler = SequentialSampler(val_data)
# dataLoader for validation set
val_dataloader = DataLoader(val_data, sampler = val_sampler, batch_size=batch_size)
#%% md
## Freeze BERT Parameters
We freeze all the layers of the BERT model and attach a few neural network layers of our own and train this new model. Note that the weights of only the attached layers will be updated during model training.
#%%
# freeze all the parameters
for param in bert.parameters():
    param.requires_grad = False
#%% md
## Define Model Architecture
Hybrid transfer learning model (classical-to-quantum). Here we set the main parameters of the network and of the training process.
#%%
n_qubits = 5 # Number of qubits
q_depth = 4 # Depth of the quantum circuit (number of variational layers)
max_layers = 15 # Keep 15 even if not all are used.
q_delta = 0.01 # Initial spread of random quantum weights
#%% md
Let us initialize a PennyLane device with the default simulator.
#%%
dev = qml.device('default.qubit', wires=n_qubits)
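The circuit uses three layer helpers that I omitted from the simplified listing; they follow the PennyLane quantum transfer learning tutorial:
def H_layer(nqubits):
    # layer of single-qubit Hadamard gates
    for idx in range(nqubits):
        qml.Hadamard(wires=idx)

def RY_layer(w):
    # layer of parametrized y-rotations, one per qubit
    for idx, element in enumerate(w):
        qml.RY(element, wires=idx)

def entangling_layer(nqubits):
    # layer of CNOTs on even pairs, then on odd pairs
    for i in range(0, nqubits - 1, 2):
        qml.CNOT(wires=[i, i + 1])
    for i in range(1, nqubits - 1, 2):
        qml.CNOT(wires=[i, i + 1])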
class Quantumnet(nn.Module):
    def __init__(self, bert):
        super(Quantumnet, self).__init__()
        self.bert = bert
        self.pre_net = nn.Linear(768, n_qubits)
        self.q_params = nn.Parameter(q_delta * torch.randn(max_layers * n_qubits))
        self.post_net = nn.Linear(n_qubits, 2)

    def forward(self, sent_id, mask):
        _, cls_hs = self.bert(sent_id, attention_mask=mask, return_dict=False)
        pre_out = self.pre_net(cls_hs)
        q_in = torch.tanh(pre_out) * np.pi / 2.0
        # Apply the quantum circuit to each element of the batch and append to q_out
        q_out = torch.Tensor(0, n_qubits)
        q_out = q_out.to(device)
        for elem in q_in:
            q_out_elem = q_net(elem, self.q_params).float().unsqueeze(0)
            q_out = torch.cat((q_out, q_out_elem))
        return self.post_net(q_out)
@qml.qnode(dev, interface='torch')
def q_net(q_in, q_weights_flat):
    # Reshape weights
    q_weights = q_weights_flat.reshape(max_layers, n_qubits)
    # Start from state |+>, unbiased w.r.t. |0> and |1>
    H_layer(n_qubits)
    # Embed features in the quantum node
    RY_layer(q_in)
    # Sequence of trainable variational layers
    for k in range(q_depth):
        entangling_layer(n_qubits)
        RY_layer(q_weights[k + 1])
    # Expectation values in the Z basis
    return [qml.expval(qml.PauliZ(j)) for j in range(n_qubits)]
# instantiate the model and push it to the GPU
model = Quantumnet(bert).to(device)
from torch.optim import AdamW

# define the optimizer
optimizer = AdamW(model.parameters(), lr=1e-3)
#%% md
## Find Class Weights
Our dataset has an imbalance between classes. The vast majority of messages are not spam. Therefore, we will first calculate class weights for the labels in the train set and then send these weights to the loss function so that the class imbalance is taken care of.
#%%
from sklearn.utils.class_weight import compute_class_weight
# compute the class weights
class_wts = compute_class_weight(class_weight="balanced",
                                 classes=np.unique(train_labels),
                                 y=train_labels)
print(class_wts)
#%%
# convert class weights to tensor
weights = torch.tensor(class_wts, dtype=torch.float)
weights = weights.to(device)
# loss function
cross_entropy = nn.CrossEntropyLoss(weight=weights)
# number of training epochs
epochs = 10
#%% md
## Fine-Tune BERT
So far, we have described the model's architecture, specified the optimizer and loss function, and prepared the dataloaders. Now we must define a couple of functions to train (fine-tune) and evaluate the model, respectively.
#%%
# function to train the model
def train():
    model.train()
    total_loss, total_accuracy = 0, 0
    # empty list to save model predictions
    total_preds = []
    # iterate over batches
    for step, batch in enumerate(train_dataloader):
        # progress update after every 50 batches
        if step % 50 == 0 and not step == 0:
            print('  Batch {:>5,}  of  {:>5,}.'.format(step, len(train_dataloader)))
        # push the batch to gpu
        batch = [r.to(device) for r in batch]
        sent_id, mask, labels = batch
        # clear previously calculated gradients
        model.zero_grad()
        # get model predictions for the current batch
        preds = model(sent_id, mask)
        # compute the loss between actual and predicted values
        loss = cross_entropy(preds, labels)
        # add on to the total loss
        total_loss = total_loss + loss.item()
        # backward pass to calculate the gradients
        loss.backward()
        # clip the gradients to 1.0; helps prevent the exploding gradient problem
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        # update parameters
        optimizer.step()
        # model predictions are stored on GPU, so push them to CPU
        preds = preds.detach().cpu().numpy()
        # append the model predictions
        total_preds.append(preds)
    # compute the training loss of the epoch
    avg_loss = total_loss / len(train_dataloader)
    # predictions are in the form (no. of batches, batch size, no. of classes);
    # reshape them to (number of samples, no. of classes)
    total_preds = np.concatenate(total_preds, axis=0)
    # return the loss and predictions
    return avg_loss, total_preds
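#%% md
The evaluation function below prints progress using a `format_time` helper that was not shown earlier; a minimal definition (an assumption on my part, in the style of common BERT fine-tuning tutorials):
#%%
import time
import datetime

def format_time(elapsed):
    # round to the nearest second and format as hh:mm:ss
    return str(datetime.timedelta(seconds=int(round(elapsed))))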
#%%
# function for evaluating the model
def evaluate():
    print("\nEvaluating...")
    # deactivate dropout layers
    model.eval()
    t0 = time.time()
    total_loss, total_accuracy = 0, 0
    # empty list to save the model predictions
    total_preds = []
    # iterate over batches
    for step, batch in enumerate(val_dataloader):
        # progress update every 50 batches
        if step % 50 == 0 and not step == 0:
            # calculate elapsed time
            elapsed = format_time(time.time() - t0)
            # report progress
            print('  Batch {:>5,}  of  {:>5,}. Elapsed: {:}.'.format(step, len(val_dataloader), elapsed))
        # push the batch to gpu
        batch = [t.to(device) for t in batch]
        sent_id, mask, labels = batch
        # deactivate autograd
        with torch.no_grad():
            # model predictions
            preds = model(sent_id, mask)
            # compute the validation loss between actual and predicted values
            loss = cross_entropy(preds, labels)
            total_loss = total_loss + loss.item()
            preds = preds.detach().cpu().numpy()
            total_preds.append(preds)
    # compute the validation loss of the epoch
    avg_loss = total_loss / len(val_dataloader)
    # reshape the predictions to (number of samples, no. of classes)
    total_preds = np.concatenate(total_preds, axis=0)
    return avg_loss, total_preds
#%% md
## Start Model Training
#%%
# set initial loss to infinite
best_valid_loss = float('inf')

# empty lists to store training and validation loss of each epoch
train_losses = []
valid_losses = []

# for each epoch
for epoch in range(epochs):
    print('\n Epoch {:} / {:}'.format(epoch + 1, epochs))
    # train model
    train_loss, _ = train()
    # evaluate model
    valid_loss, _ = evaluate()
    # save the best model
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'saved_weights.pt')
    # append training and validation loss
    train_losses.append(train_loss)
    valid_losses.append(valid_loss)
    print(f'\nTraining Loss: {train_loss:.3f}')
    print(f'Validation Loss: {valid_loss:.3f}')
As far as I can tell, I've moved the relevant tensors to the CPU before calling the quantum node and back to the GPU afterwards, yet the error message suggests there is still a device mismatch somewhere in the forward pass.
Could you please help me understand what might be causing this issue and how to resolve it? Am I missing a step in correctly handling device allocation between PennyLane and PyTorch tensors?
Thank you for your assistance!
Environment: PyTorch 2.2.0, PennyLane 0.35.1, PennyLane-Lightning 0.35.1, NVIDIA GeForce RTX 4060 (CUDA).