Bug when using Colab GPU

model_hybrid = train_model(

    model_hybrid, criterion, optimizer_hybrid, exp_lr_scheduler, num_epochs=num_epochs

)

Output:

Training started:
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-20-bbd0ce1e21df> in <module>()
      1 model_hybrid = train_model(
----> 2     model_hybrid, criterion, optimizer_hybrid, exp_lr_scheduler, num_epochs=num_epochs
      3 )

2 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     98     Variable._execution_engine.run_backward(
     99         tensors, grad_tensors, retain_graph, create_graph,
--> 100         allow_unreachable=True)  # allow_unreachable flag
    101 
    102 

RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mat2' in call to _th_mm

Hi @Neel_Gandhi and welcome to the forum!

Please include a minimal working example of the code you are using so we can best help.

However, from a quick look at what you’ve posted, it looks like you are using the Torch interface and trying to run on GPUs. Running on GPUs is not fully supported in PennyLane; you can check out previous posts on this topic, for example:

For now, I’d recommend running on CPU.
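The error itself comes from a matrix multiplication where one operand lives on the GPU and the other on the CPU. A minimal sketch of the CPU-only workaround, assuming a plain PyTorch model (the `Linear` layer here is a hypothetical stand-in for your `model_hybrid`, since we don’t have your full code):

```python
import torch

# Keep the model and every tensor on the same device (CPU) so no
# cuda/cpu mismatch can occur during the backward pass.
device = torch.device("cpu")

model = torch.nn.Linear(4, 2).to(device)   # stand-in for model_hybrid
inputs = torch.randn(8, 4, device=device)  # stand-in for a training batch

outputs = model(inputs)  # both matmul operands are on the CPU
loss = outputs.sum()
loss.backward()          # backward pass succeeds on CPU
```

The key point is that `model.to(device)` and the `device=` argument (or `tensor.to(device)`) must agree for every tensor that participates in the forward pass; the RuntimeError you saw is exactly what PyTorch raises when they don’t.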

Thanks!