Two devices found in transfer learning example

Hi all,

Right now I am trying to use PennyLane on a different machine with Python 3.9. Here are the specs:

Name: PennyLane
Version: 0.33.1
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /usr/local/lib/python3.9/dist-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning

Platform info:           Linux-5.4.0-166-generic-x86_64-with-glibc2.31
Python version:          3.9.5
Numpy version:           1.26.1
Scipy version:           1.11.3
Installed devices:
- default.gaussian (PennyLane-0.33.1)
- default.mixed (PennyLane-0.33.1)
- default.qubit (PennyLane-0.33.1)
- default.qubit.autograd (PennyLane-0.33.1)
- default.qubit.jax (PennyLane-0.33.1)
- default.qubit.legacy (PennyLane-0.33.1)
- default.qubit.tf (PennyLane-0.33.1)
- default.qubit.torch (PennyLane-0.33.1)
- default.qutrit (PennyLane-0.33.1)
- null.qubit (PennyLane-0.33.1)
- lightning.qubit (PennyLane-Lightning-0.33.1)

To make sure that it was working properly, I tried running the transfer learning example from the demos: https://pennylane.ai/qml/demos/tutorial_quantum_transfer_learning/

When I run it, I get the following error:

Training started:
Traceback (most recent call last):
  File "/home/justinsinger/tutorial_quantum_transfer_learning.py", line 553, in <module>
    model_hybrid = train_model(
  File "/home/justinsinger/tutorial_quantum_transfer_learning.py", line 487, in train_model
    outputs = model(inputs)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torchvision/models/resnet.py", line 285, in forward
    return self._forward_impl(x)
  File "/usr/local/lib/python3.9/dist-packages/torchvision/models/resnet.py", line 280, in _forward_impl
    x = self.fc(x)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/justinsinger/tutorial_quantum_transfer_learning.py", line 384, in forward
    q_out_elem = torch.hstack(quantum_net(elem, self.q_params)).float().unsqueeze(0)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/qnode.py", line 1027, in __call__
    res = qml.execute(
  File "/usr/local/lib/python3.9/dist-packages/pennylane/interfaces/execution.py", line 616, in execute
    results = inner_execute(tapes)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/interfaces/execution.py", line 249, in inner_execute
    return cached_device_execution(tapes)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/interfaces/execution.py", line 371, in wrapper
    res = list(fn(tuple(execution_tapes.values()), **kwargs))
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/default_qubit.py", line 474, in execute
    results = tuple(
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/default_qubit.py", line 475, in <genexpr>
    simulate(
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/qubit/simulate.py", line 269, in simulate
    state, is_state_batched = get_final_state(circuit, debugger=debugger, interface=interface)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/qubit/simulate.py", line 161, in get_final_state
    state = apply_operation(op, state, is_state_batched=is_state_batched, debugger=debugger)
  File "/usr/lib/python3.9/functools.py", line 877, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/qubit/apply_operation.py", line 198, in apply_operation
    return _apply_operation_default(op, state, is_state_batched, debugger)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/qubit/apply_operation.py", line 208, in _apply_operation_default
    return apply_operation_einsum(op, state, is_state_batched=is_state_batched)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/qubit/apply_operation.py", line 100, in apply_operation_einsum
    return math.einsum(einsum_indices, reshaped_mat, state)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/math/multi_dispatch.py", line 539, in einsum
    operands = np.coerce(operands, like=like)
  File "/usr/local/lib/python3.9/dist-packages/autoray/autoray.py", line 80, in do
    return get_lib_fn(backend, fn)(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/math/single_dispatch.py", line 605, in _coerce_types_torch
    raise RuntimeError(
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0, cpu!

Is this the result of a bug in the latest release of PennyLane, or could there be a problem with my installation in particular?

Hey @justin6626!

There have been a couple of other posts about this same issue:

It looks like this issue might be due to the fact that putting Torch tensors on a GPU is problematic when lightning.gpu is being used as well. Lightning-GPU and Torch's GPU pipeline are entirely different, and Lightning-GPU expects the data to be on the host right now, so keeping your data on the CPU before it enters the QNode should fix it!
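In the demo that would roughly mean sending `elem` and `self.q_params` to the CPU before they hit `quantum_net`, and moving the stacked output back to `device` afterwards. Here's a rough, self-contained sketch of the pattern (the circuit and names below are just placeholders, not the demo's code):

```python
import pennylane as qml
import torch

# Minimal sketch: keep the tensors that enter the QNode on the host (CPU),
# and only move the result back to the GPU for the surrounding classical layers.
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch")
def circuit(x, w):
    qml.RY(x, wires=0)
    qml.RY(w, wires=0)
    return qml.expval(qml.PauliZ(0))

weights = torch.tensor(0.3, requires_grad=True)  # lives on the CPU
inputs = torch.tensor(0.1)

if torch.cuda.is_available():
    inputs = inputs.to("cuda")  # e.g. features coming out of a GPU ResNet

# Passing `inputs` directly would mix cuda:0 and cpu tensors inside the QNode
# and raise the "Expected all tensors to be on the same device" RuntimeError,
# so move it to the host first; .cpu() is differentiable, so gradients still
# flow back through it during training.
result = circuit(inputs.cpu(), weights)

if torch.cuda.is_available():
    result = result.to("cuda")  # hand the result back to the GPU layers

print(result)
```

Alternatively, if you'd rather not touch the forward pass at all, setting `device = torch.device("cpu")` in the demo and keeping the whole hybrid model on the CPU avoids the device mismatch too.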

Thanks for getting back to me so soon! I applied the solution from the second link that you attached, and it worked!


Awesome! Glad I could help 🙂