Two devices found in transfer learning example

Hi all,

Right now I am trying to use PennyLane on a different machine with Python 3.9. Here are the specs:

Name: PennyLane
Version: 0.33.1
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
License: Apache License 2.0
Location: /usr/local/lib/python3.9/dist-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning

Platform info:           Linux-5.4.0-166-generic-x86_64-with-glibc2.31
Python version:          3.9.5
Numpy version:           1.26.1
Scipy version:           1.11.3
Installed devices:
- default.gaussian (PennyLane-0.33.1)
- default.mixed (PennyLane-0.33.1)
- default.qubit (PennyLane-0.33.1)
- default.qubit.autograd (PennyLane-0.33.1)
- default.qubit.jax (PennyLane-0.33.1)
- default.qubit.legacy (PennyLane-0.33.1)
- default.qubit.tf (PennyLane-0.33.1)
- default.qubit.torch (PennyLane-0.33.1)
- default.qutrit (PennyLane-0.33.1)
- null.qubit (PennyLane-0.33.1)
- lightning.qubit (PennyLane-Lightning-0.33.1)

To make sure that it was working properly, I tried running the transfer learning example from the demos:

When I run it, I get the following result:

Training started:
Traceback (most recent call last):
  File "/home/justinsinger/", line 553, in <module>
    model_hybrid = train_model(
  File "/home/justinsinger/", line 487, in train_model
    outputs = model(inputs)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torchvision/models/", line 285, in forward
    return self._forward_impl(x)
  File "/usr/local/lib/python3.9/dist-packages/torchvision/models/", line 280, in _forward_impl
    x = self.fc(x)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/justinsinger/", line 384, in forward
    q_out_elem = torch.hstack(quantum_net(elem, self.q_params)).float().unsqueeze(0)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/", line 1027, in __call__
    res = qml.execute(
  File "/usr/local/lib/python3.9/dist-packages/pennylane/interfaces/", line 616, in execute
    results = inner_execute(tapes)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/interfaces/", line 249, in inner_execute
    return cached_device_execution(tapes)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/interfaces/", line 371, in wrapper
    res = list(fn(tuple(execution_tapes.values()), **kwargs))
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/", line 474, in execute
    results = tuple(
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/", line 475, in <genexpr>
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/qubit/", line 269, in simulate
    state, is_state_batched = get_final_state(circuit, debugger=debugger, interface=interface)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/qubit/", line 161, in get_final_state
    state = apply_operation(op, state, is_state_batched=is_state_batched, debugger=debugger)
  File "/usr/lib/python3.9/", line 877, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/qubit/", line 198, in apply_operation
    return _apply_operation_default(op, state, is_state_batched, debugger)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/qubit/", line 208, in _apply_operation_default
    return apply_operation_einsum(op, state, is_state_batched=is_state_batched)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/devices/qubit/", line 100, in apply_operation_einsum
    return math.einsum(einsum_indices, reshaped_mat, state)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/math/", line 539, in einsum
    operands = np.coerce(operands, like=like)
  File "/usr/local/lib/python3.9/dist-packages/autoray/", line 80, in do
    return get_lib_fn(backend, fn)(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/pennylane/math/", line 605, in _coerce_types_torch
    raise RuntimeError(
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0, cpu!

Is this the result of a bug in the latest release of PennyLane, or could there be a problem with my installation in particular?

Hey @DarthMalloc!

There have been a couple of other posts about this same issue:

It looks like this issue comes from putting torch tensors on a GPU while lightning-gpu is also in use. Lightning-gpu and torch's GPU pipeline are entirely different, and lightning-gpu currently expects the data to be on the host, so keeping the tensors that feed the circuit on the CPU should fix it!
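As a minimal sketch of the idea (hypothetical names; a plain function stands in for the demo's `quantum_net` QNode): keep the features on the CPU when calling the circuit, then move the result back to the torch device the rest of the model lives on.

```python
import torch

# Pick the device the classical part of the model runs on.
dev = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

def quantum_net(features):
    # Stand-in for the QNode: the simulator wants host (CPU) tensors.
    return [torch.cos(f) for f in features.cpu()]

inputs = torch.randn(4, device=dev)        # may live on the GPU
q_out = torch.hstack(quantum_net(inputs))  # circuit evaluated with CPU data
result = q_out.float().to(dev)             # move back alongside the model
```

The key lines are the `.cpu()` before the circuit call and the `.to(dev)` afterwards, mirroring the `torch.hstack(quantum_net(elem, self.q_params))` step in the demo's `forward`.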

Thanks for getting back to me so soon! I applied the solution from the second link that you attached, and it worked!


Awesome! Glad I could help :slight_smile: