Hi everyone,
I have a question regarding lightning.gpu when building hybrid models.
I'm using Keras to build a Sequential model with a quantum layer, something like:
import tensorflow as tf
import pennylane as qml

qlayer = qml.qnn.KerasLayer(circuit, weight_shapes, output_dim=16)
clayer = tf.keras.layers.Conv2D(10, 3, strides=2, padding='valid', activation='relu')
flatten = tf.keras.layers.Flatten()
dense = tf.keras.layers.Dense(81)
reshape = tf.keras.layers.Reshape((9, 9, 1))
out = tf.keras.layers.Conv2D(1, 2, strides=1, padding='same', activation='sigmoid')
model = tf.keras.models.Sequential([clayer, flatten, qlayer, dense, reshape, out])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, x_train, epochs=5)
Where circuit is some quantum circuit.
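To make the question self-contained, here is a minimal sketch of how circuit and weight_shapes could look. The wire count, embedding, and ansatz below are placeholders rather than my actual circuit (as the traceback shows, the real quantum layer receives 81 features); the device and diff_method combination is the part that matters:

import pennylane as qml

n_qubits = 16  # placeholder, chosen to match output_dim=16 above
dev = qml.device("lightning.gpu", wires=n_qubits)

@qml.qnode(dev, interface="tf", diff_method="adjoint")  # "parameter-shift" works, "adjoint" does not
def circuit(inputs, weights):
    # placeholder embedding and ansatz, just to make the example runnable
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weight_shapes = {"weights": (3, n_qubits)}  # 3 entangling layers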
When I use adjoint differentiation for the circuit with lightning.qubit, everything works fine.
When I use parameter-shift with lightning.gpu, it works as well.
But when I try to use adjoint differentiation with lightning.gpu, I get the following error:
Error Trace
PLException                               Traceback (most recent call last)
/tmp/ipykernel_176419/2056065315.py in <module>
      1 es = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2, min_delta=0.0001)
----> 2 fitting = model.fit(x_train_small, x_train_small, epochs=20, batch_size=50, steps_per_epoch=50, validation_data=(x_test_small, x_test_small))

~/miniconda3/envs/tfqf/lib/python3.7/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     65     except Exception as e:  # pylint: disable=broad-except
     66       filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67       raise e.with_traceback(filtered_tb) from None
     68     finally:
     69       del filtered_tb

~/miniconda3/envs/tfqf/lib/python3.7/site-packages/pennylane/qnn/keras.py in call(self, inputs)
    300             reconstructor = []
    301             for x in tf.unstack(inputs):
--> 302                 reconstructor.append(self.call(x))
    303             return tf.stack(reconstructor)
    304

~/miniconda3/envs/tfqf/lib/python3.7/site-packages/pennylane/qnn/keras.py in call(self, inputs)
    303             return tf.stack(reconstructor)
    304
--> 305         return self._evaluate_qnode(inputs)
    306
    307     def _evaluate_qnode(self, x):

~/miniconda3/envs/tfqf/lib/python3.7/site-packages/pennylane/qnn/keras.py in _evaluate_qnode(self, x)
    318             **{k: 1.0 * w for k, w in self.qnode_weights.items()},
    319         }
--> 320         return self.qnode(**kwargs)
    321
    322     def compute_output_shape(self, input_shape):

~/miniconda3/envs/tfqf/lib/python3.7/site-packages/pennylane/qnode.py in __call__(self, *args, **kwargs)
    665             gradient_kwargs=self.gradient_kwargs,
    666             override_shots=override_shots,
--> 667             **self.execute_kwargs,
    668         )
    669

~/miniconda3/envs/tfqf/lib/python3.7/site-packages/pennylane/interfaces/execution.py in execute(tapes, device, gradient_fn, interface, mode, gradient_kwargs, cache, cachesize, max_diff, override_shots, expand_fn, max_expansion, device_batch_transform)
    442
    443     res = _execute(
--> 444         tapes, device, execute_fn, gradient_fn, gradient_kwargs, _n=1, max_diff=max_diff, mode=_mode
    445     )
    446

~/miniconda3/envs/tfqf/lib/python3.7/site-packages/pennylane/interfaces/tensorflow.py in execute(tapes, device, execute_fn, gradient_fn, gradient_kwargs, _n, max_diff, mode)
     87     with qml.tape.Unwrap(*tapes):
     88         # Forward pass: execute the tapes
---> 89         res, jacs = execute_fn(tapes, **gradient_kwargs)
     90
     91     for i, tape in enumerate(tapes):

~/miniconda3/envs/tfqf/lib/python3.7/contextlib.py in inner(*args, **kwds)
     72         def inner(*args, **kwds):
     73             with self._recreate_cm():
---> 74                 return func(*args, **kwds)
     75         return inner
     76

~/miniconda3/envs/tfqf/lib/python3.7/site-packages/pennylane/_device.py in execute_and_gradients(self, circuits, method, **kwargs)
    552             # gradient computation (if applicable).
    553             res.append(self.batch_execute([circuit])[0])
--> 554             jacs.append(gradient_method(circuit, **kwargs))
    555
    556         return res, jacs

~/miniconda3/envs/tfqf/lib/python3.7/site-packages/pennylane_lightning_gpu/lightning_gpu.py in adjoint_jacobian(self, tape, starting_state, use_device_state)
    299             tp_shift = [i - 1 for i in tp_shift]
    300
--> 301         jac = adj.adjoint_jacobian(self._gpu_state, obs_serialized, ops_serialized, tp_shift)
    302         jac = np.array(jac)  # only for parameters differentiable with the adjoint method
    303         jac = jac.reshape(-1, len(tp_shift))

PLException: Exception encountered when calling layer "keras_layer" (type KerasLayer).
[/pennylane-lightning-gpu/pennylane_lightning_gpu/src/simulator/StateVectorCudaManaged.hpp][Line:200][Method:StateVectorCudaManaged]: Error in PennyLane Lightning: custatevec not initialized
Call arguments received:
• inputs=tf.Tensor(shape=(50, 81), dtype=float64)
Training the same circuit with adjoint differentiation on lightning.gpu, but without embedding it in a Keras hybrid model, works as well (see the sketch below), so I suppose the problem lies not in the circuit itself but in the Keras integration.
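For reference, the standalone check I mean looks roughly like the following sketch. It reuses the placeholder circuit and weight_shapes from above with dummy data and a stand-in cost, not my actual training code:

import numpy as np
import tensorflow as tf

# Differentiate the QNode directly through TensorFlow, without a KerasLayer.
# This loop runs without errors with diff_method="adjoint" on lightning.gpu.
weights = tf.Variable(np.random.uniform(0, 2 * np.pi, size=weight_shapes["weights"]))
x = tf.constant(np.random.uniform(0, np.pi, size=n_qubits))  # dummy input for the sketch circuit

opt = tf.keras.optimizers.Adam(learning_rate=0.01)
for step in range(50):
    with tf.GradientTape() as tape:
        cost = tf.reduce_sum(circuit(x, weights))  # stand-in cost
    grads = tape.gradient(cost, [weights])
    opt.apply_gradients(zip(grads, [weights]))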
Any ideas would be greatly appreciated!
Greetings
Tom