Hi @Daniel63656,
> Are the changes already incorporated into PennyLane?
The fix should be in the latest release (v0.16.0). You can find it in the second entry under Bug fixes in the release notes.
The three state preparations are very similar indeed.
- `QubitStateVector` might be supported natively on a quantum device, and if there's no need to differentiate it, it's much quicker to simply use that one instead of decomposing it via the Möttönen state preparation.
- `AmplitudeEmbedding` is a template that basically applies a `QubitStateVector` operation after doing some preprocessing, such as padding and normalizing the state.
- `MottonenStatePreparation` prepares a specific state using the method from this paper by Möttönen et al., which usually works when the device in question does not natively support a direct state preparation operation, but it will likely not be as fast. A short usage sketch follows below.
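For concreteness, here is a minimal sketch of how the three are called (assuming a two-qubit `default.qubit` device and the template paths under `qml.templates`; exact import paths can differ between versions). All three QNodes end up preparing the same state up to preprocessing and global phase, but only the first avoids a decomposition:

```python
import pennylane as qml
import numpy as np

state = np.array([1, 0, 0, 1]) / np.sqrt(2)  # two-qubit state to prepare
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def direct_prep():
    # Direct state preparation; fast if the device supports it natively
    qml.QubitStateVector(state, wires=[0, 1])
    return qml.state()

@qml.qnode(dev)
def embedded_prep():
    # Pads/normalizes the features, then applies a QubitStateVector
    qml.templates.AmplitudeEmbedding(state, wires=[0, 1], normalize=True)
    return qml.state()

@qml.qnode(dev)
def mottonen_prep():
    # Decomposes the preparation into elementary rotations and CNOTs
    qml.templates.MottonenStatePreparation(state, wires=[0, 1])
    return qml.state()
```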
> I am also pretty confused, because here (Differentiation with AmplitudeEmbedding) the same problem seemingly got solved by making `inputs` a keyword argument, which doesn't work for me at all.
The syntax for marking inputs as differentiable (or not) has changed: it should now be done with a `requires_grad` flag when the input is declared, for NumPy and Torch, or by declaring the input as a `tf.constant` for TensorFlow. You can read more about this on the interfaces page in the documentation.
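As a minimal sketch of the NumPy interface (the `circuit` QNode and its argument names here are just illustrative), the embedded features are flagged as non-trainable while the weights stay trainable:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(weights, inputs):
    # inputs is embedded into the state but not differentiated
    qml.templates.AmplitudeEmbedding(inputs, wires=[0, 1], normalize=True)
    qml.RY(weights[0], wires=0)
    return qml.expval(qml.PauliZ(0))

weights = np.array([0.1], requires_grad=True)                  # trainable
inputs = np.array([0.5, 0.5, 0.5, 0.5], requires_grad=False)   # not trainable

# qml.grad only differentiates arguments flagged with requires_grad=True
print(qml.grad(circuit)(weights, inputs))
```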
I hope this clears some things up!