Hybrid Network not differentiating

Hi it’s me again,

This topic went a bit out of focus because I wasn't getting good training results with amplitude encoding (using transfer learning). Now that I've found a working approach, the question of differentiability of amplitude state preparation is back on the table.
Let me sum up the options we considered, so maybe we can just turn this into an open issue?

-MottonenStatePreparation:

The git repository for the cast fix isn't available anymore, but the suggested fix in the file works. Have those changes already been incorporated into PennyLane?
With that fixed, the values immediately become NaN.
From docs: “Due to non-trivial classical processing of the state vector, this template is not always fully differentiable.”
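For context, the NaNs are easy to reproduce outside PennyLane: the Möttönen angle computation divides each sub-block norm by its parent-block norm, so any all-zero amplitude branch produces 0/0. A rough numpy sketch of that recursion (the function name and structure are my own simplification, not PennyLane's actual code):

```python
import numpy as np

def mottonen_ry_angles(a):
    """Recursively compute the R_y angles of the Mottonen scheme.

    Each split divides a sub-block norm by its parent norm; a zero
    parent norm gives 0/0 = NaN, which then poisons every gradient.
    """
    angles = []
    n = len(a)
    if n == 1:
        return angles
    half = n // 2
    total = np.linalg.norm(a)
    lower = np.linalg.norm(a[half:])
    with np.errstate(invalid="ignore"):  # let 0/0 yield NaN, as autograd would
        angles.append(2 * np.arcsin(lower / total))
    angles += mottonen_ry_angles(a[:half])
    angles += mottonen_ry_angles(a[half:])
    return angles
```

A dense state like `[0.5, 0.5, 0.5, 0.5]` gives only finite angles, while a sparse state like `[1, 0, 0, 0]` hits a zero-norm branch and yields NaN, which matches the "values become NaN directly" behavior above.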

-qml.QubitStateVector: decomposes into the Möttönen method -> same problem

-qml.templates.AmplitudeEmbedding
I didn't try that one because the docs clearly state it is not differentiable. What is the difference between these three methods anyway?

-My own embedding attempt based on https://www.nature.com/articles/s41598-021-85474-1.pdf?origin=ppub
Pre-processing is involved (findAngles), but it can be made differentiable with PyTorch. The problem is the multi-controlled R_y rotations, currently implemented via qml.ControlledQubitUnitary (not differentiable!).
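For what it's worth, the multi-controlled R_y could be replaced by the uniformly controlled rotation decomposition from the Möttönen paper: it only needs plain R_y gates and CNOTs (both differentiable), plus a fixed linear map on the angles, so gradients pass straight through. A sketch of that angle map, assuming k control qubits and 2^k target angles (function names are mine, indexing conventions may differ from the paper):

```python
import numpy as np

def gray_code(i):
    """Index i in the standard binary-reflected Gray code."""
    return i ^ (i >> 1)

def ucr_angles(alpha):
    """Map the 2^k angles of a uniformly controlled R_y (one angle per
    control pattern) to the angles of the equivalent R_y/CNOT ladder.

    The map is a constant matrix, so differentiability is preserved.
    """
    n = len(alpha)  # must be a power of two
    M = np.array(
        [[(-1) ** bin(gray_code(i) & j).count("1") for j in range(n)]
         for i in range(n)]
    ) / n
    return M @ np.asarray(alpha, dtype=float)
```

For a single control qubit this reproduces the textbook decomposition of a controlled R_y: `ucr_angles([a, b])` gives `[(a + b) / 2, (a - b) / 2]`, i.e. an R_y((a+b)/2), a CNOT, an R_y((a-b)/2), and a closing CNOT.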

I am also pretty confused, because in the thread Differentiation with AmplitudeEmbedding the same problem was apparently solved by making inputs a keyword argument, which doesn't work for me at all (in fact, leaving inputs as a positional argument works completely fine with autograd as long as I'm not using amplitude encoding).

I also looked into Qiskit's initializer class
https://qiskit.org/documentation/_modules/qiskit/extensions/quantum_initializer/initializer.html

but I'm not sure whether this is a genuinely different method, or whether it would be differentiable if implemented in PennyLane:
“Note that Initialize is an Instruction and not a Gate since it contains a reset instruction, which is not unitary.”