Hi!

I am working on a quantum neural network using PennyLane with the PyTorch interface.

My training data is plain classical MNIST, which I downscale to 4x4. My idea was to use AmplitudeEmbedding to encode the 16 features into a 4-qubit system and implement the rest of the network on top of that.
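For context, amplitude encoding maps the 16 pixel values onto the 2^4 = 16 amplitudes of a 4-qubit state, which only requires normalizing the feature vector to unit norm. A minimal NumPy sketch (the 4x4 image here is random placeholder data, not MNIST):

```
import numpy as np

img = np.random.rand(4, 4)                    # placeholder for a downscaled MNIST image
features = img.flatten()                      # 16 features = 2**4 amplitudes
state = features / np.linalg.norm(features)   # amplitudes must form a unit vector

print(state.shape)        # (16,)
print(np.sum(state**2))   # ~1.0
```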

However, AmplitudeEmbedding (along with QubitStateVector) is non-differentiable, and I am stuck here. What is causing this problem, and how can I solve it?

The thing is, AngleEmbedding works fine, but that way I embed my features into a 16-qubit system and it takes forever to optimize.
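The qubit-count gap follows directly from the two encodings: AngleEmbedding uses one qubit per feature, while AmplitudeEmbedding needs only log2(N) qubits for N features:

```
import math

n_features = 16                        # 4x4 downscaled image
angle_qubits = n_features              # AngleEmbedding: one qubit per feature
amp_qubits = int(math.log2(n_features))  # AmplitudeEmbedding: log2(N) qubits
print(angle_qubits, amp_qubits)        # 16 4
```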

One quick note: I do not actually want the AmplitudeEmbedding layer to be optimized, since it only helps create the quantum input. Therefore, I do not actually need this node in my autograd graph.
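Since the embedding is pure data loading, one workaround (a sketch in plain PyTorch, independent of any specific PennyLane version) is to normalize and then detach the features before they reach the circuit, so autograd never tries to differentiate through the state preparation:

```
import torch

features = torch.rand(16, requires_grad=True)    # hypothetical raw input features
state = features / torch.linalg.norm(features)   # unit norm for amplitude encoding

# detach() cuts the tensor out of the autograd graph entirely
embed_input = state.detach()
print(embed_input.requires_grad)   # False
```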

Error message:

```
Cannot differentiate with respect to argument(s) {'inputs[12]', 'inputs[0]', 'inputs[11]', 'inputs[2]', 'inputs[4]', 'inputs[5]', 'inputs[7]', 'inputs[3]', 'inputs[6]', 'inputs[9]', 'inputs[13]', 'inputs[10]', 'inputs[15]', 'inputs[14]', 'inputs[1]', 'inputs[8]'}
```

Code:

```
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        @qml.qnode(dev)
        def my_circuit(inputs, weights):
            self.embed(inputs)
            for i in range(n_qubits):
                ...  # implementation of the circuit
            return (qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1)),
                    qml.expval(qml.PauliZ(2)), qml.expval(qml.PauliZ(3)))

    @qml.template
    def embed(self, inputs):
        qml.QubitStateVector(inputs, wires=range(n_qubits))
```