Hi Maria @Maria_Schuld,
Here is my question: where does the nonlinearity come from in a QNN? My understanding is that:
- The feature embedding layer introduces nonlinearity.
- Unentangled parametrized gates do not introduce any nonlinearity. Is this right?
- Entanglement is nonlinear. But can it involve strong nonlinearity? Say, by simply stacking several StronglyEntanglingLayers, is it possible to mimic a strongly nonlinear decision boundary in classification?
- Measurements introduce nonlinearity. I see this in papers, but I don't understand the logic.
I would really appreciate it if you could briefly explain, or guide me to some references. Thanks in advance!
Hey @zhouyf10, and thanks for migrating the question to a new thread!
I assume you mean nonlinearity of the QML model (say, the expectation of a variational circuit + measurement) in the data, right?
Let’s be precise and go through each step. If you embed inputs, creating a quantum state |\phi(x) \rangle, then the amplitudes \phi_i = \langle i | \phi(x)\rangle are in general nonlinear in x, yes. The obvious exception would be AmplitudeEmbedding, which is designed so that the amplitudes are the inputs.
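To make this concrete, here is a minimal PennyLane sketch contrasting the two cases (the specific gates, device, and input values are just illustrative choices):

```python
import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def angle_state(x):
    # amplitudes are [cos(x/2), -i sin(x/2)]: nonlinear in x
    qml.RX(x, wires=0)
    return qml.state()

@qml.qnode(dev)
def amplitude_state(x):
    # amplitudes are the inputs themselves: linear in x
    # (the normalization step is itself nonlinear unless x is already normalized)
    qml.AmplitudeEmbedding(x, wires=0, normalize=True)
    return qml.state()

print(angle_state(0.5))                       # trigonometric functions of x
print(amplitude_state(np.array([0.6, 0.8])))  # just the inputs back
```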
If you then do some more unitary evolution, for example by applying a parametrized circuit U(\theta), the amplitudes of the new state |\phi'(x) \rangle = U(\theta)| \phi(x) \rangle will be linear in the amplitudes of |\phi(x) \rangle (that’s trivial, since U is a linear operation on the state vector). So any standard computation after the embedding does not add nonlinearity, which means that embedded inputs have the same distance to each other before and after U is applied. Whether there is entanglement or not does not play a role at all.
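You can check the distance claim numerically; a quick sketch, where the AngleEmbedding, the StronglyEntanglingLayers block, and the sample inputs are arbitrary choices on my part:

```python
import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def embedded_state(x, theta, apply_unitary):
    qml.AngleEmbedding([x, x], wires=[0, 1])
    if apply_unitary:
        qml.StronglyEntanglingLayers(theta, wires=[0, 1])
    return qml.state()

theta = np.random.uniform(0, 2 * np.pi, size=(2, 2, 3))  # 2 layers, 2 wires

# overlap between two embedded inputs, before and after the unitary
before = np.vdot(embedded_state(0.1, theta, False), embedded_state(0.9, theta, False))
after = np.vdot(embedded_state(0.1, theta, True), embedded_state(0.9, theta, True))
print(abs(before), abs(after))  # identical: U cannot change the distance
```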
Finally, the measurement will again introduce a “slight” (quadratic) nonlinearity in the amplitudes of the final state: \langle \phi'(x)| M|\phi'(x) \rangle = \sum_{ij} M_{ij} \overline{\phi'_i(x)} \phi'_j(x).
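On a single qubit you can see this quadratic structure explicitly; a minimal sketch (RY and PauliZ are just one convenient choice):

```python
import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def model(x):
    qml.RY(x, wires=0)
    return qml.expval(qml.PauliZ(0))

# RY(x)|0> has amplitudes [cos(x/2), sin(x/2)], so
# <Z> = cos(x/2)^2 - sin(x/2)^2 = cos(x): quadratic in the amplitudes
x = 0.7
print(model(x), np.cos(x))  # both print cos(0.7)
```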
In this sense one can say that the source of nonlinearity with regard to the inputs is only the embedding and the measurement. But obviously a parametrized gate like a Pauli rotation produces a state which is nonlinear in the parameters. So a sentence like “xyz is (non)linear” only makes sense if we state what object is (non)linear in what variable…
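For example (just a quick illustration):

```python
import pennylane as qml

# RY(theta) has entries cos(theta/2) and sin(theta/2): nonlinear in theta,
# even though the gate acts linearly on the state vector
print(qml.matrix(qml.RY(0.3, wires=0)))
```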
Hope this helps?
Thanks Maria @Maria_Schuld! That really helps. A follow-up question: does the nonlinearity in the parameters matter for ML? Since ML looks for a mapping from input features x to outputs y, using of course parametrized blocks (neurons or gates), I feel that only the nonlinearity in x contributes. Please correct me if I am wrong. Thank you.
Yes, the nonlinearity in the parameters affects the training landscape more than it affects the class of models one can learn…
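As a toy illustration (the one-parameter circuit below is my own minimal example, not a realistic model), even a single Pauli rotation gives a sinusoidal, non-convex cost landscape:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(theta):
    qml.RY(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

# the cost is cos(theta): periodic and non-convex in the parameter --
# this is the structure the optimizer has to navigate during training
thetas = np.linspace(0, 2 * np.pi, 9)
print([round(float(cost(t)), 3) for t in thetas])
```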