Hello,

Here is an example: Quantum transfer learning

Why is this operation done? The tutorial says "A constant `np.pi/2.0` scaling". What is special about this?

Thank you

Hey @wing_chen,

There’s a factor of \pi / 2 being multiplied by the output of `torch.tanh` (range of -1 to 1) in the `forward` function. This variable, `q_in`, then goes into `quantum_net`, where it gets used as the angle of rotation for a `qml.RY` gate. So, the angles that `qml.RY` will receive lie in [-\pi/2, \pi/2], which covers the range of Pauli Y rotations. It’s a similar thing to why, when you integrate over something in spherical coordinates, the domain of the polar angle that you integrate over is [0, \pi] (equivalent, after a shift, to [-\pi / 2, \pi / 2]).
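A quick NumPy sketch of just the scaling step (function name is illustrative, not from the tutorial): `tanh` squashes any pre-activation into (-1, 1), and the \pi/2 factor stretches that into the (-\pi/2, \pi/2) angle range:

```python
import numpy as np

def scale_to_ry_angles(x):
    # Squash arbitrary pre-activations into (-1, 1) with tanh,
    # then scale by pi/2 so the resulting RY angles lie in (-pi/2, pi/2).
    return np.tanh(x) * np.pi / 2.0

pre_activations = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
angles = scale_to_ry_angles(pre_activations)
# every angle stays strictly within (-pi/2, pi/2), however large the input
```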

Hope that helps!

Thank you. I have another question.

If I am making a quantum convolution, and the output is `qml.sample` or `qml.expval`, should I still use `np.pi/2`?

Hey @wing_chen,

I’m not sure I understand the question exactly. But if your quantum neural network has `RY` gates and you want to ensure that they receive angles that cover the full range, then the same interval can be used.
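A small NumPy sketch of the idea (simulating a single qubit by hand rather than through PennyLane; helper names are illustrative): the choice of output, samples or expectation values, doesn’t change the encoding step. The scaled angle simply parameterizes the `RY` gate, and the measured ⟨Z⟩ then depends on it as cos(θ):

```python
import numpy as np

def ry(theta):
    # Matrix of the Pauli-Y rotation, RY(theta) = exp(-i * theta * Y / 2).
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def expval_z(theta):
    # <Z> after applying RY(theta) to |0>; analytically this is cos(theta).
    state = ry(theta) @ np.array([1.0, 0.0])
    return float(state @ np.diag([1.0, -1.0]) @ state)

theta = np.tanh(0.5) * np.pi / 2.0  # same pi/2 scaling as in the tutorial
value = expval_z(theta)             # equals cos(theta)
```

Sampling would just draw bitstrings from the same state, so the angle interval plays the same role either way.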