In the PennyLane quantum transfer learning demo, the circuit is defined as follows:
```python
@qml.qnode(dev, interface="torch")
def quantum_net(q_input_features, q_weights_flat):
    """
    The variational quantum circuit.
    """
    # Reshape weights
    q_weights = q_weights_flat.reshape(q_depth, n_qubits)

    # Start from state |+>, unbiased w.r.t. |0> and |1>
    H_layer(n_qubits)

    # Embed features in the quantum node
    RY_layer(q_input_features)

    # Sequence of trainable variational layers
    for k in range(q_depth):
        entangling_layer(n_qubits)
        RY_layer(q_weights[k])

    # Expectation values in the Z basis
    exp_vals = [qml.expval(qml.PauliZ(position)) for position in range(n_qubits)]
    return tuple(exp_vals)
```
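The alternative I have in mind would keep the same circuit but end with a `qml.probs` measurement instead. A minimal sketch, assuming the same `dev`, `q_depth`, `n_qubits`, `H_layer`, `RY_layer`, and `entangling_layer` as in the demo (the name `quantum_net_probs` is my own):

```python
@qml.qnode(dev, interface="torch")
def quantum_net_probs(q_input_features, q_weights_flat):
    """Same variational circuit, but returning basis-state probabilities."""
    q_weights = q_weights_flat.reshape(q_depth, n_qubits)
    H_layer(n_qubits)
    RY_layer(q_input_features)
    for k in range(q_depth):
        entangling_layer(n_qubits)
        RY_layer(q_weights[k])
    # Joint probability distribution over all 2**n_qubits basis states
    return qml.probs(wires=range(n_qubits))
```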
In the context of image classification using a quantum neural network (QNN), doesn't it make more sense to use `qml.probs` rather than `qml.expval` for obtaining the outputs of the QNN? I think so for two reasons:

1. Probability interpretation: The probabilities obtained from `qml.probs` represent the likelihood of the quantum system being in each computational basis state, which aligns well with the probabilistic nature of classification tasks, where each class is assigned a probability or confidence score. By using `qml.probs`, I can interpret the QNN's output as a probability distribution over the different classes.
2. Softmax activation: The probabilities obtained from `qml.probs` can be directly used as inputs to a softmax activation function, which is commonly applied in the final layer of a neural network for multi-class classification. The softmax function normalizes the scores and ensures they sum to 1, providing a meaningful representation of class probabilities (a sketch of this idea follows the list).
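To illustrate point 2, here is a hypothetical sketch of how I would consume the two kinds of output in PyTorch, assuming a 2-class task and the `quantum_net_probs` variant sketched above; marginalizing over the first qubit is just one illustrative readout choice:

```python
import torch

# Output of qml.probs: a distribution over 2**n_qubits basis states,
# already non-negative and summing to 1.
probs = quantum_net_probs(q_input_features, q_weights_flat)

# Illustrative 2-class readout: marginal probabilities of the first qubit.
# (PennyLane orders basis states with the first wire as the most significant bit.)
class_probs = probs.reshape(2, -1).sum(dim=1)  # [P(q0 = 0), P(q0 = 1)]

target = torch.tensor([1])  # hypothetical ground-truth label
loss = torch.nn.functional.nll_loss(torch.log(class_probs).unsqueeze(0), target)

# Output of qml.expval(qml.PauliZ): values in [-1, 1], which are not a
# distribution and would still need e.g. a softmax to become one.
# (Assumes the QNode returns a tuple of scalar tensors, as in recent
# PennyLane releases.)
exp_vals = torch.stack(quantum_net(q_input_features, q_weights_flat))
class_scores = torch.softmax(exp_vals, dim=0)
```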
Kindly let me know where I am wrong here.