In the variational classifier demo, why was the 2-feature data from Iris padded to 4 features before calculating the angles?

Hi @Sarvapriya_Tripathi , welcome to the Forum!

Padding and normalizing can help to create a better separation between the data points, as you can see in the following images. Does this answer your question?

Thanks for the quick answer. My confusion came from the following segment where two extra features are created:

```
from pennylane import numpy as np

data = np.loadtxt("variational_classifier/data/iris_classes1and2_scaled.txt")
X = data[:, 0:2]
print("First X sample (original) :", X[0])
# pad the vectors to size 2^2 with constant values
padding = 0.3 * np.ones((len(X), 1))
X_pad = np.c_[np.c_[X, padding], np.zeros((len(X), 1))]
print("First X sample (padded) :", X_pad[0])
# normalize each input
normalization = np.sqrt(np.sum(X_pad ** 2, -1))
X_norm = (X_pad.T / normalization).T
print("First X sample (normalized):", X_norm[0])
# angles for state preparation are new features
features = np.array([get_angles(x) for x in X_norm], requires_grad=False)
print("First features sample :", features[0])
```
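To make the padding step concrete on its own, here is a minimal sketch (my own, not from the demo) using plain NumPy and two made-up 2D samples standing in for the scaled Iris features:

```
import numpy as np

# Hypothetical 2D samples standing in for the scaled Iris features
X = np.array([[0.4, 0.75], [-0.2, 0.5]])

# Pad to length 4 = 2**2 so each sample fits the amplitudes of a 2-qubit state
padding = 0.3 * np.ones((len(X), 1))
X_pad = np.c_[np.c_[X, padding], np.zeros((len(X), 1))]

# Amplitude vectors must have unit norm, so normalize each row
normalization = np.sqrt(np.sum(X_pad ** 2, -1))
X_norm = (X_pad.T / normalization).T

print(X_pad.shape)                          # (2, 4)
print(np.linalg.norm(X_norm, axis=1))       # each row now has norm 1
```

The constant 0.3 keeps the padded entry nonzero, so normalization never divides by zero and the extra dimensions still carry information.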

These two extra features are used to calculate 5 angles as per this:

```
def get_angles(x):
    beta0 = 2 * np.arcsin(np.sqrt(x[1] ** 2) / np.sqrt(x[0] ** 2 + x[1] ** 2 + 1e-12))
    beta1 = 2 * np.arcsin(np.sqrt(x[3] ** 2) / np.sqrt(x[2] ** 2 + x[3] ** 2 + 1e-12))
    beta2 = 2 * np.arcsin(
        np.sqrt(x[2] ** 2 + x[3] ** 2)
        / np.sqrt(x[0] ** 2 + x[1] ** 2 + x[2] ** 2 + x[3] ** 2)
    )
    return np.array([beta2, -beta1 / 2, beta1 / 2, -beta0 / 2, beta0 / 2])
```
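As a quick sanity check (my own, not from the demo): for the uniform unit vector both halves of the vector carry equal weight, so every beta works out to pi/2, and the returned angles are easy to verify with plain NumPy:

```
import numpy as np

def get_angles(x):
    beta0 = 2 * np.arcsin(np.sqrt(x[1] ** 2) / np.sqrt(x[0] ** 2 + x[1] ** 2 + 1e-12))
    beta1 = 2 * np.arcsin(np.sqrt(x[3] ** 2) / np.sqrt(x[2] ** 2 + x[3] ** 2 + 1e-12))
    beta2 = 2 * np.arcsin(
        np.sqrt(x[2] ** 2 + x[3] ** 2)
        / np.sqrt(x[0] ** 2 + x[1] ** 2 + x[2] ** 2 + x[3] ** 2)
    )
    return np.array([beta2, -beta1 / 2, beta1 / 2, -beta0 / 2, beta0 / 2])

# Uniform unit vector: every beta comes out as pi/2 (up to the 1e-12 regularizer)
x = np.array([0.5, 0.5, 0.5, 0.5])
angles = get_angles(x)
print(angles)  # approximately [pi/2, -pi/4, pi/4, -pi/4, pi/4]
```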

And now these 5 angles are used for state preparation as per this:

```
import pennylane as qml

def statepreparation(a):
    qml.RY(a[0], wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(a[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RY(a[2], wires=1)
    qml.PauliX(wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(a[3], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RY(a[4], wires=1)
    qml.PauliX(wires=0)
```

I am sure I am missing some concept or theory behind this, hence my question: how does creating the two extra features relate to the five angles used in `statepreparation`?

Hi @Sarvapriya_Tripathi ,

This is a great question indeed.

This follows a specific encoding choice mentioned right before the `get_angles` function.

The circuit is coded according to the scheme in Möttönen et al. (2004), or, as presented for positive vectors only, in Schuld and Petruccione (2018). We also had to decompose the controlled Y-axis rotations into more basic gates, following Nielsen and Chuang (2010).

So this is not something that you would always do, but it can help (and in this case it indeed helps a lot) to improve the separation between the classes. Taking a look at the papers can give you deeper mathematical insight into why this is the case.
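If it helps to see the connection concretely, here is a plain-NumPy simulation of the two-qubit circuit (my own sketch, not part of the demo). For a normalized feature vector with non-negative entries, the five angles from `get_angles` make the circuit prepare exactly the state whose amplitudes are the four padded features:

```
import numpy as np

def get_angles(x):
    beta0 = 2 * np.arcsin(np.sqrt(x[1] ** 2) / np.sqrt(x[0] ** 2 + x[1] ** 2 + 1e-12))
    beta1 = 2 * np.arcsin(np.sqrt(x[3] ** 2) / np.sqrt(x[2] ** 2 + x[3] ** 2 + 1e-12))
    beta2 = 2 * np.arcsin(np.sqrt(x[2] ** 2 + x[3] ** 2) / np.sqrt(np.sum(x ** 2)))
    return np.array([beta2, -beta1 / 2, beta1 / 2, -beta0 / 2, beta0 / 2])

def ry(theta):
    # Single-qubit Y rotation
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)
PX = np.array([[0.0, 1.0], [1.0, 0.0]])                     # PauliX
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)  # control wire 0, target wire 1

def statepreparation(a):
    # Same gate sequence as the demo, applied to |00>; wire 0 is the leftmost factor
    ops = [
        np.kron(ry(a[0]), I2),   # RY(a[0]) on wire 0
        CNOT,
        np.kron(I2, ry(a[1])),   # RY(a[1]) on wire 1
        CNOT,
        np.kron(I2, ry(a[2])),
        np.kron(PX, I2),         # PauliX on wire 0
        CNOT,
        np.kron(I2, ry(a[3])),
        CNOT,
        np.kron(I2, ry(a[4])),
        np.kron(PX, I2),
    ]
    state = np.array([1.0, 0.0, 0.0, 0.0])
    for op in ops:
        state = op @ state
    return state

# A padded-and-normalized feature vector with non-negative entries
x = np.array([0.4, 0.75, 0.3, 0.1])
x = x / np.linalg.norm(x)

amps = statepreparation(get_angles(x))
print(np.allclose(amps, x))  # True: the amplitudes are exactly the four features
```

Note that `get_angles` takes absolute values (`np.sqrt(x ** 2)`), so this reconstruction only works for vectors with non-negative entries, which is why Schuld and Petruccione present the scheme for positive vectors.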

I hope this helps!

Thanks for the reply Catalina! I will go through those papers.