Binary Classification Keras Fraud Detection

I am trying to understand this code: Quantum Binary Classification

I tried using this model on a different dataset and got 0.69 as validation and training loss. How can I reduce this loss further?

The code used is the same as above; the dataset values look like this: [screenshot omitted]

Thank you!

What was your dataset? I ran the same code on the Breast Cancer Wisconsin dataset and got accuracy in the 60% range. I believe that's because the data is highly skewed: some features have values below 1 while others exceed 1000. Pre-processing such as principal component analysis or other scaling methods may help.

Also, trying different optimizers with different learning rates may help.
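To illustrate the preprocessing suggestion above, here is a minimal sketch using scikit-learn (the synthetic data and column choices are illustrative, not the actual dataset): robust scaling tames the wildly different feature magnitudes, and PCA can then reduce dimensionality.

```python
import numpy as np
from sklearn.preprocessing import RobustScaler
from sklearn.decomposition import PCA

# Synthetic data mimicking the skew described above:
# one feature below 1, another in the hundreds-to-thousands range.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(0, 1, 200),        # small-valued feature
    rng.uniform(100, 1000, 200),   # large-valued feature
    rng.normal(50, 10, 200),       # mid-range feature
])

# Scale by median/IQR (robust to outliers), then project with PCA.
X_scaled = RobustScaler().fit_transform(X)
X_reduced = PCA(n_components=2).fit_transform(X_scaled)

print(X_scaled.shape, X_reduced.shape)
```

After `RobustScaler`, every column has median 0 and comparable spread, so no single feature dominates the encoding.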


Hi, thank you for your response. I used Borderline-SMOTE to oversample the data and scaled it with RobustScaler. The dataset shown in the image above is the output of that oversampling and scaling.

Yet the accuracy I get is in the 40-50% range, and the loss is 0.69 for both training and validation.

In my case, the fraudulent credit card dataset and MNIST worked well, while Breast Cancer Wisconsin and CIFAR-10 didn't.

Try enlarging the output size of the classical network and using more qumodes. Here is a link to code for networks that use more qumodes:


Thank you, I'll check this out. What do the wires in the init_layer function mean? What is their purpose?

init_layer is the data-encoding circuit: it converts classical features (real numbers) into quantum states (complex amplitudes).

Wires correspond to qumodes (photonic channels), which play the role of qubits in the continuous-variable model of quantum neural networks.

I checked the link to the code for networks using more qumodes. Will I need to wait for all 100 epochs to finish to get the desired training loss?

The quantum binary classification takes an hour or so per epoch. After the first 6 epochs, the loss is 0.6911.

Makes sense, thank you.

For the output layer, why did you choose 14 neurons instead of a single neuron with a sigmoid activation? Isn't this binary classification? And could the loss be changed to binary_crossentropy? It is currently MSE.

The outputs of the classical layers are used as parameters of the quantum encoding circuit.
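For reference, the two losses discussed above behave quite differently for a single sigmoid output; a small NumPy sketch (the helper functions are illustrative, not from the code above):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error between labels and predictions.
    return np.mean((y_true - y_pred) ** 2)

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip to avoid log(0), as Keras does internally.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.5, 0.5, 0.5, 0.5])  # an uninformative sigmoid output

print(mse(y_true, y_pred))                  # 0.25
print(binary_crossentropy(y_true, y_pred))  # ln 2, about 0.693
```

Note that for a model stuck at 0.5 predictions, binary cross-entropy sits at ln 2 (about 0.693) while MSE sits at 0.25, so which loss is reported matters when interpreting a plateau around 0.69.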