I have been working on the fraud detection code. I made sure that I am using the same versions as mentioned in requirements.txt, but I am still getting errors (specifically when restoring the session in the testing part of the code). @Tom_Bromley gave a great suggestion: upgrade the code using the PennyLane plugins, with the function fitting demo as a resource. But the fraud detection code is more complex than function fitting, since it uses hybrid classical and quantum layers with many parameters. So my question is: can I upgrade the fraud detection code without using TensorFlow? Any help or suggestions are welcome, thank you!!
In principle you should be able to port the fraud detection code (which uses the old SF + TF) to PennyLane, either via the autograd/NumPy interface with the Fock device from the PL-SF plugin, or via the TensorFlow interface with the TF device from the PL-SF plugin. I’d recommend the TF route, since that device supports backpropagation, which should make training much faster.
Thank you for the reply, Tom! Can you help, or point me to some resources, on how this interface works when we are dealing with classical layers? Maybe this is something obvious, but I am new to the field and trying to figure it out. Thanks!
Let me try to jump in here, because what you intend to do is quite challenging for a newcomer.
PennyLane essentially provides the results or gradients of quantum computations, and you can build the classical model around these “quantum black boxes” however you like. If you decide to code your ML pipeline in TensorFlow, you just need to make sure you use interface="tf" when creating your QNode.
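As an illustration, a minimal QNode on a Strawberry Fields simulator might look like the sketch below. This is only a sketch: it assumes the pennylane-sf plugin is installed, and the gates, wire count, and cutoff_dim are illustrative, not the actual fraud detection circuit.

```python
import pennylane as qml

# TF simulator device from the PennyLane-SF plugin (supports backprop)
dev = qml.device("strawberryfields.tf", wires=2, cutoff_dim=10)

@qml.qnode(dev, interface="tf")  # outputs and gradients become TensorFlow tensors
def circuit(x, theta):
    qml.Displacement(x[0], 0.0, wires=0)   # encode classical inputs
    qml.Displacement(x[1], 0.0, wires=1)
    qml.Beamsplitter(theta[0], theta[1], wires=[0, 1])  # trainable layer
    return qml.expval(qml.X(0))  # x-quadrature expectation value
```

With interface="tf", you can call this QNode on tf.Variable inputs and differentiate it inside a tf.GradientTape like any other TensorFlow operation.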
However, often you want the quantum computation to act like a layer of a classical ML model, in which case you can use the QNN module to turn quantum nodes into (for example) a Keras Layer.
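To make that concrete, here is a rough sketch of the kind of hybrid model the QNN module enables. Assumptions: pennylane, pennylane-sf, and tensorflow are installed, and the circuit and layer sizes are illustrative, not the fraud detection architecture.

```python
import pennylane as qml
import tensorflow as tf

dev = qml.device("strawberryfields.fock", wires=2, cutoff_dim=10)

@qml.qnode(dev, interface="tf")
def qnode(inputs, theta):
    qml.Displacement(inputs[0], 0.0, wires=0)
    qml.Displacement(inputs[1], 0.0, wires=1)
    qml.Beamsplitter(theta[0], theta[1], wires=[0, 1])
    return [qml.expval(qml.X(0)), qml.expval(qml.X(1))]

# weight_shapes tells KerasLayer which QNode arguments are trainable weights
weight_shapes = {"theta": (2,)}
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=2)

# classical -> quantum -> classical stack
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(2, activation="relu"),     # classical pre-processing
    qlayer,                                          # quantum layer
    tf.keras.layers.Dense(1, activation="sigmoid"),  # classical read-out
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Once wrapped like this, the quantum layer trains with model.fit exactly like the classical layers around it.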
It’s a bit hard to say more without looking at code, but if you get stuck you could try to reduce your question to a minimum working example and post it here! Also, I personally find it easier to start from scratch on the fraud classification task than to translate the fraud_detection code line by line - you may end up with an even better workflow!
A side note: in the original fraud detection code we used clip_by_value, which simply resets parameters that become too large to a maximum value. A more machine-learning way to do this would be to add a penalty for large parameters to the cost function, so that the model does not even want the parameters to be large. (As you probably know, the reason we do not want large parameters is that they are “unphysical” and require a large cutoff_dim to simulate things to sufficient accuracy.)
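To illustrate the difference in plain Python (the numbers, clipping range, and penalty weight lam are all made up for the example):

```python
def regularized_cost(task_cost, params, lam=0.1):
    """Add an L2 penalty so the optimizer prefers small parameters,
    instead of hard-clipping them after each update."""
    penalty = lam * sum(p ** 2 for p in params)
    return task_cost + penalty

params = [0.3, 2.5, -4.0]

# Hard clipping (what clip_by_value does): out-of-range values are forced back
clipped = [max(-1.0, min(1.0, p)) for p in params]

# Soft penalty: large parameters simply make the cost worse,
# so gradient descent itself pushes them towards zero
cost = regularized_cost(1.0, params)  # cost is approximately 3.234
```

The clipped version keeps training inside the physical regime by force, while the penalty version changes the optimization landscape so the model is steered away from large (high-cutoff_dim) parameters on its own.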