Can someone please suggest a way to do regression using this great library? Thanks!
Response by @nathan:
Thanks for your interest in the library.
I've attached a brief toy example below showing how to do linear regression in PennyLane. Hopefully it can be modified to suit your needs.
import pennylane as qml
from pennylane import numpy as np

# generate noisy linear data
x = np.linspace(-1, 1, 10)
np.random.seed(0)
m, b = 0.5, 1.2
y_data = m * x + b + 0.1 * np.random.randn(10)

def y_pred(weights):
    # linear model: slope * x + intercept
    return weights[0] * x + weights[1]

def cost(weights):
    # mean squared error between data and model prediction
    return np.mean((y_data - y_pred(weights)) ** 2)

opt = qml.GradientDescentOptimizer(0.5)
weights = np.array([0.0, 0.0], requires_grad=True)

for step in range(20):
    weights = opt.step(cost, weights)
    print(cost(weights))

import matplotlib.pyplot as plt
plt.scatter(x, y_data)
plt.plot(x, y_pred(weights), 'r')
plt.show()
Thanks a lot!
I have another question: can you please tell me the difference between the Quantum Machine Learning Toolbox (QMLT) and PennyLane?
I think they are very similar.
Hi @kareem_essafty! Welcome.
PennyLane and the QMLT fulfill two different use cases.
Strawberry Fields, our quantum-optics based quantum computing simulator, provides a backend written in TensorFlow. What this means is, you can leverage the existing auto-differentiation capabilities of TensorFlow to perform optimization and machine learning using quantum simulations.
Note that the auto-differentiation is entirely classical, as the quantum simulation is performed classically using NumPy/TensorFlow.
The QMLT provides a 'user interface' to the TensorFlow backend of Strawberry Fields that automates/abstracts away a lot of the TensorFlow commands. The relationship between the QMLT and Strawberry Fields is similar to the relationship between Keras and TensorFlow: a high-level library for optimization and machine learning that utilizes a lower-level backend.
This is useful for exploring ML and quantum algorithms, but is potentially slow and inefficient.
PennyLane, on the other hand, is a framework for hybrid classical-quantum computation on near-term quantum hardware. It allows you to define your model, using a mixture of classical processing and quantum operations, and then performs backpropagation through the model, to automatically determine the gradient for optimization.
The key difference is this: during backpropagation, when PennyLane arrives at a quantum node, it uses the quantum device directly to determine the gradient both efficiently and analytically. This relies on new theoretical results in quantum computation (the so-called parameter-shift rules). Plus, you can mix and match quantum devices from different hardware vendors in the same computation.
As a result, PennyLane allows you to use near-term quantum hardware for QML. There are some trade-offs, however: whereas the QMLT allows you to train based on the amplitudes of the quantum state at any point in the simulation, PennyLane restricts you to the expectation values of quantum circuits, i.e. outputs that are physically realizable.
Hope that clears it up!
thanks a lot for this great explanation.
I'm currently using machine learning to do some regression on raw physical data, and I'd like to use PennyLane to perform quantum machine learning. I want to ask for your opinion (and please correct me): is the most important thing to determine the right number of layers and the appropriate gates for updating the weights, or is there something else I should think about?
Besides that, I'd like to know how to cite PennyLane correctly. Also, my colleague and I would like to publish a notebook as a tutorial; is that possible?
Yes, that sounds like a good starting point. In general, there is usually a bit of a trade-off: the more layers you use, the higher the chance of a favorable optimization landscape that allows you to find a good local minimum. However, if you are using a simulator device, this can result in a significant increase in computational resources.
Of course, this can be mitigated by using a hardware device instead!
Another thing to make sure of is that your layers are composed of the appropriate types of gates. For instance, for a photonic/CV QNode, you will require a non-linear gate (such as the Kerr gate or the cubic phase gate).
My colleague and I want to publish a notebook as a tutorial; is that possible?
Of course! We’d love to see the results of your work using PennyLane. Where/how do you intend to publish the notebook?
Well, this is actually the result of my job: I used the quantum neural network notebook and modified the layers and the number of wires.
I also used the RMSProp optimizer, which, as I expected, performed very well.
I believe that after the paper is published, my colleague will allow me to publish the notebook, because the data is his.
By the way, this performed better than a multi-layer perceptron. But I have a serious question:
in Keras the RMSProp decay parameter defaults to zero, but in PennyLane it's set to 0.9, which seems way too high. Can you explain this to me?
The decay value is inspired by the one used in TensorFlow's RMSProp. But as you surely saw, it is a user-defined value, and it is easy to set your own, for example using
opt = qml.RMSPropOptimizer(decay=0.2)
Aha, thank you for your response. I know that CV is better suited than the gate-based (qubit) model for continuous-variable "regression" problems, but is there any theoretical explanation for this?
Could information encoding, as described in your book, be the explanation?
I hope I understand your question correctly:
In terms of asymptotic complexity, the computational models are known to be equivalent. But of course, when gate counts matter, having one qumode to encode a continuous value can be a lot more compact than the many qubits required for a certain precision, or the many gates required to do "amplitude encoding".
In general, the power of information encoding is very context dependent though, and there is a lot still to uncover!
Actually, I spent more than two days trying to figure out the right parameters for the qubit model, and it turned out not to be so good. I was disappointed, but I learnt a little about photonic processing from
I also used your notebook on quantum neural networks, and it turned out great with slight modifications and enhancements for non-linear data.
The cutoff parameter helped me a lot with regularization, keeping my neural network from overfitting.
Or does it do another job? I just need to confirm my findings. @Maria_Schuld @josh