Variational Quantum Regressor in Pennylane

I am fairly new to PennyLane and quantum ML. I recently used Qiskit's Variational Quantum Regressor with angle rotations as the feature map and SCA entanglement in the ansatz. I am trying to build something similar in PennyLane that would work as a neural-network regressor.
Can someone please help me around with this?
Thanks in advance


Welcome, thanks for your question! :slightly_smiling_face:

As a small intro: PennyLane has a growing number of templates, some of which could be helpful here.

I think we might not have the exact recipes here, but fortunately there are tools we can use to create custom implementations. :slight_smile:

For the feature map: embeddings such as the AngleEmbedding template could be useful. A neat example of using rotations to encode features can be found in the Variational Classifier demonstration.

For the ansatz, as per the Qiskit documentation, I looked into Sim et al., specifically at Circuit 14 (there seems to be a small adjustment Qiskit makes, which might have to be applied in this example too).

One layer of the ansatz could look something like this:

import pennylane as qml
from pennylane import numpy as np

wires = [0, 1, 2, 3]
dev = qml.device('default.qubit', wires=4)

@qml.qnode(dev)
def ansatz(p1, p2):
    # single-qubit RY rotations, one parameter per wire
    for p, w in zip(p1, wires):
        qml.RY(p, wires=w)
    # controlled-RY entanglers in a custom ring pattern
    qml.broadcast(unitary=qml.CRY, pattern=[[3, 0], [2, 3], [1, 2], [0, 1]],
                  wires=wires, parameters=p2)
    return qml.state()

p1 = np.linspace(0., np.pi, len(wires))
p2 = np.linspace(0., np.pi, len(wires))
print(qml.draw(ansatz)(p1, p2))
 0: ──RY(0)─────╭RY(0)────────────────────────╭C─────────╭─ State
 1: ──RY(1.05)──│──────────────────╭C─────────╰RY(3.14)──├─ State
 2: ──RY(2.09)──│───────╭C─────────╰RY(2.09)─────────────├─ State
 3: ──RY(3.14)──╰C──────╰RY(1.05)────────────────────────╰─ State

Note that we've used qml.broadcast to create the structure of the controlled-RY rotations. The pattern can be specified in a custom way to create further layers.

Hope this helps a bit, let us know how it goes! :slightly_smiling_face:

Thank you so much @antalszava. This clears up the hyperparameters that can be used for my problem. I also wanted to know: do we have any pre-defined class for a neural network regressor, as provided in Qiskit, or anything similar?

Hi @AKSHITA1, we don’t have a Neural Network Regressor class. You would have to build the functionality piece by piece as needed.

Please let us know how it goes! And feel free to ask any other questions you may have.

Sure, thank you so much @CatalinaAlbornoz

No problem @AKSHITA1!

So, I started with a dataset that has 3 features and 1413 data points, and for the feature map I am using AngleEmbedding.

I don't know how to encode the x_train data of shape (1413, 3). Will embedding the data like this help:

 for i in range(len(x_train)):
     qml.templates.AngleEmbedding(x_train[i], wires=range(3), rotation='Y')

My ansatz with 4 layers looks like this:

for p, w in zip(p1, wires):
    qml.RY(p, wires=w)
qml.broadcast(unitary=qml.CZ, pattern=[[0,2], [0,1],[1,2]], wires=[0,1,2])

for p, w in zip(p1, wires):
    qml.RY(p, wires=w)
qml.broadcast(unitary=qml.CZ, pattern=[[1,2],[0,2], [0,1]], wires=[0,1,2])

for p, w in zip(p1, wires):
    qml.RY(p, wires=w)
qml.broadcast(unitary=qml.CZ, pattern=[[0,1],[1,2],[0,2]], wires=[0,1,2])

for p, w in zip(p1, wires):
    qml.RY(p, wires=w)
qml.broadcast(unitary=qml.CZ, pattern=[[0,2], [0,1],[1,2]], wires=[0,1,2])

Now I am having difficulty with how to combine the feature map and ansatz for the further steps of defining the loss function and optimization.

Please pardon me if this question sounds really silly, but I am really new to quantum.

Thank you :slightly_smiling_face:


Thank you for asking this question!

The embedding and ansatz look right.

The way you concatenate them is generally to apply them one after the other inside your QNode.

If you look at the variational classifier demo, you will notice that the embedding and the different layers are applied within the QNode.

Then you can create separate functions (as in the demo) to define your loss function.

Why don’t you try to use the structure and some of the functions of the variational classifier demo and let me know how it goes?

Also please let me know if this answered your question or if I understood it wrong!

Thank you @CatalinaAlbornoz for the reply. Yes, it did solve a lot of my queries, but I'm now stuck at the last step of measuring the qubits. Since we don't have any ancilla here, I tried evaluating the 3 qubits using return [qml.expval(qml.PauliZ(i)) for i in range(num_qubits)], but got an error: TypeError: Grad only applies to real scalar-output functions. Try jacobian, elementwise_grad or holomorphic_grad. From looking into this, I came to understand that I was receiving a vector result due to the loop used.

I wanted to know how that can be resolved so that I get a result covering the three qubits without an error.

P.S. I am trying to replicate everything exactly as done in the Qiskit VQR.


I’m glad my previous answer was helpful.

Regarding your last question, you could use the tensor product of the different observables you want to measure. If you have 3 qubits you can do:

return qml.expval(qml.PauliZ(0)@qml.PauliZ(1)@qml.PauliZ(2))

You can read more about this kind of measurement here in the docs.

I hope this helps you!


Hey @CatalinaAlbornoz and @antalszava, it's been a long time. I was a little busy with this. Thank you so much for all the help and support. I was able to get the regressor implemented in PennyLane. Thank you :slightly_smiling_face::slightly_smiling_face::slightly_smiling_face:

Hi @AKSHITA1, I’m very glad to hear that you got it working!

It would be amazing if you could contribute it to PennyLane either as a community demo or a new feature.

Please let us know if you need any help in the process!
