Data encoding on forest.qvm

Hi @josh,

Upgrading the plugin worked, and I was able to encode data on the pyqvm device using the AmplitudeEmbedding template.

I have a few queries about the following code snippet, which compares the speed of the two devices:

Code:

import pennylane as qml
from pennylane import numpy as np
from pennylane.templates.embeddings import AmplitudeEmbedding
import time

def RY_layer(w):
    """Apply one parametrized RY rotation per qubit."""
    for idx, element in enumerate(w):
        qml.RY(element, wires=idx)

def entangling_layer(nqubits):
    """Apply CNOTs between even-odd, then odd-even, neighbouring qubits."""
    for i in range(0, nqubits - 1, 2):
        qml.CNOT(wires=[i, i + 1])
    for i in range(1, nqubits - 1, 2):
        qml.CNOT(wires=[i, i + 1])

def circuit(features, weights):
    # Encode the 2**n_qubits features into the state amplitudes
    AmplitudeEmbedding(features, wires=list(range(n_qubits)), pad=None, normalize=True)
    # Alternate entangling and rotation layers
    for k in range(depth):
        entangling_layer(n_qubits)
        RY_layer(weights[k])
    # Measure Pauli-Z on every qubit
    exp_vals = [qml.expval(qml.PauliZ(position)) for position in range(n_qubits)]
    return tuple(exp_vals)

n_qubits = 11
depth = 50
features = np.random.randint(10, size=(2048,))  # 2048 = 2**11 amplitudes
weights = 0.001 * np.random.randint(100, size=(depth, n_qubits))

devices = [qml.device("default.qubit", wires=n_qubits),
           qml.device("forest.qvm", device="{}q-pyqvm".format(n_qubits))]

for dev in devices:
    print("\nDevice: {}".format(dev.name))
    qnode = qml.QNode(circuit, dev)
    start_point = time.perf_counter()
    qnode(features, weights)
    end_point = time.perf_counter()
    print("\n Time in seconds: ", end_point - start_point)

Output:

Device: Default qubit PennyLane plugin
Time in seconds: 36.0572343878448

Device: Forest QVM Device
Time in seconds: 1006.6512296050787
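
(A side note: the numbers above come from a single call each. As a sanity check I could average over repeated runs, e.g. along the lines of the minimal sketch below, which reuses the objects defined above; n_runs = 5 is just an illustrative choice.)

n_runs = 5
for dev in devices:
    qnode = qml.QNode(circuit, dev)
    start_point = time.perf_counter()
    for _ in range(n_runs):
        qnode(features, weights)
    end_point = time.perf_counter()
    print("Device: {}, mean time: {:.2f} s".format(
        dev.name, (end_point - start_point) / n_runs))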

  • From the result above, the default.qubit device is dramatically faster than pyqvm. Is this expected or unexpected (ref: benchmarking)?

  • I would also like to experiment with optimizing the gate parameters, similar to what you did for the Quantum Transfer Learning experiment (see the sketch after this list). Which device would be best to work on?
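
To make the second question concrete, here is a minimal sketch of what I have in mind, assuming default.qubit, PennyLane's GradientDescentOptimizer, and an arbitrary example cost (the mean of the per-qubit Pauli-Z expectations); the step size and number of steps are illustrative only:

dev = qml.device("default.qubit", wires=n_qubits)
qnode = qml.QNode(circuit, dev)

def cost(weights):
    # Hypothetical cost: mean of the per-qubit Pauli-Z expectation values
    return np.mean(np.array(qnode(features, weights)))

opt = qml.GradientDescentOptimizer(stepsize=0.1)  # illustrative step size
params = weights
for step in range(10):  # illustrative number of steps
    params = opt.step(cost, params)
    print("Step {}: cost = {:.6f}".format(step + 1, cost(params)))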

It would be of great help if you could advise on these points and share any relevant links for further study.

Thank you,

Sincerely,
Avinash.