Replacements for the latest version

Hi all! I would like to solve ODEs with QNNs using PennyLane. Recently I found a paper where the authors solved regression and classification tasks using QNNs and PennyLane version 0.19. Now I would like to make some replacements so that their approach works with the latest version, 0.23. However, I could not find the documentation for the old version and wonder where I can start so that I can compare the differences between the old and new versions?
Thank you in advance

Hi @Aigerim!

Our Release Notes are a good place to find the changes that happen on every release. If you scroll down you will notice that for each release we have a “breaking changes” section. This is probably where you will want to look.

Hi Catalina! Thank you for your response. I appreciate it :smiling_face_with_three_hearts:
For instance, I want to check the documentation for the following:
pennylane.ops Squeezing, Displacement, Kerr
pennylane.templates.subroutines import Interferometer
etc., which are used in version 0.19. First I would like to check what they are used for, and then reproduce them in the latest version. As far as I can see, the templates and so on are only documented for the latest version on the site, so I want to know where I can find all the functions of release 0.19 to understand them better. What I actually want to do is solve ODEs with QML, and I found an MSc thesis on that topic: https://github.com/martin-knudsen/masterThesis. However, the author M. Knudsen wrote it in a previous version, so first of all I want to understand his proposed approach and then reproduce it in the latest version.
Thank you!
Sincerely,
Aigerim

Hi @Aigerim!

Regarding the Squeezing, Displacement, and Kerr operators: they haven’t changed. If you import pennylane as qml then you can access them through qml.Squeezing, qml.Displacement, and qml.Kerr.

The same is true for the interferometer. It hasn’t changed and you can now access it through qml.Interferometer.
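For example, something like the following minimal sketch should work (assuming you have the PennyLane-SF plugin installed for the strawberryfields.fock device; the parameter values here are just placeholders):

import pennylane as qml
import numpy as np

dev = qml.device("strawberryfields.fock", wires=2, cutoff_dim=4)

@qml.qnode(dev)
def circuit(r, alpha, kappa, theta, phi, varphi):
    qml.Squeezing(r, 0.0, wires=0)         # was pennylane.ops.Squeezing
    qml.Displacement(alpha, 0.0, wires=1)  # was pennylane.ops.Displacement
    qml.Kerr(kappa, wires=0)               # was pennylane.ops.Kerr
    # was pennylane.templates.subroutines.Interferometer
    qml.Interferometer(theta, phi, varphi, wires=[0, 1])
    return qml.expval(qml.X(0))

# For 2 modes the interferometer takes 1 theta, 1 phi, and 2 varphi angles
print(circuit(0.1, 0.2, 0.05, np.array([0.3]), np.array([0.4]), np.array([0.5, 0.6])))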

Something else you can do is check the ‘Deprecations and Breaking Changes’ section of our release blog posts. In the PennyLane blog you can find all of our release blogs and in the table of contents you can skip to the deprecations section.

If you need any help finding other equivalences to the latest version please let me know!


Dear Catalina, thank you very much for your help <3

Hello Catalina! Thank you for your help! I would like to ask how I can replace the following with the latest version: from pennylane.templates.utils import check_wires, check_number_of_layers, check_shapes?
Thank you!

Hi @Aigerim, what do you need to do with these checks? Templates have a ‘shape’ method for instance. Are you looking for information on a particular template? Maybe sharing the piece of code where you’re using it can be helpful.
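For instance, here is a minimal sketch of what the shape method gives you (CVNeuralNetLayers is just used as an example template here):

import pennylane as qml

# Each template exposes a static shape() method giving the expected
# shape of every weight tensor it takes
shapes = qml.CVNeuralNetLayers.shape(n_layers=1, n_wires=3)
print(shapes)  # 11 tuples, one per parameter group (theta_1, phi_1, varphi_1, ...)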

Hi Catalina! I want to change the following, which was written in 0.19, to the latest version:
from pennylane.ops import Squeezing, Displacement, Kerr
from pennylane.templates.subroutines import Interferometer
from pennylane.templates import broadcast
from pennylane.templates.utils import check_wires, check_number_of_layers, check_shapes
from pennylane import device, qnode, expval, X
from pennylane.init import cvqnn_layers_all
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import scale
import sklearn.decomposition
import torch
from torch.autograd import Variable, grad
import numpy as np
import seaborn
import matplotlib.pyplot as plt

Hyperparameters get defined here:

# number of layers
L = 1

# number of modes
M = 3

# number of beamsplitters in each interferometer
K = (M * (M - 1)) / 2

# wire indices
wires = [i for i in range(M)]

# cutoff
cutoff_dim = 4

# learning rate
lr = 0.03

# training steps
steps = 100

dev = device("strawberryfields.fock", wires=M, cutoff_dim=cutoff_dim)

One single layer consisting of 2 interferometers, 1 squeezing, 1 displacement, and 1 Kerr nonlinearity:

def cv_neural_net_layer(
    theta_1,
    phi_1,
    varphi_1,
    r,
    phi_r,
    theta_2,
    phi_2,
    varphi_2,
    a,
    phi_a,
    k,
    wires):

    Interferometer(theta=theta_1, phi=phi_1, varphi=varphi_1, wires=wires)

    broadcast(unitary=Squeezing, pattern="single", wires=wires, parameters=list(zip(r, phi_r)))

    Interferometer(theta=theta_2, phi=phi_2, varphi=varphi_2, wires=wires)

    broadcast(unitary=Displacement, pattern="single", wires=wires, parameters=list(zip(a, phi_a)))

    broadcast(unitary=Kerr, pattern="single", wires=wires, parameters=k)

Several layers of the CV neural network from PennyLane:

def CVNeuralNetLayersHomeMade(
    theta_1,
    phi_1,
    varphi_1,
    r,
    phi_r,
    theta_2,
    phi_2,
    varphi_2,
    a,
    phi_a,
    k,
    wires):

    #############
    # Input checks
    wires = check_wires(wires)

    n_wires = len(wires)
    n_if = n_wires * (n_wires - 1) // 2
    weights_list = [theta_1, phi_1, varphi_1, r, phi_r, theta_2, phi_2, varphi_2, a, phi_a, k]
    repeat = check_number_of_layers(weights_list)

    expected_shapes = [
        (repeat, n_if),
        (repeat, n_if),
        (repeat, n_wires),
        (repeat, n_wires),
        (repeat, n_wires),
        (repeat, n_if),
        (repeat, n_if),
        (repeat, n_wires),
        (repeat, n_wires),
        (repeat, n_wires),
        (repeat, n_wires),
    ]
    check_shapes(weights_list, expected_shapes, msg="wrong shape of weight input(s) detected")

    ###############

    for l in range(repeat):
        cv_neural_net_layer(
            theta_1=theta_1[l],
            phi_1=phi_1[l],
            varphi_1=varphi_1[l],
            r=r[l],
            phi_r=phi_r[l],
            theta_2=theta_2[l],
            phi_2=phi_2[l],
            varphi_2=varphi_2[l],
            a=a[l],
            phi_a=phi_a[l],
            k=k[l],
            wires=wires)

Hi @Aigerim, you can just remove all of these checks. They shouldn’t affect the basic functionality of the program. You simply have to be careful to always have the right shape for your weights.

Please let me know if you get to make it work after removing these checks.
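In case it helps, here is a rough sketch of how the 0.19 imports and the removed checks might translate to a recent release (the exact names are assumptions based on the current top-level API, so please double-check them against the release notes):

import numpy as np
import pennylane as qml
from pennylane import Squeezing, Displacement, Kerr  # replaces the pennylane.ops imports
from pennylane import Interferometer, broadcast      # replaces the templates.subroutines / templates imports
from pennylane import device, qnode, expval, X       # unchanged in 0.23

# The check_* helpers were removed; a hypothetical replacement is to compare
# your weights against the shapes reported by the template itself:
def check_weight_shapes(weights_list, n_layers, n_wires):
    expected = qml.CVNeuralNetLayers.shape(n_layers=n_layers, n_wires=n_wires)
    for w, shape in zip(weights_list, expected):
        assert np.shape(w) == shape, "wrong shape of weight input(s) detected"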


Hello Catalina! Thank you for your quick responses!
Last time I asked you about the shape function, which showed the 11 parameters of the CVQNN, and now I would like to know how I can replace cvqnn_layers_all from 0.19 in the latest version?
Thank you!

Hi @Aigerim,

Maybe the following code will work (I’m not sure though).

shapes = qml.CVNeuralNetLayers.shape(n_layers=2, n_wires=2)
weights = [np.random.random(shape) for shape in shapes]
# the 11 weight arrays have different shapes, so convert them one by one
inits = [torch.tensor(w, dtype=torch.float64) for w in weights]

Also, it seems that NumPy is changing the way you generate random numbers, so in the future this way of creating them will probably not work (although for now it should).
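For reference, here is a sketch of the same initialization using NumPy’s newer Generator API (np.random.default_rng), which should keep working going forward; it reuses the shapes list from the snippet above:

import numpy as np

rng = np.random.default_rng(seed=42)  # new-style random number generator
weights = [rng.random(shape) for shape in shapes]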

Please let me know if this works for you!