Do you have any ideas about how to construct Recurrent Networks?

Thanks!
Wei

Hi @cubicgate,

Since PennyLane connects to both PyTorch and TensorFlow, you could use the recurrent network modules of either of those packages to create recurrent networks, and potentially include a quantum element.

Yes, I understand. You could begin with the RNN modules in either of those two ML libraries, then create a new subclass that has a quantum circuit at its heart. If you were to build the RNN by hand, it would be a lot more work. Those libraries will handle the RNN part, and PennyLane can be used for the quantum part.
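For example, here is a minimal sketch of the idea in PyTorch (just an illustration on my side: the `QuantumRNNCell` name, the cell design, and the circuit choices are placeholder assumptions rather than a recommended architecture). The recurrent state update is a small classical linear map followed by a PennyLane circuit wrapped in `qml.qnn.TorchLayer`:

```
import torch
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    # Encode the classical pre-activation into rotation angles, entangle, measure
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

class QuantumRNNCell(torch.nn.Module):
    """Toy recurrent cell: classical mixing followed by a quantum circuit."""

    def __init__(self, input_size, hidden_size=n_qubits, n_layers=2):
        super().__init__()
        self.linear = torch.nn.Linear(input_size + hidden_size, n_qubits)
        self.qlayer = qml.qnn.TorchLayer(qnode, {"weights": (n_layers, n_qubits, 3)})

    def forward(self, x, h):
        # Mix the current input with the previous hidden state, squash to [-1, 1],
        # then let the circuit produce the next hidden state (PauliZ expectations)
        z = torch.tanh(self.linear(torch.cat([x, h], dim=-1)))
        return self.qlayer(z).to(torch.float32)

# Step the cell through a toy sequence of length 3 with 2 features per step
cell = QuantumRNNCell(input_size=2)
h = torch.zeros(1, n_qubits)
for _ in range(3):
    h = cell(torch.randn(1, 2), h)
print(h)
```

The analogous pattern on the TensorFlow side would be a custom cell layer that calls a `qml.qnn.KerasLayer`, wrapped in `tf.keras.layers.RNN`.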

@nathan this sounds very interesting to me. Do you know of any resources I can refer to in order to build on this idea? Thanks!

Hi @vijpandaturtle,

You can check out some PyTorch tutorials related to RNNs here:

- https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html
- https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html

For TensorFlow, there are some tutorials here:

Oh no, I was looking for resources related to quantum circuits used in RNNs.

There aren't any that I know of.

Oh, I see. Thanks anyway!

Hi,

I'm trying to build a QRNN with TensorFlow to predict a simple time series. Here is my code:

```
from numpy import array
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import LSTM, LSTMCell, Input, RNN, Dense
from tensorflow.keras.models import Model

tf.random.set_seed(0)

def split_sequence(sequence, n_steps):
    X, y = list(), list()
    for i in range(len(sequence)):
        # find the end of this pattern
        end_ix = i + n_steps
        # check if we are beyond the sequence
        if end_ix > len(sequence) - 1:
            break
        # gather input and output parts of the pattern
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
        X.append(seq_x)
        y.append(seq_y)
    return array(X), array(y)

# define input sequence
raw_seq = [10, 20, 30, 40, 50, 60, 70, 80, 90]
# choose a number of time steps
n_steps = 3
# split into samples
X, y = split_sequence(raw_seq, n_steps)
# reshape from [samples, timesteps] into [samples, timesteps, features]
n_features = 1
X = X.reshape((X.shape[0], X.shape[1], n_features))

class customLSTM(LSTM):
    def __init__(self, **kwargs):
        super(customLSTM, self).__init__(**kwargs)

    def build(self, input_shape):
        super(customLSTM, self).build(input_shape)

    def compute_output_shape(self, input_shape):
        return input_shape

# define model
inputs1 = Input(shape=(3, 1))
lstm1 = customLSTM(units=1, activation='relu')(inputs1)
model = Model(inputs=inputs1, outputs=lstm1)
opt = tf.keras.optimizers.Adam(learning_rate=0.1)
model.compile(opt, loss='mse')
model.fit(X, y, epochs=1000)

x_input = array([70.0, 80.0, 90.0])
x_input = x_input.reshape((1, n_steps, n_features))
yhat = model.predict(x_input)
print(yhat)
```

In words, I created a subclass of the Keras LSTM, which currently behaves just like the "original" one. If I now want to insert a quantum circuit into it, as suggested by @nathan, how should I modify it? I'm aware of `KerasLayer` and I have already used it in other experiments, but I don't know how to exploit it in this case.

Thanks in advance!

Hey @valeria!

One way you could insert a quantum circuit is just to add a `KerasLayer` that follows the standard `LSTM` layer, e.g., something like:

```
import pennylane as qml

intermediate_dim = 4
layers = 2

dev = qml.device("default.qubit", wires=intermediate_dim)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(intermediate_dim))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(intermediate_dim))
    return [qml.expval(qml.PauliZ(i)) for i in range(intermediate_dim)]

qlayer = qml.qnn.KerasLayer(qnode, {"weights": (layers, intermediate_dim, 3)}, output_dim=intermediate_dim)

inputs1 = Input(shape=(3, 1))
lstm1 = LSTM(units=intermediate_dim, activation='relu')(inputs1)
lstm2 = qlayer(lstm1)
out = Dense(units=1, activation="relu")(lstm2)
model = Model(inputs=inputs1, outputs=out)
```

I tried this but wasn't having much luck in training, possibly because the outputs of the QNode lie in [-1, 1] and should be rescaled. Note that you can train successfully with

```
inputs1 = Input(shape=(3,1))
lstm1 = LSTM(units=intermediate_dim, activation='relu')(inputs1)
# lstm2 = qlayer(lstm1)
out = Dense(units=1, activation="relu")(lstm1)
```

so it should just be a case of getting the `qlayer` to integrate there.
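Something along these lines (untested, and the choice of rescaling is just a guess on my side) could be a starting point, reusing `qlayer` and `intermediate_dim` from the snippet above and mapping the QNode outputs from [-1, 1] to [0, 1] with a `Lambda` layer:

```
from tensorflow.keras.layers import Lambda

inputs1 = Input(shape=(3, 1))
lstm1 = LSTM(units=intermediate_dim, activation='relu')(inputs1)
lstm2 = qlayer(lstm1)
# Affine rescaling of the PauliZ expectation values from [-1, 1] to [0, 1]
rescaled = Lambda(lambda z: 0.5 * (z + 1.0))(lstm2)
out = Dense(units=1, activation="relu")(rescaled)
model = Model(inputs=inputs1, outputs=out)
```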

In the code block you shared, you're trying to do the more ambitious task of making the LSTM layer itself contain a quantum circuit. I'd say this is more of an open research question and it's not clear to me how the quantum circuit would be incorporated. However, maybe there are some ideas in the literature?


Thanks for your reply, @Tom_Bromley!

I tried to predict the same input data following your suggestion; in particular, I defined:

```
n_modes = 1
n_if = n_modes * (n_modes - 1) // 2
fraction_train = 0.75
depth = 1
sdev = 0.05
cutoff = 5
reg_strength = 1
n_epochs = 400

dev = qml.device("strawberryfields.fock", wires=n_modes, cutoff_dim=cutoff, analytic=False)

@qml.qnode(dev)
def qnode(inputs, theta_1, phi_1, varphi_1, r, phi_r, theta_2, phi_2, varphi_2, a, phi_a, k):
    qml.templates.DisplacementEmbedding(inputs, wires=range(n_modes), method='phase', c=0.1)
    qml.templates.layers.CVNeuralNetLayers(theta_1, phi_1, varphi_1, r, phi_r,
                                           theta_2, phi_2, varphi_2, a, phi_a, k,
                                           wires=range(n_modes))
    return [qml.expval(qml.X(wires=w)) for w in range(n_modes)]

weight_shapes = {"theta_1": [depth, n_if], "phi_1": [depth, n_if], "varphi_1": [depth, n_modes], "r": [depth, n_modes],
                 "phi_r": [depth, n_modes], "theta_2": [depth, n_if], "phi_2": [depth, n_if], "varphi_2": [depth, n_modes],
                 "a": [depth, n_modes], "phi_a": [depth, n_modes], "k": [depth, n_modes]}

unif_init = tf.random_uniform_initializer(minval=0, maxval=np.pi * 2, seed=42)
normal_init = tf.random_normal_initializer(mean=0.0, stddev=sdev, seed=42)
reg = tf.keras.regularizers.l2(reg_strength)

weight_specs = {"theta_1": {"initializer": unif_init, "trainable": True}, "phi_1": {"initializer": unif_init, "trainable": True},
                "varphi_1": {"initializer": unif_init, "trainable": True}, "r": {"initializer": normal_init, "trainable": True, "regularizer": reg},
                "phi_r": {"initializer": unif_init, "trainable": True}, "theta_2": {"initializer": unif_init, "trainable": True},
                "phi_2": {"initializer": unif_init, "trainable": True}, "varphi_2": {"initializer": unif_init, "trainable": True},
                "a": {"initializer": normal_init, "trainable": True, "regularizer": reg}, "phi_a": {"initializer": unif_init, "trainable": True},
                "k": {"initializer": unif_init, "trainable": True, "regularizer": reg}}

qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, weight_specs=weight_specs, output_dim=n_modes)

clayer1 = tf.keras.layers.LSTM(50, activation='relu', input_shape=(n_steps, n_features))
clayer2 = tf.keras.layers.Dense(1, activation='relu')
clayer3 = tf.keras.layers.Dense(1, activation='relu')
model = tf.keras.models.Sequential([clayer1, clayer2, qlayer, clayer3])

opt = tf.keras.optimizers.Adam(learning_rate=0.1)
model.compile(opt, loss='mse')
model.fit(X, y, epochs=n_epochs)
```

However, the results are not good... Removing the `qlayer`, on the other hand, results in a more efficient and accurate model: maybe it is simply not convenient to use a quantum layer in this case.

Regarding my previous question on how to insert a quantum circuit inside the LSTM itself, I have no clue how to do it and unfortunately I didn't find anything helpful online, but I can try to think about it.

Thanks again

Thanks! I just took a look at the code and it runs fine for me, but I agree that the resulting trained model is not very accurate. I would guess a few possible factors that are influencing performance:

- The cutoff dimension: it's possible that the model is learning to push outside of the cutoff dimension. This could be helped by increasing the cutoff, at the cost of greater training time.
- The `clayer1` to `clayer2` transition from 50 dimensions to 1 might be a bit too much of a compression, although I understand the choice of 1 is to match the number of modes in the qlayer. Perhaps an intermediate classical layer of, e.g., 10 neurons might help, again at the cost of increased training time.
- The number of modes in the qlayer could also be increased, but this will result in the biggest overhead in training time. Perhaps two modes might be a compromise (see the sketch after this list for how these changes could fit together).
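Here is a rough, untested sketch of how those three suggestions might fit together. The specific values (cutoff of 10, a 10-neuron intermediate layer, two modes) are illustrative guesses, and I have dropped the regularizers and compressed the `weight_specs` for brevity:

```
import numpy as np
import tensorflow as tf
import pennylane as qml

# Illustrative values only: two modes, a larger cutoff, an extra classical layer
n_modes = 2
n_if = n_modes * (n_modes - 1) // 2
depth = 1
cutoff = 10
n_steps, n_features = 3, 1

dev = qml.device("strawberryfields.fock", wires=n_modes, cutoff_dim=cutoff, analytic=False)

@qml.qnode(dev)
def qnode(inputs, theta_1, phi_1, varphi_1, r, phi_r, theta_2, phi_2, varphi_2, a, phi_a, k):
    qml.templates.DisplacementEmbedding(inputs, wires=range(n_modes), method='phase', c=0.1)
    qml.templates.layers.CVNeuralNetLayers(theta_1, phi_1, varphi_1, r, phi_r,
                                           theta_2, phi_2, varphi_2, a, phi_a, k,
                                           wires=range(n_modes))
    return [qml.expval(qml.X(wires=w)) for w in range(n_modes)]

# Interferometer angles have n_if entries per layer, all other parameters n_modes
param_names = ("theta_1", "phi_1", "varphi_1", "r", "phi_r",
               "theta_2", "phi_2", "varphi_2", "a", "phi_a", "k")
weight_shapes = {name: [depth, n_if if name in ("theta_1", "phi_1", "theta_2", "phi_2") else n_modes]
                 for name in param_names}

unif_init = tf.random_uniform_initializer(minval=0, maxval=2 * np.pi, seed=42)
normal_init = tf.random_normal_initializer(mean=0.0, stddev=0.05, seed=42)
# Keep the "active" parameters (squeezing and displacement magnitudes) small at the start
weight_specs = {name: {"initializer": normal_init if name in ("r", "a") else unif_init}
                for name in param_names}

qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, weight_specs=weight_specs, output_dim=n_modes)

model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(50, activation='relu', input_shape=(n_steps, n_features)),
    tf.keras.layers.Dense(10, activation='relu'),       # softer compression than 50 -> 1
    tf.keras.layers.Dense(n_modes, activation='relu'),  # match the number of modes
    qlayer,
    tf.keras.layers.Dense(1, activation='relu'),
])
model.compile(tf.keras.optimizers.Adam(learning_rate=0.1), loss='mse')
# model.fit(X, y, epochs=...) as before
```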

One option to counter increased training time could be to switch to an all-Gaussian based simulation and use `strawberryfields.gaussian`, which runs faster. This Gaussian limitation is quite a restriction, but it might let us see that at least the model can train with a quantum layer.
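For example (again just an untested sketch on my side: the Kerr gate inside `CVNeuralNetLayers` is non-Gaussian, so a Gaussian-device circuit has to keep only Gaussian gates, and the parameter names below are placeholders):

```
# Gaussian-only quantum layer: displacement embedding plus trainable
# squeezing/displacement, which the gaussian backend can simulate quickly
n_modes = 1
dev_gauss = qml.device("strawberryfields.gaussian", wires=n_modes)

@qml.qnode(dev_gauss)
def gaussian_qnode(inputs, r, phi_r, a, phi_a):
    qml.templates.DisplacementEmbedding(inputs, wires=range(n_modes), method='phase', c=0.1)
    for w in range(n_modes):
        qml.Squeezing(r[w], phi_r[w], wires=w)     # trainable Gaussian squeezing
        qml.Displacement(a[w], phi_a[w], wires=w)  # trainable Gaussian displacement
    return [qml.expval(qml.X(wires=w)) for w in range(n_modes)]

gauss_shapes = {"r": n_modes, "phi_r": n_modes, "a": n_modes, "phi_a": n_modes}
qlayer_gauss = qml.qnn.KerasLayer(gaussian_qnode, gauss_shapes, output_dim=n_modes)
```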

> maybe it is simply not convenient to use a quantum layer in this case

Yes, something to always keep in mind! It might not make sense at this point. Perhaps a future run on hardware with multiple modes might be more performant.


@Tom_Bromley, thanks for your answer! I will try to modify the code following your suggestions and really hope to see an improvement.


Yo, I was able to find this research paper regarding QRNNs written by the Xanadu team and MIT.

I wanted to know if this is purely theoretical work, or if it was implemented in code at any point.

Hi @cosmic!

Unfortunately this work has not been implemented using PL/SF. The underlying algorithm involves an implementation of HHL, which has not yet been added to PL due to a focus on near-term variational quantum circuits.

Oh, I see.

Thanks for the information!
