Hi everyone! I’m new to PennyLane, and I’m trying to solve ODEs by adapting this and this tutorial. As you will see, I have essentially changed only the loss function.
Let’s say that I want to solve: y'(x) = x, with y(0)=y_0. Let QNN(x)=y(x) be the output of my circuit. We then define the following loss function L:
L = \sqrt{(QNN(0)-y_0)^2} + \sqrt{\sum_i \left(\frac{dQNN(x_i)}{dx} - y'(x_i)\right)^2}
We want to minimize L.
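(For reference, the exact solution of this ODE is y(x) = y_0 + \frac{x^2}{2}, so once training works I can sanity-check the circuit output against it.)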
Here’s the code that I’m currently using:
import pennylane as qml
import tensorflow as tf
from pennylane import numpy as np
from pennylane.optimize import AdamOptimizer
dev_fock = qml.device("strawberryfields.fock", wires=1, cutoff_dim=10)
# for a single qumode, we define the following layer of our variational circuit
def layer(v):
    # Matrix multiplication of input layer
    qml.Rotation(v[0], wires=0)
    qml.Squeezing(v[1], 0.0, wires=0)
    qml.Rotation(v[2], wires=0)
    # Bias
    qml.Displacement(v[3], 0.0, wires=0)
    # Element-wise nonlinear transformation
    qml.Kerr(v[4], wires=0)
@qml.qnode(dev_fock, diff_method="parameter-shift")
def quantum_neural_net(var, x):
    # Encode input x into quantum state
    qml.Displacement(x, 0.0, wires=0)
    # "layer" subcircuits
    for v in var:
        layer(v)
    return qml.expval(qml.X(0))
# here we initialize the weights
np.random.seed(0)
num_layers = 4
var_init = 0.05 * np.random.randn(num_layers, 5, requires_grad=True)
var = var_init
# variables
f0 = 0  # boundary condition: y_0 = y(0) = 0
qnn0 = quantum_neural_net(var, 0)  # QNN(0), evaluated once with the initial weights
inf_s = np.sqrt(np.finfo(np.float32).eps)  # step size for the finite-difference derivative
learning_rate = 0.01
training_steps = 5000
batch_size = 100
display_step = 500
# Given ODE: the right-hand side y'(x) = f(x) = x
def f(x):
    return x

def qnn(x):
    return quantum_neural_net(var, x)
# Custom loss function to approximate the derivatives
def custom_loss(x):
    summation = []
    summation.append((qnn0 - f0) ** 2)
    for x in np.linspace(-1, 1, 10):
        dQNN = (qnn(x + inf_s) - qnn(x)) / inf_s
        summation.append((dQNN - f(x)) ** 2)
    return tf.sqrt(tf.reduce_mean(tf.abs(summation)))
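# (the first append is meant to be the boundary term (QNN(0) - y_0)^2 of L,
#  and the loop is meant to accumulate the derivative terms
#  (dQNN(x_i)/dx - f(x_i))^2 via a forward finite difference; note that I end up
#  mixing tf ops with PennyLane's NumPy tensors here, which may be part of the problem)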
opt = AdamOptimizer(0.01, beta1=0.9, beta2=0.999)
X = np.linspace(-1, 1, 50)
Y = np.linspace(-1, 1, 50)
for it in range(50):
    (var, _, _), _cost = opt.step_and_cost(custom_loss, X)  # error here
    print("Iter: {:5d} | Cost: {:0.7f} ".format(it, _cost))
# output of our model
x_pred = np.linspace(-1, 1, 50)
predictions = [quantum_neural_net(var, x_) for x_ in x_pred]
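Once training works, my plan is to compare these predictions against the exact solution, roughly like this (assuming matplotlib is available; I haven’t actually gotten this far because of the error above):

import matplotlib.pyplot as plt

plt.plot(x_pred, predictions, label="QNN(x)")
plt.plot(x_pred, f0 + x_pred**2 / 2, "--", label="exact solution y_0 + x^2/2")
plt.xlabel("x")
plt.legend()
plt.show()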
So, I’m having problems with my custom_loss function when I try to optimize it. Is there a feasible way to fix it? Also, if you know of a different approach and could share it with me, that would be great as well.
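In case it helps to show what I was aiming for, here is a rough, untested sketch of the same loss written with only PennyLane’s NumPy and with the weights passed in explicitly (the names loss_fn and h are just mine for illustration, and I’m not sure this is the right direction):

def loss_fn(weights):
    # boundary term: (QNN(0) - y_0)^2
    boundary = (quantum_neural_net(weights, 0.0) - f0) ** 2
    # derivative terms: forward finite difference at each collocation point x_i
    h = float(inf_s)  # plain float so the shifted inputs stay non-trainable
    deriv = 0.0
    for x_i in np.linspace(-1, 1, 10, requires_grad=False):
        dqnn = (quantum_neural_net(weights, x_i + h) - quantum_neural_net(weights, x_i)) / h
        deriv = deriv + (dqnn - f(x_i)) ** 2
    return np.sqrt(boundary) + np.sqrt(deriv)

# and then, if I understand step_and_cost correctly, something like:
# var = var_init
# for it in range(50):
#     var, _cost = opt.step_and_cost(loss_fn, var)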
Thanks in advance!