Multi-dimensional Regression using CV-QNN

Can anyone tell me how to do multi-dimensional regression (like we do with ANNs/linear regression)? In my understanding, we have to generalize the code in the curve-fitting tutorial, but I am having a hard time feeding in multi-dimensional inputs.

Hi @P.Kairon! Do you have a small code example you could share to illustrate your multi-dimensional inputs?


This is the Boston housing dataset. I want to predict the output column using F1, F2, and F3.

import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import AdamOptimizer
import pandas as pd
from sklearn.model_selection import train_test_split
from pennylane.templates.layers import CVNeuralNetLayers
from pennylane.init import cvqnn_layers_all
from pennylane.templates.embeddings import DisplacementEmbedding

data = np.loadtxt("sine.txt")

from sklearn import preprocessing

mm_scaler = preprocessing.MinMaxScaler()
data = mm_scaler.fit_transform(data)

print(data.shape)

X = data[:, 0:3]
Y = data[:, 3]

xtr, xt, ytr, yt = train_test_split(X, Y, test_size=0.3)
dev = qml.device("strawberryfields.fock", wires=3, cutoff_dim=50)


def circuit(*pars):
    CVNeuralNetLayers(*pars, wires=[0, 2])


@qml.qnode(dev)
def quantum_neural_net(*var, x1=None, x2=None, x3=None):
    # Encode input x into quantum state
    qml.Displacement(x1, 0.0, wires=0)
    qml.Displacement(x2, 0.0, wires=1)
    qml.Displacement(x3, 0.0, wires=2)

    for v in var:
        circuit(*v)

    return qml.expval(qml.X(0))


def square_loss(labels, predictions):
    loss = 0
    for l, p in zip(labels, predictions):
        loss = loss + (l - p) ** 2

    loss = loss / len(labels)
    return loss


def cost(var, features, labels):
    preds = [quantum_neural_net(var, x1=xlo[0], x2=xlo[1], x3=xlo[2]) for xlo in features]
    return square_loss(labels, preds)


init_pars = cvqnn_layers_all(n_layers=1, n_wires=3, seed=None)
print(init_pars)
opt = AdamOptimizer(0.01, beta1=0.9, beta2=0.999)

var = init_pars

for it in range(50):
    var = opt.step(lambda v: cost(v, xtr, ytr), var)
    print("Iter: {:5d} | Cost: {:0.7f} ".format(it + 1, cost(var, xtr, ytr)))

predics = [quantum_neural_net(var, x1=xlo[0], x2=xlo[1], x3=xlo[2]) for xlo in xt]

import matplotlib.pyplot as plt

plt.figure(figsize=(20, 10))
plt.plot(predics)
plt.plot(yt)

I keep getting a "wrong input shape detected" error when I run the code.
Also, could you tell me how to do the same thing using the data re-uploading approach?

Hi @P.Kairon, I can’t seem to execute your code example; I get the error:

Traceback (most recent call last):
  File "test2.py", line 50, in <module>
    Y = data[:, 3]
IndexError: index 3 is out of bounds for axis 1 with size 2

Since I can’t execute your code I can’t verify the solution for sure, but it looks like your QNode is calling circuit incorrectly; a for loop should not be required if we redefine the ansatz:

def circuit(pars):
    CVNeuralNetLayers(*pars, wires=[0, 1, 2])

@qml.qnode(dev)
def quantum_neural_net(*var, x1=None, x2=None, x3=None):
    # Encode input x into quantum state
    qml.Displacement(x1, 0.0, wires=0)
    qml.Displacement(x2, 0.0, wires=1)
    qml.Displacement(x3, 0.0, wires=2)

    circuit(var)

    return qml.expval(qml.X(0))

Apologies @P.Kairon, it looks like there was a typo in my previous reply. The QNode should look like this:

def circuit(pars):
    CVNeuralNetLayers(*pars, wires=[0, 1, 2])

@qml.qnode(dev)
def quantum_neural_net(var, x1=None, x2=None, x3=None):
    # Encode input x into quantum state
    qml.Displacement(x1, 0.0, wires=0)
    qml.Displacement(x2, 0.0, wires=1)
    qml.Displacement(x3, 0.0, wires=2)

    circuit(var)

    return qml.expval(qml.X(0))
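
As a quick sanity check before training, you can evaluate the QNode on a single sample. A minimal sketch, assuming the xtr and init_pars variables from your script:

# Forward pass on one training sample (sketch; xtr/init_pars come from your code)
sample = xtr[0]
out = quantum_neural_net(init_pars, x1=sample[0], x2=sample[1], x3=sample[2])
print(out)  # a single expectation value of the x-quadrature on wire 0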

I’ve attached a working Jupyter notebook example below if you want to explore it further!

Untitled.ipynb (11.6 KB)


Hey @josh, thanks for the help, it runs fine now. But it is very slow, since I can’t switch simulators (correct me if I’m wrong here).
Is there any way to speed up the computation?
Can I interface it with TF?
My thought is to run it on a GPU, but I’m not sure how to do that.

Hi @P.Kairon, glad it’s now working! Unfortunately, the Strawberry Fields Fock backend is quite computationally intensive: the memory required for a simulation scales like D^N for a cutoff of D and N wires. You can try decreasing the cutoff or the number of wires to see a speed improvement.
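
As a rough back-of-the-envelope illustration (assuming 16 bytes per complex128 amplitude; a sketch, not an exact profile of the backend):

# A pure Fock state on N wires with cutoff D stores D**N complex amplitudes
D, N = 50, 3
print(D**N * 16 / 1e6, "MB")  # ~2 MB for the state vector alone; applying
                              # gates and simulating mixed states scale much worse

You can see why decreasing either D or N helps.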

Strawberry Fields itself now comes with a simulator backend that supports TensorFlow 2.0, which works on GPUs. This is not yet supported via PennyLane, but we are working on integrating this backend!

Hi @josh, thanks for the info. I reached out to Aroosa previously regarding a similar problem. She suggested using data re-uploading to encode all three features into a single wire. Could you please suggest, and if possible demonstrate, the necessary changes I need to make to the code to do that?

Hi @P.Kairon,

It would be an interesting problem to try this with the data re-uploading technique :slight_smile: See our tutorial and this paper for more details.

You can simply change the code above as follows:

dev = qml.device('strawberryfields.fock', wires=1, cutoff_dim=1)


def circuit(pars):
    CVNeuralNetLayers(*pars, wires=0)

@qml.qnode(dev)
def quantum_neural_net(var, x1=None, x2=None, x3=None):

    qml.Displacement(x1, 0.0, wires=0)
    qml.Displacement(x2, 0.0, wires=0)
    qml.Displacement(x3, 0.0, wires=0)
    
    circuit(var)
    return qml.expval(qml.X(0))

The cost function can stay the same.
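
One detail worth noting (added here for completeness): the parameter initialization also has to match the single-wire device, e.g.

init_pars = cvqnn_layers_all(n_layers=1, n_wires=1, seed=None)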

You can try varying the number of layers in CVNeuralNetLayers to see if your model works better. Another trick is to re-upload the data as follows:

dev = qml.device('strawberryfields.fock', wires=1, cutoff_dim=1)


def circuit(pars):
    CVNeuralNetLayers(*pars, wires=0)

@qml.qnode(dev)
def quantum_neural_net(var1, var2, x1=None, x2=None, x3=None):

    qml.Displacement(x1, 0.0, wires=0)
    qml.Displacement(x2, 0.0, wires=0)
    qml.Displacement(x3, 0.0, wires=0)
    
    circuit(var1)

    qml.Displacement(x1, 0.0, wires=0)
    qml.Displacement(x2, 0.0, wires=0)
    qml.Displacement(x3, 0.0, wires=0)
    
    circuit(var2)
    return qml.expval(qml.X(0))
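
With two positional parameter sets you will also need to adapt the cost function and the optimizer call. One option is to pack both sets into a single variable; this is a rough sketch (assuming the square_loss, xtr, ytr, and opt objects defined earlier, and that your PennyLane version can differentiate through nested parameter lists):

init_pars1 = cvqnn_layers_all(n_layers=1, n_wires=1, seed=0)
init_pars2 = cvqnn_layers_all(n_layers=1, n_wires=1, seed=1)

def cost(var, features, labels):
    # var packs both layers' parameters so the optimizer updates them together
    var1, var2 = var
    preds = [quantum_neural_net(var1, var2, x1=f[0], x2=f[1], x3=f[2]) for f in features]
    return square_loss(labels, preds)

var = [init_pars1, init_pars2]
var = opt.step(lambda v: cost(v, xtr, ytr), var)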

Hope this helps!

Thanks a lot @AroosaIjaz. The first method worked; however, we have to increase the cutoff, otherwise the cost function stays constant. The second method is giving errors, though.

Also, the model is performing really badly: the cost function plateaus around 0.02 and sometimes even starts climbing back up. Could you please take a look at the uploaded code (data uploaded as well)? Sorry for the trouble, I have a deadline coming up :sweat: @AroosaIjaz @josh

https://drive.google.com/drive/folders/13cK5lmcmEWvY1JZ68lyRdAZ9-iHghCX8?usp=sharing

Hi @P.Kairon

We’re glad to hear that @AroosaIjaz’s proposed method worked. Indeed, you’ll need to set the cutoff dimension to a larger number to ensure numerical accuracy/stability; the value of 1 is just a placeholder.

You mentioned that the second method is giving errors. Would you be able to provide the traceback/error message here so we can try to help?

Regarding how well your model is performing: we can see if we can spot anything obvious and make suggestions, but we aren’t necessarily able to debug code that is error-free but simply achieves unsatisfactory outputs.

My first guess is that you will need to check whether the cutoff level is sufficiently high to ensure numerical accuracy (this is a tradeoff with the amount of time/memory you need for the simulation). With too low a cutoff, you run the risk that some operations (like Displacement, Squeezing, Cubic Phase, etc.) push the quantum state outside the space given by the cutoff. One way to check this would be to see if your state has a trace much less than 1 (equivalently, an expectation value of the identity much less than 1).
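
One way to implement this check is a small companion QNode that returns the identity expectation; a minimal sketch mirroring the single-wire circuit above (assuming the same dev and circuit definitions):

@qml.qnode(dev)
def trace_check(var, x1=None, x2=None, x3=None):
    qml.Displacement(x1, 0.0, wires=0)
    qml.Displacement(x2, 0.0, wires=0)
    qml.Displacement(x3, 0.0, wires=0)
    circuit(var)
    # <I> equals the trace of the truncated state; values well below 1
    # indicate the cutoff is clipping the state
    return qml.expval(qml.Identity(0))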

Hi @nathan, thanks for the tip. I fixed the code using Aroosa’s second method and it works just fine now (although it’s twice as slow).
Regarding training, the query I had is that the trend of the cost function is quite erratic:

Iter:     1 | Cost: 2.4273307 
Iter:     2 | Cost: 0.2041072 
Iter:     3 | Cost: 0.1400610 
Iter:     4 | Cost: 0.1216191 
Iter:     5 | Cost: 0.1082934 
Iter:     6 | Cost: 0.0991520 
Iter:     7 | Cost: 0.0931570 
Iter:     8 | Cost: 0.0889703 
Iter:     9 | Cost: 0.0857500 
Iter:    10 | Cost: 0.0830959 
Iter:    11 | Cost: 0.0808232 
Iter:    12 | Cost: 0.0788374 
Iter:    13 | Cost: 0.0770803 
Iter:    14 | Cost: 0.0755115 
Iter:    15 | Cost: 0.0741007 
Iter:    16 | Cost: 0.0728246 
Iter:    17 | Cost: 0.0716650 
Iter:    18 | Cost: 0.0706078 
Iter:    19 | Cost: 0.0696417 
Iter:    20 | Cost: 0.0687574 
Iter:    21 | Cost: 0.0679475 
Iter:    22 | Cost: 0.0672052 
Iter:    23 | Cost: 0.0665246 
Iter:    24 | Cost: 0.0659002 
Iter:    25 | Cost: 0.0653266 
Iter:    26 | Cost: 0.0647990 
Iter:    27 | Cost: 0.0643127 
Iter:    28 | Cost: 0.0638634 
Iter:    29 | Cost: 0.0634470 
Iter:    30 | Cost: 0.0630599 
Iter:    31 | Cost: 0.0626989 
Iter:    32 | Cost: 0.0623612 
Iter:    33 | Cost: 0.0620442 
Iter:    34 | Cost: 0.0617456 
Iter:    35 | Cost: 0.0614637 
Iter:    36 | Cost: 0.0611967 
Iter:    37 | Cost: 0.0609432 
Iter:    38 | Cost: 0.0607021 
Iter:    39 | Cost: 0.0604721 
Iter:    40 | Cost: 0.0602524 
Iter:    41 | Cost: 0.0600422 
Iter:    42 | Cost: 0.0598408 
Iter:    43 | Cost: 0.0596474 
Iter:    44 | Cost: 0.0594616 
Iter:    45 | Cost: 0.0592829 
Iter:    46 | Cost: 0.0591108 
Iter:    47 | Cost: 0.0589449 
Iter:    48 | Cost: 0.0587848 
Iter:    49 | Cost: 0.0586302 
Iter:    50 | Cost: 0.0584808 
Iter:    51 | Cost: 0.0583364 
Iter:    52 | Cost: 0.0581967 
Iter:    53 | Cost: 0.0580613 
Iter:    54 | Cost: 0.0579302 
Iter:    55 | Cost: 0.0578031 
Iter:    56 | Cost: 0.0576799 
Iter:    57 | Cost: 0.0575603 
Iter:    58 | Cost: 0.0574441 
Iter:    59 | Cost: 0.0573313 
Iter:    60 | Cost: 0.0572217 
Iter:    61 | Cost: 0.0571151 
Iter:    62 | Cost: 0.0570114 
Iter:    63 | Cost: 0.0569105 
Iter:    64 | Cost: 0.0568123 
Iter:    65 | Cost: 0.0567166 
Iter:    66 | Cost: 0.0566233 
Iter:    67 | Cost: 0.0565324 
Iter:    68 | Cost: 0.0564438 
Iter:    69 | Cost: 0.0563573 
Iter:    70 | Cost: 0.0562728 
Iter:    71 | Cost: 0.0561904 
Iter:    72 | Cost: 0.0561098 
Iter:    73 | Cost: 0.0560310 
Iter:    74 | Cost: 0.0559540 
Iter:    75 | Cost: 0.0558786 
Iter:    76 | Cost: 0.0558049 
Iter:    77 | Cost: 0.0557326 
Iter:    78 | Cost: 0.0556619 
Iter:    79 | Cost: 0.0555925 
Iter:    80 | Cost: 0.0555245 
Iter:    81 | Cost: 0.0554578 
Iter:    82 | Cost: 0.0553923 
Iter:    83 | Cost: 0.0553281 
Iter:    84 | Cost: 0.0552652 
Iter:    85 | Cost: 0.0552037 
Iter:    86 | Cost: 0.0551445 
Iter:    87 | Cost: 0.0550894 
Iter:    88 | Cost: 0.0550442 
Iter:    89 | Cost: 0.0550218 
Iter:    90 | Cost: 0.0550725 
Iter:    91 | Cost: 0.0552614 
Iter:    92 | Cost: 0.0560460 
Iter:    93 | Cost: 0.0570890 
Iter:    94 | Cost: 0.0618568 
Iter:    95 | Cost: 0.0581636 
Iter:    96 | Cost: 0.0635504 
Iter:    97 | Cost: 0.0564256 
Iter:    98 | Cost: 0.0580931 
Iter:    99 | Cost: 0.0564514 
Iter:   100 | Cost: 0.0582472 
Iter:   101 | Cost: 0.0563340 
Iter:   102 | Cost: 0.0580289 
Iter:   103 | Cost: 0.0561804 
Iter:   104 | Cost: 0.0577591 
Iter:   105 | Cost: 0.0560424 
Iter:   106 | Cost: 0.0575518 
Iter:   107 | Cost: 0.0559257 
Iter:   108 | Cost: 0.0574009 
Iter:   109 | Cost: 0.0558230 
Iter:   110 | Cost: 0.0572796 
Iter:   111 | Cost: 0.0557270 
Iter:   112 | Cost: 0.0571685 
Iter:   113 | Cost: 0.0556342 
Iter:   114 | Cost: 0.0570599 
Iter:   115 | Cost: 0.0555434 
Iter:   116 | Cost: 0.0569529 
Iter:   117 | Cost: 0.0554550 
Iter:   118 | Cost: 0.0568490 
Iter:   119 | Cost: 0.0553692 
Iter:   120 | Cost: 0.0567494 
Iter:   121 | Cost: 0.0552862 
Iter:   122 | Cost: 0.0566542 
Iter:   123 | Cost: 0.0552057 
Iter:   124 | Cost: 0.0565631 
Iter:   125 | Cost: 0.0551276 
Iter:   126 | Cost: 0.0564758 
Iter:   127 | Cost: 0.0550515 
Iter:   128 | Cost: 0.0563917 
Iter:   129 | Cost: 0.0549774 
Iter:   130 | Cost: 0.0563108 
Iter:   131 | Cost: 0.0549051 
Iter:   132 | Cost: 0.0562326 
Iter:   133 | Cost: 0.0548343 
Iter:   134 | Cost: 0.0561571 
Iter:   135 | Cost: 0.0547650 
Iter:   136 | Cost: 0.0560840 
Iter:   137 | Cost: 0.0546971 
Iter:   138 | Cost: 0.0560131 
Iter:   139 | Cost: 0.0546303 
Iter:   140 | Cost: 0.0559443 
Iter:   141 | Cost: 0.0545646 
Iter:   142 | Cost: 0.0558774 
Iter:   143 | Cost: 0.0544999 
Iter:   144 | Cost: 0.0558121 
Iter:   145 | Cost: 0.0544360 
Iter:   146 | Cost: 0.0557484 
Iter:   147 | Cost: 0.0543728 
Iter:   148 | Cost: 0.0556860 
Iter:   149 | Cost: 0.0543102 
Iter:   150 | Cost: 0.0556248 
Iter:   151 | Cost: 0.0542481 
Iter:   152 | Cost: 0.0555647 
Iter:   153 | Cost: 0.0541864 
Iter:   154 | Cost: 0.0555055 
Iter:   155 | Cost: 0.0541249 
Iter:   156 | Cost: 0.0554471 
Iter:   157 | Cost: 0.0540636 
Iter:   158 | Cost: 0.0553892 
Iter:   159 | Cost: 0.0540023 
Iter:   160 | Cost: 0.0553319 
Iter:   161 | Cost: 0.0539410 
Iter:   162 | Cost: 0.0552748 
Iter:   163 | Cost: 0.0538794 
Iter:   164 | Cost: 0.0552178 
Iter:   165 | Cost: 0.0538175 
Iter:   166 | Cost: 0.0551609 
Iter:   167 | Cost: 0.0537552 
Iter:   168 | Cost: 0.0551038 
Iter:   169 | Cost: 0.0536923 
Iter:   170 | Cost: 0.0550463 
Iter:   171 | Cost: 0.0536286 
Iter:   172 | Cost: 0.0549882 
Iter:   173 | Cost: 0.0535640 
Iter:   174 | Cost: 0.0549294 
Iter:   175 | Cost: 0.0534984 
Iter:   176 | Cost: 0.0548696 
Iter:   177 | Cost: 0.0534316 
Iter:   178 | Cost: 0.0548086 
Iter:   179 | Cost: 0.0533634 
Iter:   180 | Cost: 0.0547462 
Iter:   181 | Cost: 0.0532937 
Iter:   182 | Cost: 0.0546820 
Iter:   183 | Cost: 0.0532221 
Iter:   184 | Cost: 0.0546158 
Iter:   185 | Cost: 0.0531485 
Iter:   186 | Cost: 0.0545472 
Iter:   187 | Cost: 0.0530728 
Iter:   188 | Cost: 0.0544760 
Iter:   189 | Cost: 0.0529946 
Iter:   190 | Cost: 0.0544017 
Iter:   191 | Cost: 0.0529137 
Iter:   192 | Cost: 0.0543240 
Iter:   193 | Cost: 0.0528299 
Iter:   194 | Cost: 0.0542425 
Iter:   195 | Cost: 0.0527429 
Iter:   196 | Cost: 0.0541567 
Iter:   197 | Cost: 0.0526525 
Iter:   198 | Cost: 0.0540663 
Iter:   199 | Cost: 0.0525585 
Iter:   200 | Cost: 0.0539707 
Iter:   201 | Cost: 0.0524606 
Iter:   202 | Cost: 0.0538697 
Iter:   203 | Cost: 0.0523587 
Iter:   204 | Cost: 0.0537628 
Iter:   205 | Cost: 0.0522526 
Iter:   206 | Cost: 0.0536497 
Iter:   207 | Cost: 0.0521422 
Iter:   208 | Cost: 0.0535301 
Iter:   209 | Cost: 0.0520275 
Iter:   210 | Cost: 0.0534039 
Iter:   211 | Cost: 0.0519084 
Iter:   212 | Cost: 0.0532711 
Iter:   213 | Cost: 0.0517852 
Iter:   214 | Cost: 0.0531318 
Iter:   215 | Cost: 0.0516579 
Iter:   216 | Cost: 0.0529861 
Iter:   217 | Cost: 0.0515268 
Iter:   218 | Cost: 0.0528346 
Iter:   219 | Cost: 0.0513924 
Iter:   220 | Cost: 0.0526780 
Iter:   221 | Cost: 0.0512549 
Iter:   222 | Cost: 0.0525170 
Iter:   223 | Cost: 0.0511150 
Iter:   224 | Cost: 0.0523525 
Iter:   225 | Cost: 0.0509732 
Iter:   226 | Cost: 0.0521858 
Iter:   227 | Cost: 0.0508299 
Iter:   228 | Cost: 0.0520179 
Iter:   229 | Cost: 0.0506859 
Iter:   230 | Cost: 0.0518501 
Iter:   231 | Cost: 0.0505415 
Iter:   232 | Cost: 0.0516836 
Iter:   233 | Cost: 0.0503974 
Iter:   234 | Cost: 0.0515194 
Iter:   235 | Cost: 0.0502540 
Iter:   236 | Cost: 0.0513586 
Iter:   237 | Cost: 0.0501116 
Iter:   238 | Cost: 0.0512021 
Iter:   239 | Cost: 0.0499705 
Iter:   240 | Cost: 0.0510504 
Iter:   241 | Cost: 0.0498309 
Iter:   242 | Cost: 0.0509039 
Iter:   243 | Cost: 0.0496928 
Iter:   244 | Cost: 0.0507631 
Iter:   245 | Cost: 0.0495564 
Iter:   246 | Cost: 0.0506279 
Iter:   247 | Cost: 0.0494215 
Iter:   248 | Cost: 0.0504982 
Iter:   249 | Cost: 0.0492880 
Iter:   250 | Cost: 0.0503739 
Iter:   251 | Cost: 0.0491560 
Iter:   252 | Cost: 0.0502547 
Iter:   253 | Cost: 0.0490251 
Iter:   254 | Cost: 0.0501402 
Iter:   255 | Cost: 0.0488953 
Iter:   256 | Cost: 0.0500300 
Iter:   257 | Cost: 0.0487664 
Iter:   258 | Cost: 0.0499237 
Iter:   259 | Cost: 0.0486384 
Iter:   260 | Cost: 0.0498210 
Iter:   261 | Cost: 0.0485110 
Iter:   262 | Cost: 0.0497213 
Iter:   263 | Cost: 0.0483843 
Iter:   264 | Cost: 0.0496245 
Iter:   265 | Cost: 0.0482581 
Iter:   266 | Cost: 0.0495301 
Iter:   267 | Cost: 0.0481325 
Iter:   268 | Cost: 0.0494380 
Iter:   269 | Cost: 0.0480073 
Iter:   270 | Cost: 0.0493478 
Iter:   271 | Cost: 0.0478827 
Iter:   272 | Cost: 0.0492594 
Iter:   273 | Cost: 0.0477586 
Iter:   274 | Cost: 0.0491725 
Iter:   275 | Cost: 0.0476350 
Iter:   276 | Cost: 0.0490871 
Iter:   277 | Cost: 0.0475120 
Iter:   278 | Cost: 0.0490029 
Iter:   279 | Cost: 0.0473898 
Iter:   280 | Cost: 0.0489199 
Iter:   281 | Cost: 0.0472682 
Iter:   282 | Cost: 0.0488379 
Iter:   283 | Cost: 0.0471474 
Iter:   284 | Cost: 0.0487569 
Iter:   285 | Cost: 0.0470274 
Iter:   286 | Cost: 0.0486767 
Iter:   287 | Cost: 0.0469084 
Iter:   288 | Cost: 0.0485973 
Iter:   289 | Cost: 0.0467903 
Iter:   290 | Cost: 0.0485186 
Iter:   291 | Cost: 0.0466733 
Iter:   292 | Cost: 0.0484405 
Iter:   293 | Cost: 0.0465575 
Iter:   294 | Cost: 0.0483631 
Iter:   295 | Cost: 0.0464428 
Iter:   296 | Cost: 0.0482862 
Iter:   297 | Cost: 0.0463295 
Iter:   298 | Cost: 0.0482098 
Iter:   299 | Cost: 0.0462174 
Iter:   300 | Cost: 0.0481338 
Iter:   301 | Cost: 0.0461068 
Iter:   302 | Cost: 0.0480584 
Iter:   303 | Cost: 0.0459975 
Iter:   304 | Cost: 0.0479833 
Iter:   305 | Cost: 0.0458898 
Iter:   306 | Cost: 0.0479086 
Iter:   307 | Cost: 0.0457836 
Iter:   308 | Cost: 0.0478343 
Iter:   309 | Cost: 0.0456790 
Iter:   310 | Cost: 0.0477603 
Iter:   311 | Cost: 0.0455760 
Iter:   312 | Cost: 0.0476867 
Iter:   313 | Cost: 0.0454747 
Iter:   314 | Cost: 0.0476135 
Iter:   315 | Cost: 0.0453750 
Iter:   316 | Cost: 0.0475406 
Iter:   317 | Cost: 0.0452771 
Iter:   318 | Cost: 0.0474681 
Iter:   319 | Cost: 0.0451808 
Iter:   320 | Cost: 0.0473959 
Iter:   321 | Cost: 0.0450863 
Iter:   322 | Cost: 0.0473241 
Iter:   323 | Cost: 0.0449935 
Iter:   324 | Cost: 0.0472527 
Iter:   325 | Cost: 0.0449025 
Iter:   326 | Cost: 0.0471817 
Iter:   327 | Cost: 0.0448132 
Iter:   328 | Cost: 0.0471111 
Iter:   329 | Cost: 0.0447256 
Iter:   330 | Cost: 0.0470410 
Iter:   331 | Cost: 0.0446398 
Iter:   332 | Cost: 0.0469713 
Iter:   333 | Cost: 0.0445557 
Iter:   334 | Cost: 0.0469021 
Iter:   335 | Cost: 0.0444733 
Iter:   336 | Cost: 0.0468334 
Iter:   337 | Cost: 0.0443926 
Iter:   338 | Cost: 0.0467652 
Iter:   339 | Cost: 0.0443136 
Iter:   340 | Cost: 0.0466977 
Iter:   341 | Cost: 0.0442362 
Iter:   342 | Cost: 0.0466305 
Iter:   343 | Cost: 0.0441606 
Iter:   344 | Cost: 0.0465643 
Iter:   345 | Cost: 0.0440863 
Iter:   346 | Cost: 0.0464978 
Iter:   347 | Cost: 0.0440142 
Iter:   348 | Cost: 0.0464339 
Iter:   349 | Cost: 0.0439425 
Iter:   350 | Cost: 0.0463667 
Iter:   351 | Cost: 0.0438747 
Iter:   352 | Cost: 0.0463081 
Iter:   353 | Cost: 0.0438036 
Iter:   354 | Cost: 0.0462337 
Iter:   355 | Cost: 0.0437442 
Iter:   356 | Cost: 0.0461949 
Iter:   357 | Cost: 0.0436644 
Iter:   358 | Cost: 0.0460819 
Iter:   359 | Cost: 0.0436332 
Iter:   360 | Cost: 0.0461316 
Iter:   361 | Cost: 0.0435007 
Iter:   362 | Cost: 0.0458281 
Iter:   363 | Cost: 0.0435954 
Iter:   364 | Cost: 0.0463018 
Iter:   365 | Cost: 0.0431958 
Iter:   366 | Cost: 0.0450556 
Iter:   367 | Cost: 0.0438751 
Iter:   368 | Cost: 0.0474983 
Iter:   369 | Cost: 0.0424486 
Iter:   370 | Cost: 0.0425879 
Iter:   371 | Cost: 0.0430327 
Iter:   372 | Cost: 0.0456495 
Iter:   373 | Cost: 0.0437163 
Iter:   374 | Cost: 0.0478882 
Iter:   375 | Cost: 0.0421729 
Iter:   376 | Cost: 0.0421639 
Iter:   377 | Cost: 0.0424097 
Iter:   378 | Cost: 0.0440527 
Iter:   379 | Cost: 0.0448715 
Iter:   380 | Cost: 0.0516406 
Iter:   381 | Cost: 0.0447224 
Iter:   382 | Cost: 0.0427696 
Iter:   383 | Cost: 0.0438083 
Iter:   384 | Cost: 0.0432244 
Iter:   385 | Cost: 0.0458632 
Iter:   386 | Cost: 0.0423672 
Iter:   387 | Cost: 0.0431781 
Iter:   388 | Cost: 0.0434514 
Iter:   389 | Cost: 0.0473440 
Iter:   390 | Cost: 0.0418592 
Iter:   391 | Cost: 0.0417243 
Iter:   392 | Cost: 0.0416557 
Iter:   393 | Cost: 0.0418200 
Iter:   394 | Cost: 0.0426119 
Iter:   395 | Cost: 0.0467820 
Iter:   396 | Cost: 0.0423448 
Iter:   397 | Cost: 0.0451811 
Iter:   398 | Cost: 0.0433522 
Iter:   399 | Cost: 0.0482971 
Iter:   400 | Cost: 0.0416821 
Iter:   401 | Cost: 0.0416682 
Iter:   402 | Cost: 0.0421391 
Iter:   403 | Cost: 0.0432173 
Iter:   404 | Cost: 0.0484561 
Iter:   405 | Cost: 0.0415947 
Iter:   406 | Cost: 0.0416524 
Iter:   407 | Cost: 0.0424602 
Iter:   408 | Cost: 0.0436776 
Iter:   409 | Cost: 0.0496475 
Iter:   410 | Cost: 0.0427500 
Iter:   411 | Cost: 0.0425355 
Iter:   412 | Cost: 0.0447558 
Iter:   413 | Cost: 0.0420706 
Iter:   414 | Cost: 0.0434535 
Iter:   415 | Cost: 0.0428268 
Iter:   416 | Cost: 0.0463069 
Iter:   417 | Cost: 0.0414192 
Iter:   418 | Cost: 0.0413535 
Iter:   419 | Cost: 0.0414895 
Iter:   420 | Cost: 0.0426562 
Iter:   421 | Cost: 0.0438325 
Iter:   422 | Cost: 0.0501627 
Iter:   423 | Cost: 0.0437604 
Iter:   424 | Cost: 0.0418823 
Iter:   425 | Cost: 0.0424679 
Iter:   426 | Cost: 0.0423565 
Iter:   427 | Cost: 0.0448655 
Iter:   428 | Cost: 0.0416498 
Iter:   429 | Cost: 0.0425804 
Iter:   430 | Cost: 0.0427363 
Iter:   431 | Cost: 0.0466744 
Iter:   432 | Cost: 0.0411794 
Iter:   433 | Cost: 0.0410191 
Iter:   434 | Cost: 0.0409096 
Iter:   435 | Cost: 0.0409053 
Iter:   436 | Cost: 0.0413670 
Iter:   437 | Cost: 0.0430228 
Iter:   438 | Cost: 0.0497922 
Iter:   439 | Cost: 0.0425295 
Iter:   440 | Cost: 0.0422904 
Iter:   441 | Cost: 0.0451904 
Iter:   442 | Cost: 0.0412436 
Iter:   443 | Cost: 0.0416045 
Iter:   444 | Cost: 0.0421869 
Iter:   445 | Cost: 0.0458171 
Iter:   446 | Cost: 0.0410338 
Iter:   447 | Cost: 0.0412408 
Iter:   448 | Cost: 0.0418920 
Iter:   449 | Cost: 0.0454691 
Iter:   450 | Cost: 0.0412495 
Iter:   451 | Cost: 0.0425542 
Iter:   452 | Cost: 0.0430006 
Iter:   453 | Cost: 0.0482853 
Iter:   454 | Cost: 0.0419843 
Iter:   455 | Cost: 0.0416725 
Iter:   456 | Cost: 0.0430071 
Iter:   457 | Cost: 0.0419185 
Iter:   458 | Cost: 0.0443079 
Iter:   459 | Cost: 0.0413219 
Iter:   460 | Cost: 0.0423766 
Iter:   461 | Cost: 0.0423210 
Iter:   462 | Cost: 0.0460910 
Iter:   463 | Cost: 0.0408272 
Iter:   464 | Cost: 0.0406550 
Iter:   465 | Cost: 0.0405214 
Iter:   466 | Cost: 0.0404296 
Iter:   467 | Cost: 0.0404438 
Iter:   468 | Cost: 0.0408829 
Iter:   469 | Cost: 0.0437320 
Iter:   470 | Cost: 0.0444360 
Iter:   471 | Cost: 0.0520683 
Iter:   472 | Cost: 0.0468929 
Iter:   473 | Cost: 0.0428927 
Iter:   474 | Cost: 0.0414855 
Iter:   475 | Cost: 0.0410928 
Iter:   476 | Cost: 0.0408398 
Iter:   477 | Cost: 0.0406601 
Iter:   478 | Cost: 0.0405620 
Iter:   479 | Cost: 0.0406743 
Iter:   480 | Cost: 0.0412965 
Iter:   481 | Cost: 0.0447299 
Iter:   482 | Cost: 0.0414719 
Iter:   483 | Cost: 0.0446578 
Iter:   484 | Cost: 0.0411153 
Iter:   485 | Cost: 0.0428915 
Iter:   486 | Cost: 0.0422232 
Iter:   487 | Cost: 0.0464956 
Iter:   488 | Cost: 0.0407464 
Iter:   489 | Cost: 0.0406748 
Iter:   490 | Cost: 0.0410414 
Iter:   491 | Cost: 0.0417974 
Iter:   492 | Cost: 0.0458989 
Iter:   493 | Cost: 0.0404677 
Iter:   494 | Cost: 0.0403388 
Iter:   495 | Cost: 0.0403216 
Iter:   496 | Cost: 0.0407318 
Iter:   497 | Cost: 0.0422271 
Iter:   498 | Cost: 0.0484682 
Iter:   499 | Cost: 0.0414731 
Iter:   500 | Cost: 0.0414681

The cost comes down from ~2.4, then oscillates up and down many times while taking very small steps (for a relatively high learning rate), and gives weird results, as in the attached photo.

# opt = AdamOptimizer(0.05, beta1=0.9, beta2=0.999)
opt = GradientDescentOptimizer(stepsize=0.05)
# opt = RMSPropOptimizer(stepsize=0.01, decay=0.9)
var1 = init_pars1
# var2 = init_pars2
for it in range(500):
    var1 = opt.step(lambda v: cost(v, x, y), var1)
    c = cost(var1, x, y)  # evaluate once and reuse
    print("Iter: {:5d} | Cost: {:0.7f} ".format(it + 1, c))
    if c < 0.2:
        opt.update_stepsize(0.02)

So I just want to know: is there anything I can change in the hyperparameters, or how can I adjust the learning rate (e.g., by tweaking Adam’s first- and second-moment parameters)?
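
For reference, one common pattern is to decay the stepsize on a fixed schedule rather than reacting to a single cost threshold; a rough sketch under the same setup (the halve-every-100-iterations schedule is purely illustrative):

stepsize = 0.05
opt = AdamOptimizer(stepsize, beta1=0.9, beta2=0.999)
var1 = init_pars1
for it in range(500):
    var1 = opt.step(lambda v: cost(v, x, y), var1)
    if (it + 1) % 100 == 0:
        # halve the learning rate every 100 iterations
        stepsize /= 2
        opt.update_stepsize(stepsize)

Increasing beta2 (e.g. towards 0.999–0.9999) smooths Adam’s second-moment estimate, which can also help damp oscillations like the ones above.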