Hello!
I want to ask how the params.npy variable was obtained. It would be great if you could provide the code, for my better understanding.
Hey @nauvan! Welcome to the forum!
We don’t have the code that we used to generate params.npy. The parameters there were simply obtained by minimizing a cost function akin to what’s presented here.
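If it helps, here is a minimal sketch of how a file like params.npy could be produced: optimize some parameters against a cost function and save them with np.save. The device, ansatz, and cost below are placeholders, not the ones we actually used.

import pennylane as qml
from pennylane import numpy as np  # PennyLane's wrapped NumPy, so the optimizer can differentiate

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    # placeholder ansatz; not the circuit used for the original params.npy
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

def cost(params):
    # toy cost: push the expectation value towards -1
    return (circuit(params) + 1) ** 2

opt = qml.GradientDescentOptimizer(stepsize=0.1)
params = np.random.random(2)

for _ in range(100):
    params = opt.step(cost, params)

np.save("params.npy", params)  # save the optimized parameters to disk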
How do I solve this error? This is my script: it ran with forest.qvm, but after updating to rigetti.qvm the code errors.
It would help to have your full code (copy-pastable, i.e. not a screenshot, so that I can run it on my end) that reproduces the error, so I can help better!
That said, it looks like qnodes is a list of QNodes (more specifically, here). You can do something like this to fix it:
results = [q(params) for q in qnodes]
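If you then want to feed the combined outputs into torch operations like softmax, you will also need a single tensor rather than a Python list. Assuming each QNode returns a torch tensor of its expectation values (this can depend on your PennyLane version), something like this should work:

results_tensor = torch.stack(results)  # one row per model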
Let me know if that helps!
I have tried the answers that have been given, but they failed. This is my code. I hope you can help me solve this problem.
import numpy as np
import torch
import pennylane as qml

# n_features, x_train and y_train come from the dataset (not shown here)

n_wires = 4

dev0 = qml.device("rigetti.qvm", device="4q-qvm")
dev1 = qml.device("qiskit.aer", wires=4)
devs = [dev0, dev1]

def circuit0(params, x=None):
    for i in range(n_wires):
        qml.RX(x[i % n_features], wires=i)
        qml.Rot(*params[1, 0, i], wires=i)
    qml.CZ(wires=[1, 0])
    qml.CZ(wires=[1, 2])
    qml.CZ(wires=[3, 0])
    for i in range(n_wires):
        qml.Rot(*params[1, 1, i], wires=i)
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

def circuit1(params, x=None):
    for i in range(n_wires):
        qml.RX(x[i % n_features], wires=i)
        qml.Rot(*params[0, 0, i], wires=i)
    qml.CZ(wires=[0, 1])
    qml.CZ(wires=[1, 2])
    qml.CZ(wires=[1, 3])
    for i in range(n_wires):
        qml.Rot(*params[0, 1, i], wires=i)
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

qnodes = [
    qml.QNode(circuit0, dev0, interface="torch"),
    qml.QNode(circuit1, dev1, interface="torch"),
]

import tensorflow as tf

n_classes = 2  # number of labels
n_layers = 2

# the first index is for the two models
params = torch.tensor(np.random.random((2, n_layers, n_wires, 3)), requires_grad=True)

iteration = 1

def softmax_ensemble(params, x_point=None):
    results = qnodes(params, x=x_point)
    softmax = torch.nn.functional.softmax(results, dim=1)
    choice = torch.where(softmax == torch.max(softmax))[0][0]
    return softmax[choice]

def cost(params, y_point, x_point=None):
    return torch.sum(torch.abs(softmax_ensemble(params, x_point=x_point) - y_point))

y_soft = torch.tensor(tf.one_hot(y_train, n_classes).numpy(), requires_grad=True)

opt = torch.optim.Adam([params], lr=0.1)  # learning rate

for x_point, y_point in zip(x_train, y_soft):
    opt.zero_grad()
    c = cost(params, y_point=y_point, x_point=x_point)
    c.backward()
    opt.step()
    if iteration % 10 == 0 and iteration > 0:
        print("Iteration : ", iteration)
    iteration += 1
This is the output:
Hey @nauvan,
It looks like you’re not using what I suggested in the code example you provided. Can you give that a try and see if that solves things?
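As a rough sketch, applied to your softmax_ensemble function (assuming each QNode returns a torch tensor of its two expectation values, which can depend on your PennyLane version), it would look something like this:

def softmax_ensemble(params, x_point=None):
    # call each QNode individually instead of calling the list itself
    results = torch.stack([q(params, x=x_point) for q in qnodes])
    softmax = torch.nn.functional.softmax(results, dim=1)
    choice = torch.where(softmax == torch.max(softmax))[0][0]
    return softmax[choice]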
I tried it and this is the result. I don’t know what the problem is now. I hope you can help me solve this.
It’s difficult to help without having the full code for me to run. If you could provide that, that would be a huge help!
That said, it looks like circuit is a list. Printing things just before the error happens is a good way to diagnose and debug your code. Here’s a good article with debugging tips: How to Debug Your Code
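For example, just before the line that fails, you could print out what you’re actually working with (the names below are just illustrative):

print(type(qnodes), len(qnodes))  # confirm this is a plain Python list, not a callable
print(type(qnodes[0]))            # each element should be a QNode
print(params.shape)               # check the shape of the parameter tensor going in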
My script code can be seen here, along with the dataset used. Please correct my code; it would be very helpful to me.
Hey @nauvan,
If you can narrow your issue down to something in PennyLane that you don’t have a great grasp on, I can certainly help you out. But we don’t have the resources to debug code from step 1. This is a great chance for you to do some debugging! It’s an essential skill that anyone who wants to program should have to some degree.
I tried but couldn’t; now the code errors. I need to solve this for my course. What should I do? This is difficult for me.
Hey @nauvan!
We can’t do your homework for you. This is all a part of learning — sometimes it’s difficult! Being able to debug your code is an important skill to practice.