qml.QNSPSAOptimizer() giving AttributeError: 'function' object has no attribute 'construct'

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(params, data):
    qml.AngleEmbedding(data, wires=[0, 1, 2])
    qml.StronglyEntanglingLayers(params, wires=[0, 1, 2])
    return qml.expval(qml.PauliZ(2))

data = np.random.random([3], requires_grad=False)
params = np.random.random(qml.StronglyEntanglingLayers.shape(3, 3), requires_grad=True)

def cost(params, single_sample):
    return (1 - circuit(params, single_sample)) ** 2

opt = qml.QNSPSAOptimizer()

for it in range(10):
    cost_fn = lambda p: cost(p, data)
    metric_fn = lambda p: qml.metric_tensor(circuit, approx="block-diag")(p, data)

    params, loss = opt.step_and_cost(cost_fn, params,  metric_tensor_fn=metric_fn)

    print(f"Epoch: {it} | Loss: {loss} |")

The code works with all other optimizers, but I need to use SPSA and QNSPSA. It gives:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[25], line 24
     21 cost_fn = lambda p: cost(p, data)
     22 metric_fn = lambda p: qml.metric_tensor(circuit, approx="block-diag")(p, data)
---> 24 params, loss = opt.step_and_cost(cost_fn, params, metric_tensor_fn=metric_fn)
     26 print(f"Epoch: {it} | Loss: {loss} |")

File ~/.conda/envs/cent7/2020.11-py38/xyz/lib/python3.8/site-packages/pennylane/optimize/qnspsa.py:184, in QNSPSAOptimizer.step_and_cost(self, cost, *args, **kwargs)
    171 def step_and_cost(self, cost, *args, **kwargs):
    172     r"""Update trainable parameters with one step of the optimizer and return
    173     the corresponding objective function value after the step.
    174
    (...)
    182     function output prior to the step
    183     """
--> 184     params_next = self._step_core(cost, args, kwargs)
    186     if not self.blocking:
    187         loss_curr = cost(*args, **kwargs)

File ~/.conda/envs/cent7/2020.11-py38/xyz/lib/python3.8/site-packages/pennylane/optimize/qnspsa.py:212, in QNSPSAOptimizer._step_core(self, cost, args, kwargs)
    209 all_tensor_dirs = []
    210 for _ in range(self.resamplings):
    211     # grad_tapes contains 2 tapes for the gradient estimation
--> 212     grad_tapes, grad_dirs = self._get_spsa_grad_tapes(cost, args, kwargs)
    213     # metric_tapes contains 4 tapes for tensor estimation
    214     metric_tapes, tensor_dirs = self._get_tensor_tapes(cost, args, kwargs)

File ~/.conda/envs/cent7/2020.11-py38/xyz/lib/python3.8/site-packages/pennylane/optimize/qnspsa.py:355, in QNSPSAOptimizer._get_spsa_grad_tapes(self, cost, args, kwargs)
    352 args_plus[index] = arg + self.finite_diff_step * direction
    353 args_minus[index] = arg - self.finite_diff_step * direction
--> 355 cost.construct(args_plus, kwargs)
    356 tape_plus = cost.tape.copy(copy_operations=True)
    357 cost.construct(args_minus, kwargs)

AttributeError: 'function' object has no attribute 'construct'

I also tried using regularization, but it did not work. Can you please look into it?

and output of qml.about()

Name: PennyLane
Version: 0.28.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/bhatia87/.conda/envs/cent7/2020.11-py38/xyz/lib/python3.8/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, retworkx, scipy, semantic-version, toml
Required-by: PennyLane-Lightning

Platform info: Linux-3.10.0-1160.108.1.el7.x86_64-x86_64-with-glibc2.10
Python version: 3.8.5
Numpy version: 1.21.0
Scipy version: 1.9.3
Installed devices:

  • default.gaussian (PennyLane-0.28.0)
  • default.mixed (PennyLane-0.28.0)
  • default.qubit (PennyLane-0.28.0)
  • default.qubit.autograd (PennyLane-0.28.0)
  • default.qubit.jax (PennyLane-0.28.0)
  • default.qubit.tf (PennyLane-0.28.0)
  • default.qubit.torch (PennyLane-0.28.0)
  • default.qutrit (PennyLane-0.28.0)
  • null.qubit (PennyLane-0.28.0)
  • lightning.qubit (PennyLane-Lightning-0.30.0)

Hi @Amandeep,

The qml.QNSPSAOptimizer only works with cost functions that are QNodes. This means that you can’t perform the postprocessing that you currently have.
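For illustration, here is a minimal sketch of where the AttributeError comes from (it reuses your circuit as-is; the failing call is left commented out). The optimizer calls cost.construct(...) internally, as you can see at qnspsa.py line 355 in your traceback, and only a QNode has a construct method, so the wrapped cost function fails.

import pennylane as qml
from pennylane import numpy as pnp

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(params, data):
    qml.AngleEmbedding(data, wires=[0, 1, 2])
    qml.StronglyEntanglingLayers(params, wires=[0, 1, 2])
    return qml.expval(qml.PauliZ(2))

data = pnp.random.random([3], requires_grad=False)
params = pnp.random.random(qml.StronglyEntanglingLayers.shape(3, 3), requires_grad=True)

opt = qml.QNSPSAOptimizer()

# Fails: a plain Python function has no .construct() method,
# which QNSPSAOptimizer calls to build its SPSA tapes.
# cost_fn = lambda p: (1 - circuit(p, data)) ** 2
# opt.step_and_cost(cost_fn, params)  # AttributeError

# Works: the QNode itself is passed as the cost.
[params, data], loss = opt.step_and_cost(circuit, params, data)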

On the other hand I see that you’re using a very old version of PennyLane. Are you able to use the current development version of PennyLane? We will be releasing v0.36 next week so you should use that version.

To use the development version you can run pip install git+https://github.com/PennyLaneAI/pennylane.git#egg=pennylane

Starting on Tuesday next week this will be the latest stable version so you can update your PennyLane version by running pip install pennylane --upgrade.

Once you have this version installed you can run the code below. Notice that I’ve changed the optimization loop so that it uses the circuit directly instead of the cost function you had before.

import pennylane as qml
from pennylane import numpy as pnp

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(params, data):
    qml.AngleEmbedding(data, wires=[0, 1, 2])
    qml.StronglyEntanglingLayers(params, wires=[0, 1, 2])
    return qml.expval(qml.PauliZ(2))

data = pnp.random.random([3], requires_grad=False)
params = pnp.random.random(qml.StronglyEntanglingLayers.shape(3, 3), requires_grad=True)

opt = qml.QNSPSAOptimizer()

for it in range(10):
    [params, data], loss = opt.step_and_cost(circuit, params, data) # modified

    print(f"Epoch: {it} | Loss: {loss} |")

I hope this helps you!

@CatalinaAlbornoz Thank you for your response. I have upgraded to 0.32.0. The other concern is that when I use amplitude embedding instead of angle embedding, either with the QNGOptimizer on default.qubit or on lightning.qubit, the execution remains slow.

Hi @Amandeep,

Is there a reason why you prefer not to use the current development version of PennyLane? Do you get any errors while using it? I also noticed that you're using Python 3.8. I would recommend that you use Python 3.10 if possible. We stopped supporting Python 3.8 as of v0.32, and we will stop supporting Python 3.9 in the near future.

Do you have some numbers for how much slower amplitude embedding is for you?

@CatalinaAlbornoz I have upgraded PennyLane. In the QNSPSA code above, when I try to use a batch size it gives a ValueError about a shape mismatch between params and data. When I set batch=1, it starts working.

Hi @Amandeep,

Can you please show me where/how you added the batch size? Can you please share the minimal (but self-contained) version of your code that shows this?

@CatalinaAlbornoz Thank you for your response.

import pennylane as qml

n_qubits = 5
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, inputs):
    qml.AmplitudeEmbedding(inputs, wires=range(n_qubits), normalize=True)
    qml.StronglyEntanglingLayers(weights=weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

from pennylane import numpy as np

num_layers=6

params = np.random.random(size=(num_layers, n_qubits, 3), requires_grad=True)

params.shape

def opt_fn(initial_learning_rate):
    return qml.QNSPSAOptimizer(stepsize=initial_learning_rate, regularization=0.001)

initial_learning_rate = 0.01

loss = []

for round in range(10):
    opt = opt_fn(initial_learning_rate)
    batch_size = 1
    n_batches = len(X_train) // batch_size

    for batch_idx in range(n_batches):
        X_batch = X_train[batch_idx * batch_size : (batch_idx + 1) * batch_size]
        y_batch = y_train[batch_idx * batch_size : (batch_idx + 1) * batch_size]

        # Perform optimization step
        [params, X_batch], loss_val = opt.step_and_cost(circuit, params, X_batch)

        # Store the loss for the current batch
        loss.append(loss_val)

It only works when I set the batch size to 1. If I set the batch size to 16, it gives ValueError: operands could not be broadcast together with shapes (16,) (6,5,3). I want to work with different batch sizes and use both the classical SPSA and the QNSPSA optimizer with the same code.

Hi @Amandeep, I get NameError: name 'X_train' is not defined.

Could you please share a self-contained version of your code?

@CatalinaAlbornoz Thank you for your response. Please find the attached code.

from sklearn.model_selection import train_test_split

# A random sample of 100 records with 32 features (amplitude encoding on 5 qubits)
num_samples = 100
num_features = 32
random_array = np.random.rand(num_samples, num_features)

# Generate random labels for the samples
labels = np.random.randint(2, size=num_samples)  # Assuming binary labels

# Divide the data into train and test sets with an 80:20 ratio
X_train, X_test, y_train, y_test = train_test_split(random_array, labels, test_size=0.2, random_state=42)

y_train = y_train * 2 - 1
y_test = y_test * 2 - 1

print("Shape of training data:", X_train.shape)
print("Shape of test data:", X_test.shape)
print("Shape of training labels:", y_train.shape)
print("Shape of test labels:", y_test.shape)

I need to work with the QNSPSA optimizer and the classical SPSA optimizer. The same code works with the QNG and GD optimizers. Can you please check? I really need this to work. Thank you once again.

Hi @Amandeep,

I think the issue here is the use of AmplitudeEmbedding. There doesn't seem to be a way of decomposing it. You could try manually creating the embedding with rotations and other operators that can be decomposed. Unfortunately I don't see any feasible way of using the AmplitudeEmbedding template together with QNSPSA.

@CatalinaAlbornoz Thank you for your response. I tried it with AngleEmbedding too, but it did not work.

Hi @Amandeep,

Have you tried creating an embedding directly with rotation gates? That is, instead of using a template, add the rotations one by one. Ideally you would have fewer inputs than qubits, but here's an example (which may or may not work) of encoding the inputs into several layers if needed.

import pennylane as qml
from pennylane import numpy as pnp

n_qubits = 3

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, inputs):
    # Iterate over all inputs
    for i in range(len(inputs)):
        # Iterate over all qubits
        for n in range(n_qubits):
            # Encode all inputs one by one on all qubits.
            # If there are more inputs than qubits they will be encoded in successive layers
            if i % n_qubits == n:
                qml.RX(inputs[i], wires=n)
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = pnp.random.random(size=shape, requires_grad=True)
inputs = pnp.random.random(5, requires_grad=False)
qml.draw_mpl(circuit,decimals=1)(weights,inputs)

@CatalinaAlbornoz Thank you for your response. I encoded the data as you suggested, but it did not work. However, the recent thread QNSPSA error using a dataset describes exactly what I am doing. I am not sure how they solved the QNSPSA optimizer problem there. Hope it can help at your end.

Hi @Amandeep ,

Your error seems very different from the one in the thread that you shared. You can try running the code shared by Isaac, but note that it's a different workflow from the one you have been working with so far.

Of course, if this is what you needed then that’s great!

@CatalinaAlbornoz The error was different because of the different PennyLane version. When I installed the latest one, the error is exactly the same as in their thread. The code from @isaacdevlugt works, but when I try to implement it batch-wise it starts giving errors. :frowning: Initially in that thread @Christophe_Pere was trying to implement the same thing and getting exactly the same error. I traced the thread, but I don't know how they later got things to work with QNSPSA and batches of data.

Hi,

In fact, as we discussed in my post with @isaacdevlugt, the QN-SPSA optimizer and the QNG optimizer don't accept batches of data. You have to loop over all the data points in your batch.

Like this:

for batch in batches:
    for point in batch:
        params, cost = opt.step_and_cost(qnode, params, X=point)
        params = np.array(params, requires_grad=True)

Here params are the parameters of your model and X is the data argument of the QNode. I used the QNode as a function so I could change the number of qubits during the learning process.
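For completeness, here is a self-contained sketch of that per-sample pattern. This is only a sketch under assumptions: the 3-qubit circuit, the qml.AngleEmbedding encoding, and the random stand-in X_train are placeholders, so adapt the shapes to your own dataset.

import pennylane as qml
from pennylane import numpy as pnp

n_qubits = 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(params, X):
    qml.AngleEmbedding(X, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(params, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

# Random stand-in data: 8 samples with n_qubits features each
X_train = pnp.random.random((8, n_qubits), requires_grad=False)
params = pnp.random.random(
    qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits), requires_grad=True
)

opt = qml.QNSPSAOptimizer()
batch_size = 4
losses = []

for batch_idx in range(len(X_train) // batch_size):
    X_batch = X_train[batch_idx * batch_size : (batch_idx + 1) * batch_size]
    # QN-SPSA (and QNG) take one sample at a time, so loop inside the batch
    for point in X_batch:
        params, cost = opt.step_and_cost(circuit, params, X=point)
        params = pnp.array(params, requires_grad=True)
        losses.append(cost)

print(f"Final loss: {losses[-1]}")

Passing the data point as the keyword argument X (rather than positionally) keeps the optimizer from treating it as a trainable argument, which is the same pattern as in the loop above.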

best,

C.
