Is it possible to create a QNN like a classical one?

Hi @SuFong_Chien,

We do not have any dedicated features yet for saving/loading models. Instead, users can leverage the tools from TensorFlow directly.

Perhaps the following guide can help you with the task of loading saved/checkpointed models using TensorFlow.
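
For instance, a minimal sketch using Keras's built-in checkpointing tools (the model and file path here are purely illustrative):

```python
import tensorflow as tf

# Illustrative model standing in for whatever model you want to checkpoint
model = tf.keras.Sequential([tf.keras.layers.Dense(4)])
model.build(input_shape=(None, 8))

# Save the weights to disk, then restore them later
model.save_weights("checkpoints/my_model")
model.load_weights("checkpoints/my_model")
```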

Hi Tom

When I come back to this, it is still a classification problem. My output looks like [-0.358118 -0.0280287 -0.30103 -0.129338]. The values are not discrete like in the example above; they are continuous real values, so it is not possible to do that. I have also looked into the transfer learning you mentioned, but again it is a classification example. Any suggestions? Thanks.

Hi @SuFong_Chien, we’re sorry for not getting back to you sooner! I missed this comment.

If I understand correctly, you would like to predict an output label from one of four classes? Given the vector [-0.358118 -0.0280287 -0.30103 -0.129338], this could be achieved by applying a softmax (e.g., using tf.nn.softmax) to obtain a probability vector. To find the prediction, you could then use tf.math.argmax, or alternatively sample from the vector.
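
For concreteness, a minimal sketch of what that could look like in TensorFlow, using the vector from your post (the variable names are illustrative):

```python
import tensorflow as tf

# Raw outputs taken from the post above
outputs = tf.constant([-0.358118, -0.0280287, -0.30103, -0.129338])

# Turn the raw values into a probability vector
probs = tf.nn.softmax(outputs)

# Option 1: pick the most likely class
prediction = tf.math.argmax(probs)

# Option 2: sample a class; tf.random.categorical expects 2-D logits,
# so we add a batch dimension and pass log-probabilities
sample = tf.random.categorical(tf.math.log(probs)[tf.newaxis, :], num_samples=1)
```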

However, one thing that is not clear to me is why the values you have are negative. If I recall correctly, these values should be associated with the probabilities of getting certain numbers of photons in each mode, so we expect them to be non-negative and to sum to less than one. If that were the case, instead of a softmax we could also just renormalize.
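
For example, if the outputs were indeed non-negative and summed to less than one, renormalizing would look something like this (the values here are made up):

```python
import tensorflow as tf

# Hypothetical photon-number probabilities that sum to less than one
raw = tf.constant([0.35, 0.10, 0.25, 0.15])

# Rescale so the entries sum to exactly one
probs = raw / tf.reduce_sum(raw)
```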

Hope this helps answer your question, and let us know if you have any more!

Thanks

Hi Tom

I was very busy recently chasing the KPI for this year ;). I need to predict the four outputs simultaneously, not one of four classes as you mentioned. Do you mean adding one more layer with softmax after the last quantum layer? I am not so clear on that. But the classification approach mentioned above will let me do another piece of work after this one. Thank you so much.

Sharp observation. That is right: they are all negative because the values are transformed under log10. The problem in this work is that we cannot normalize them, because we need to feed all the predicted values into a function, and normalized values are invalid input for that function.

Hi @SuFong_Chien,

@Tom_Bromley will be able to respond himself in a few weeks, after the winter holiday, but perhaps I can help out in the meantime.

To answer your specific question: yes, I believe Tom was proposing to use a softmax on your outputs to normalize them into a probability distribution. If you then wanted a “discrete” output, you could sample from the resulting distribution.
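
To make the “one more layer” idea concrete, here is a minimal sketch; the Dense layer merely stands in for your quantum layer, and the input size is made up:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),    # illustrative input size
    tf.keras.layers.Dense(4),      # stand-in for the last quantum layer
    tf.keras.layers.Softmax(),     # extra layer normalizing the four outputs
])

# A "discrete" output can then be sampled from the predicted distribution
probs = model(tf.random.normal((1, 8)))
sample = tf.random.categorical(tf.math.log(probs), num_samples=1)
```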

Perhaps tangentially, note that even if your outputs are not proper probabilities, you can still use a function like this one to perform classification on arbitrary real-valued vectors (they are just treated as “logits”). I think this notion of a logit is similar to your values, since those are also transformed, as you say, using a logarithm.
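
As a sketch of that idea: TensorFlow’s cross-entropy losses can consume such raw values directly as logits (the label below is invented for illustration):

```python
import tensorflow as tf

# The raw (negative) outputs, treated as logits for a 4-class problem
logits = tf.constant([[-0.358118, -0.0280287, -0.30103, -0.129338]])
labels = tf.constant([1])  # hypothetical true class index

# The softmax is applied internally; no manual normalization is needed
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
```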

But of course, it may make sense in your case to keep these feature vectors unnormalized if you know they will be fed to additional functions downstream.
