Why are variational layers made up of rotations and entanglers?

In the classical-quantum transfer learning code here, is there any particular reason why the variational layer is defined as a series of entangling and rotation gates?


Hey @Surya_Kiran_Vamsi,

Yes and no :slight_smile:

The main reason that almost all variational circuit architectures are built from this block is that these are the gates available on most hardware types. Loosely speaking, this kind of ansatz is called a "hardware-efficient" ansatz (which sometimes also incorporates more information about the device, e.g. which qubits are connected).
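To make the building block concrete, here is a minimal NumPy sketch (not the transfer-learning code itself, just an illustration) of one "rotation + entangler" layer on 2 qubits: a parametrized RY rotation on each qubit followed by a CNOT.

```python
import numpy as np

def ry(theta):
    # Single-qubit Y-rotation gate, RY(theta)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT entangler (control: qubit 0, target: qubit 1)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def variational_layer(theta0, theta1):
    # One hardware-efficient block: trainable rotations
    # on every qubit, then a fixed entangling gate.
    rotations = np.kron(ry(theta0), ry(theta1))
    return CNOT @ rotations

U = variational_layer(0.3, 1.2)
# A product of unitaries is unitary: U† U = I
print(np.allclose(U.T @ U, np.eye(4)))  # True
```

Stacking several of these layers (each with its own angles) gives the usual variational ansatz; only the rotation angles are trained, while the entangler pattern stays fixed.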

But beyond convenience, it is hard to find systematic studies that investigate whether these gates make good building blocks in terms of trainability or learning performance. That's a huge blind spot in variational quantum computing at this stage.


Thanks for the clarification, @Maria_Schuld.