Quantum Machine Learning: Comparison
Please note this is a comparison between Version 1 by George A Papakostas and Version 2 by Jason Zhu.

Quantum computing has been proven to excel at factorization and unordered search problems thanks to its capability for quantum parallelism. This unique feature allows exponential speed-ups in solving certain problems. However, this advantage does not apply universally, and challenges arise when combining classical and quantum computing to achieve acceleration in computation speed.

  • quantum machine learning
  • quantum classifier
  • quantum computer

1. Quantum Machine Learning—The Basic Concept

Quantum Machine Learning, classical Machine Learning, and Quantum Computing are interconnected, sharing the goal of developing more accurate and reliable models. Learning algorithms are designed to identify patterns in data in order to make predictions and decisions. The power of quantum computing lies in its qubits, which cannot be copied and admit no branching or feedback loops [1][50]. Quantum computation is represented by quantum registers, gates, and circuits composed of qubits, with the circuit denoting the chronological order and mode of action of the gates and registers [2][51]. A quantum register comprises a set of qubits that simultaneously store all of their states [3][52]. The SWAP gate is central to designing networks for the quantum computation of qubits, since it is built from multiple qubit gates that form a basis. Multi-qubit gates such as CNOT are typical building blocks of quantum computation [4][53], and the SWAP gate is crucial in the network design of Shor’s algorithm [5][6][54,55]. Recently, it has been suggested that generalizing quantum computation to higher-dimensional systems may offer advantages [7][56].

At the heart of Machine Learning is the extraction of information from data distributions without explicit programming. Harnessing quantum phenomena for this purpose [8][2] is accomplished by developing quantum algorithms that implement classical algorithms on a quantum computer. In this way, data can be classified and analyzed by supervised and unsupervised learning methods using Quantum Neural Networks (QNNs) [9][10][57,58]. A Variational Quantum Circuit (VQC) is a quantum gate circuit with free parameters that can approximate, optimize, and classify across various arithmetic tasks. A VQC-based algorithm is known as a Variational Quantum Algorithm (VQA), a hybrid classical-quantum algorithm in which parameter optimization typically occurs on classical computers [11][59].
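The gate-level objects described above can be illustrated with a small state-vector simulation. The following sketch (a minimal numpy illustration, not tied to any particular quantum SDK) writes down the CNOT and SWAP matrices in the two-qubit computational basis and verifies the standard decomposition of SWAP into three CNOTs:

```python
import numpy as np

# Two-qubit computational basis order: |00>, |01>, |10>, |11>
CNOT = np.array([[1, 0, 0, 0],   # control = first qubit, target = second
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
CNOT_rev = np.array([[1, 0, 0, 0],  # control = second qubit, target = first
                     [0, 0, 0, 1],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# A SWAP gate decomposes into three alternating CNOTs.
decomposed = CNOT @ CNOT_rev @ CNOT
assert np.array_equal(decomposed, SWAP)

# SWAP exchanges the two qubits: |10> becomes |01>.
ket_10 = np.array([0, 0, 1, 0])
assert np.array_equal(SWAP @ ket_10, np.array([0, 1, 0, 0]))
```

Because these gates are unitary, every operation here is reversible, which is the "reversible linear gate operation" property the text refers to.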
The VQA approximates the target function using learnable parameters with quantum characteristics, such as reversible linear gate operations and multi-layer structures built from entangling layers.
While the VQA remains a prominent approach for designing QNNs, the QNN framework inherits some of its drawbacks. For instance, QNN training currently faces the barren-plateau problem, in which gradients vanish as circuits grow, and no general solution has yet been proposed. Additionally, the measurement efficiency of quantum circuits has not been thoroughly investigated, which remains a challenge for QNN designers.
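The hybrid loop described above, a parameterized quantum circuit evaluated on the "quantum" side and updated classically, can be sketched for a single qubit. This toy example (an illustrative sketch, not from the source) minimizes the expectation value of Z after an RY(θ) rotation, using the parameter-shift rule for the gradient:

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate (a reversible linear operation)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation_z(theta):
    """<Z> after applying RY(theta) to |0> -- the quantum part of the loop."""
    state = ry(theta) @ np.array([1.0, 0.0])
    probs = np.abs(state) ** 2
    return probs[0] - probs[1]  # Z eigenvalues: +1 for |0>, -1 for |1>

# Classical optimization loop using the parameter-shift rule:
# dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2
theta, lr = 0.1, 0.4
for _ in range(100):
    grad = (expectation_z(theta + np.pi / 2) - expectation_z(theta - np.pi / 2)) / 2
    theta -= lr * grad

# The minimum <Z> = -1 is reached near theta = pi (the state |1>).
assert abs(expectation_z(theta) + 1) < 1e-6
```

In a real VQA the expectation value would come from repeated circuit measurements rather than an exact state vector, which is where the measurement-efficiency and barren-plateau issues mentioned above arise.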

2. Quantum Machine Learning—Algorithms and Applications

Quantum machine learning algorithms are applied in the domains of Supervised Learning, Unsupervised Learning, and Reinforcement Learning (RL). In Supervised Learning, patterns are learned by observing labeled training data, whereas in Unsupervised Learning, structure is recognized in unlabeled data. In RL, the algorithm learns from direct interaction with the environment [12][64]. Additionally, a technique called deep supervised learning trains QNNs to recognize patterns and images; it is a feed-forward network that employs circuits in which qubits play the role of neurons and rotation gates play the role of weights. Classical deep learning (CDL), on the other hand, uses complex multi-level neural networks, and a deep learning algorithm constructs multiple levels of abstraction from large datasets. Boltzmann machines (BMs) are a well-known class of deep-learning networks whose graph nodes and connections are governed by the Gibbs distribution [13][65]. Training uses the gradient descent method to fit the model distribution, which ensures consistency between the model and the training data [14][66].
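The Gibbs distribution underlying a Boltzmann machine can be written out explicitly for a toy model. The sketch below (an illustrative example with made-up weights, not from the source) enumerates the joint states of two binary units and computes their Gibbs probabilities from the energy function:

```python
import numpy as np

# Energy of a tiny Boltzmann machine: two binary units with one connection.
W = np.array([[0.0, 1.5], [1.5, 0.0]])  # symmetric coupling (illustrative value)
b = np.array([0.2, -0.1])               # biases (illustrative values)

def energy(s):
    """Standard Boltzmann-machine energy: E(s) = -1/2 s^T W s - b^T s."""
    return -0.5 * s @ W @ s - b @ s

# Gibbs distribution over all four joint states of the two units.
states = [np.array(s) for s in [(0, 0), (0, 1), (1, 0), (1, 1)]]
unnorm = np.array([np.exp(-energy(s)) for s in states])
probs = unnorm / unnorm.sum()

assert np.isclose(probs.sum(), 1.0)
assert int(np.argmax(probs)) == 3  # the positively coupled state (1, 1) is most probable
```

Gradient-based training adjusts W and b so this model distribution matches the empirical distribution of the training data; for realistically sized networks the state space cannot be enumerated, which is what motivates the sampling-based quantum algorithms discussed next.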
Recently, Wiebe et al. [15][67] developed two quantum deep-learning algorithms for Boltzmann machines that efficiently calculate the distribution of the data. In the first, the state is initially approximated using the classical mean-field method before being fed into the quantum computer, where sampling is applied to calculate the required gradient. The second algorithm performs a similar process but requires access to the training data in superposition (via QRAM); it provides more accurate solutions but no additional acceleration. This procedure is equivalent to a feature map that assigns data to vectors in a Hilbert space [16][17][68,69]. The inner products of such quantum states encode the data and define kernels [18][35].
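The "inner products of quantum states define kernels" idea can be made concrete with a one-qubit angle encoding. The snippet below is a minimal numpy sketch under that assumed encoding (the feature map chosen here is illustrative, not the one used in the cited works):

```python
import numpy as np

def feature_map(x):
    """Hypothetical angle-encoding feature map: x maps to the single-qubit
    state cos(x/2)|0> + sin(x/2)|1> in Hilbert space."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x, z):
    """Kernel entry as the squared inner product of the encoded states."""
    return np.abs(feature_map(x) @ feature_map(z)) ** 2

# Build the kernel (Gram) matrix for a tiny dataset.
data = [0.0, np.pi / 2, np.pi]
K = np.array([[quantum_kernel(a, b) for b in data] for a in data])

assert np.allclose(np.diag(K), 1.0)             # each state overlaps fully with itself
assert abs(quantum_kernel(0.0, np.pi)) < 1e-12  # orthogonal encodings give kernel 0
```

Once such a Gram matrix is available, it can be handed to any classical kernel method; the quantum advantage, where it exists, lies in evaluating these inner products for feature maps that are hard to compute classically.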
Fuzzy cognitive maps (FCMs) have been introduced as a quantum-inspired machine learning model belonging to the category of expert systems. Quantum Fuzzy Cognitive Maps (QFCMs) were initially introduced in 2009, presenting a quantum-based approach to cognitive maps [8][19][2,70]. In this framework, each concept is represented by a single qubit, and the concept value is computed through qubit superposition. In 2015, the QFCM model was further developed as an ensemble classifier [20][71], which outperformed models such as AdaBoost and Neural Networks and demonstrated increased robustness against noise. In the same year, a variant of FCMs called bipolar quantum FCMs was proposed [21][72]. In 2018, the authors of bipolar quantum FCMs explored an application to the quantum cryptography problem [22][73], where bipolar quantum FCMs performed well in comparison with other methods. Although QFCMs may not strictly fall within the domain of QML, most implementations are inspired by quantum principles, even though explicit proposals for execution on quantum computers are not prevalent.
A well-defined example of QML is the QSVM algorithm, which utilizes a quantum processor to estimate the kernel directly in the quantum space. This method involves a training phase where the kernel is computed, and support vectors are determined. Subsequently, the unlabeled data is classified based on the solution obtained during the training phase [23][74]. The algorithm is capable of performing binary or multi-class classification based on the data classes and can even be utilized for data clustering and exploration.
Machine learning classifiers are versatile tools with applications in diverse fields, such as computer vision, medical imaging, drug discovery, handwriting recognition, geostatistics, and more. Quantum computers hold the potential to help overcome the challenges of support vector machine (SVM) and kernel learning, as previous surveys have shown that quantum computation can exponentially accelerate SVM training. Quantum SVMs and kernels can efficiently explore high-dimensional spaces, creating maps and decision boundaries for specialized datasets in line with their design objective; this task is difficult for classical kernel functions to match [24][75]. This mapping of classical data to the Hilbert space is illustrated using the Bloch sphere.

SVM Kernels and Quantum SVM

The SVM algorithm can be implemented in two ways: using a kernel or using a quantum processor (QSVM). In cases where the data set is non-linear and cannot be handled by a linear classifier, the distance between each point and the center is calculated to create a new feature, which enables classification in a higher-dimensional space.
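The distance-to-center trick described above can be demonstrated on a toy dataset of two concentric rings (an illustrative sketch, not from the source; the radii and labels are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes arranged in concentric rings: not linearly separable in (x, y).
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.where(np.arange(200) < 100, 1.0, 3.0)  # class 0: r = 1, class 1: r = 3
labels = (radii > 2).astype(int)
points = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])

# New feature: distance of each point from the center (0, 0).
distance_feature = np.linalg.norm(points, axis=1)

# In this one-dimensional feature the classes separate with a single
# threshold, i.e. a linear classifier now suffices.
threshold = 2.0
predictions = (distance_feature > threshold).astype(int)
assert np.array_equal(predictions, labels)
```

This is the simplest instance of the kernel idea discussed next: a nonlinear map into a space where a linear decision boundary works.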
The kernel function maps an input feature space into a new, possibly higher-dimensional space where the training dataset can be better separated. For SVMs, the first step involves preparing the training dataset and mapping features into the range [−1, 1], followed by kernel optimization to minimize the cost function. The final step is to test the model. QSVMs follow a similar process, but instead of using a classical computer to evaluate the kernel function, qubits are used to encode the feature space, and the quantum computer performs the kernel evaluation. The attribute mapping is done by encoding data onto the quantum state of each qubit, allowing for the efficient calculation of the kernel matrix. 
Mapping is achieved with a single gate, and this leads to the creation of a quantum circuit. The kernel is computed by selecting a reference point x and encoding the remaining data points relative to it using quantum states. The resulting circuit is then reversed to return toward the zero state; the probability of this return, which depends on the distance between the states encoding x and z, gives the kernel value. The weights of the QSVM model are optimized by minimizing a cost function using classical optimization techniques, similar to the classical SVM algorithm [25][76].
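The encode-then-reverse procedure can be sketched for a single qubit: encode z, apply the inverse of the circuit that encodes x, and read off the probability of landing back on the zero state. This is a minimal numpy sketch under an assumed single-qubit encoding; real QSVMs use multi-qubit feature maps and estimate the probability from repeated measurements:

```python
import numpy as np

def encode(x):
    """Hypothetical single-qubit encoding circuit U(x) acting on |0>."""
    c, s = np.cos(x / 2), np.sin(x / 2)
    return np.array([[c, -s], [s, c]])

def kernel_by_reversal(x, z):
    """Apply U(z) to |0>, then the reversed (inverse) circuit U(x)^dagger;
    the probability of measuring |0> again is the kernel value
    |<phi(x)|phi(z)>|^2."""
    zero = np.array([1.0, 0.0])
    state = encode(x).conj().T @ (encode(z) @ zero)
    return np.abs(state[0]) ** 2

assert abs(kernel_by_reversal(0.7, 0.7) - 1.0) < 1e-12  # identical points: kernel 1
assert abs(kernel_by_reversal(0.0, np.pi)) < 1e-12      # orthogonal encodings: kernel 0
```

Because the encoding circuit is unitary, its reversal is exact, and the return probability decays smoothly as the two encoded states move apart, which is what makes it usable as a kernel.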
The second approach involves the Variational Quantum Eigensolver (VQE), a hybrid quantum-classical computational method designed to estimate the eigenvalues of a Hamiltonian. However, there are two main limitations associated with this approach. Firstly, the complexity of the quantum circuits used in VQE can be challenging. Secondly, the classical optimization problem that relies on the variational ansatz [26][77] introduces additional complexity. The Quantum Variational Circuit (QVC) applied in this method performs a weighted rotation L times (where L is a hyperparameter of the variational circuit) on the Bloch sphere. As the vectors are already encoded as angles on the sphere, this technique provides a detailed description and the ability to search for optimal weights θ. The results are output as a distribution of 0s and 1s, mapped to +1 and −1 [27][78].
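The VQE idea, a parameterized circuit whose expectation value is minimized classically, can be sketched for a single qubit. The Hamiltonian and ansatz below are illustrative choices, not from the source; a real VQE would measure the expectation value on hardware and use a smarter optimizer than a grid scan:

```python
import numpy as np

# Toy Hamiltonian H = Z + 0.5 X, whose ground-state energy VQE should estimate.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X

def ansatz(theta):
    """Variational ansatz: RY(theta) applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expectation value <psi|H|psi> -- the quantity VQE minimizes."""
    psi = ansatz(theta)
    return psi @ H @ psi

# Classical outer loop: scan the single parameter for the lowest energy.
thetas = np.linspace(0, 2 * np.pi, 2001)
vqe_energy = min(energy(t) for t in thetas)

# Compare with the exact smallest eigenvalue of H.
exact_ground = np.linalg.eigvalsh(H).min()
assert abs(vqe_energy - exact_ground) < 1e-3
```

The two limitations noted above show up even in this sketch: the ansatz must be expressive enough to reach the true ground state, and the classical search over θ is itself an optimization problem.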

3. Quantum Learning Methods

Currently, there is no comprehensive quantum learning theory; the primary approach is to adapt classical machine learning algorithms, or their costly subroutines, to run on a quantum computer or within the framework of quantum computing theory. As a result, a hybrid solution has emerged, leaving open questions about what a fully quantum learning procedure will look like. A significant challenge in quantum learning procedures is the presence of noise, which increases as quantum circuits become more complex and the number of measurements grows. To mitigate this issue, a method called noise learning has emerged, with roots in classical machine learning theory. In learning problems, noise plays diverse roles and can sometimes produce favorable outcomes: while noise in the inputs and gradients is typically undesirable in classical machine learning, it can be useful in quantum computing.

Current quantum computers have few qubits, which limits the methodologies and applications that can be implemented and thus constrains how far quantum machine learning can evolve; with more qubits, the capacity for information encoding will increase. Another challenge is error correction, for which notable solutions have been proposed in recent years [28][79]. Quantum computers with low error rates would stabilize computational results and increase usable resources such as available qubits. Furthermore, a simple quantum error model is used to simulate noisy quantum devices numerically, involving a weighted combination of two error types, bit reversals and phase reversals. Many researchers have investigated how noise affects the ability to learn a function in the quantum setting. Bshouty and Jackson [29][80] demonstrated that Disjunctive Normal Form (DNF) formulas can be learned effectively under a uniform distribution.
Both the classical and quantum versions of such problems can be solved quickly in the absence of noise, but in practice the existence of noise cannot be ignored.
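The simple error model mentioned above, a weighted combination of bit reversals and phase reversals, can be written directly as a quantum channel acting on a density matrix. The error probabilities below are illustrative values, not taken from the source:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # bit reversal (bit flip)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # phase reversal (phase flip)

def noisy_channel(rho, p_bit=0.05, p_phase=0.03):
    """Weighted combination of bit-flip and phase-flip errors
    (illustrative error rates)."""
    p_id = 1.0 - p_bit - p_phase
    return (p_id * rho
            + p_bit * X @ rho @ X
            + p_phase * Z @ rho @ Z)

# Start in the superposition |+> = (|0> + |1>) / sqrt(2).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

noisy = noisy_channel(rho)

assert np.isclose(np.trace(noisy), 1.0)  # the channel preserves total probability
assert noisy[0, 1] < rho[0, 1]           # phase errors shrink the coherence terms
```

Applying this channel after every gate of a simulated circuit reproduces the qualitative behavior of noisy devices: probabilities are preserved, but the off-diagonal coherences that quantum algorithms rely on decay with circuit depth.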

4. Quantum Machine Learning—Advantages and Limitations

Quantum computers can perform the same tasks as classical computers but with the potential for much faster speeds due to the phenomenon of quantum parallelism. However, this requires the development of new quantum algorithms since classical algorithms may not be sufficient. For example, Shor’s algorithm uses quantum parallelism to efficiently factorize large numbers. In order to fully harness the power of quantum computing, it needs to be combined with other technologies such as machine learning, AI, big data, and cloud computing.
The benefits of quantum computing include potential improvements in computational speed, exponential acceleration in certain problems, and the ability to learn from fewer data with complex structures and handle noise. Additionally, quantum computing can increase correlation capacity and achieve results with less training information or simpler models. However, the high cost and sensitivity of the machines are significant disadvantages, as is the fact that they must be operated at extremely low temperatures. The results produced by quantum algorithms can be difficult for humans to comprehend and require experienced users. Furthermore, the possibility of quantum computers easily breaking encryption codes is a concern for internet security.