A Survey of Quantum Computing for Finance
Herman, Dylan; Googin, Cody; Liu, Xiaoyuan; Galda, Alexey; Safro, Ilya; Sun, Yue; Pistoia, Marco; Alexeev, Yuri
2022-01-08

Quantum computers are expected to surpass the computational capabilities of classical computers during this decade and have a transformative impact on numerous industry sectors, particularly finance. In fact, finance is estimated to be the first industry sector to benefit from quantum computing, not only in the medium and long terms, but even in the short term. This survey paper presents a comprehensive summary of the state of the art of quantum computing for financial applications, with particular emphasis on Monte Carlo integration, optimization, and machine learning, showing how these solutions, adapted to work on a quantum computer, can help solve more efficiently and accurately problems such as derivative pricing, risk analysis, portfolio optimization, natural language processing, and fraud detection. We also discuss the feasibility of these algorithms on near-term quantum computers with various hardware implementations and demonstrate how they relate to a wide range of use cases in finance. We hope this article will not only serve as a reference for academic researchers and industry practitioners but also inspire new ideas for future research.

Quantum computation relies on a fundamentally different means of processing and storing information than today's classical computers: the information obeys the laws of quantum mechanics rather than those of classical mechanics. Usually, quantum-mechanical effects become apparent only at very small scales, when quantum systems are properly isolated from their surrounding environments. These conditions, however, make the realization of a quantum computer a challenging task. Even so, according to a McKinsey & Co. report [236], finance is estimated to be the first industry sector to benefit from quantum computing (Section 2), largely because of the potential for many financial use cases to be formulated as problems that can be solved by quantum algorithms suitable for near-term quantum computers. This is important because current quantum computers are small-scale and noisy, yet the hope is that we can still find use cases for them. In addition, a variety of quantum algorithms will become applicable when large-scale, robust quantum computers are available, which will significantly speed up computations used in finance. The diversity in the potential hardware platforms, or physical realizations, of quantum computation is completely unlike any of the classical computational devices available today. There exist proposals based on vastly different physical systems: superconductors, trapped ions, neutral atoms, photons, and others (Section 3.3), yet no clear winner has emerged. A large number of companies are competing to be the first to develop a quantum computer capable of running applications usable in a production environment. In theory, any computation that runs on a quantum computer may also be executed on a classical computer; the benefit that quantum computing may bring is the potential reduction in the time and memory with which computational tasks are performed [256], which in turn may lead to unprecedented scalability and accuracy of the computations.
In addition, any classical algorithm can be modified (in a potentially nontrivial way) such that it can be executed on a quantum computer, but without any speedup. Obtaining quantum speedup requires developing new algorithms that specifically take advantage of quantum-mechanical properties. Thus, classical and quantum computers will need to work together. Moreover, in order to solve a real-world problem, a classical computer must be able to efficiently obtain the necessary data from a quantum computer. This promised efficiency of quantum computers enables certain computations that are otherwise infeasible for current classical computers to complete in any reasonable amount of time (Section 3). In general, however, the speedup for each task can vary greatly or may even be currently unknown (Section 4). While these speedups, if found, can have a tremendous impact in practice, they are typically difficult to obtain. And even if they are discovered, the underlying quantum hardware must be powerful enough that errors, and the overhead of correcting them, do not counteract the algorithmic speedup. We do not know exactly when such robust hardware will exist. Thus, the goal of quantum computing research is to develop quantum algorithms (Section 4) that solve useful problems faster and to build robust hardware platforms to run them on. The industry needs to understand which problems can best benefit from quantum computing and the extent of these benefits, in order to make full use of its revolutionary power when production-grade quantum devices become available. To this end, we offer a comprehensive overview of the applicability of quantum computing to finance. Additionally, we provide insight into the nature of quantum computation, focusing particularly on specific financial problems for which quantum computing can provide potential speedups compared with classical computing. The sections in this article can be grouped into two parts. The first part contains introductory material: Section 2 introduces computationally intensive financial problems in the areas of numerical risk modeling (Section 2.2.1), optimization (Section 2.2.2), and machine learning (Section 2.2.3) that potentially benefit from quantum computing, whereas Sections 3 and 4 introduce the core concepts of quantum computation and quantum algorithms. The second part, the main focus of the article, reviews research performed by the scientific community to develop quantum-enhanced versions of three major problem-solving techniques used in finance: Monte Carlo integration (Section 5), optimization (Section 6), and machine learning (Section 7). The connections between financial problems and these problem-solving techniques are outlined in Table 1. Finally, Section 8 covers experiments on current quantum hardware that have been performed by researchers to solve financial use cases. Numerous financial use cases require the ability to assess a wide range of potential outcomes. To this end, banks employ algorithms and models that calculate statistical probabilities. Such techniques are fairly effective but not infallible. In a world where huge amounts of data are generated daily, computers that can compute probabilities accurately are becoming a pressing need. For this reason, several banks are turning to quantum computing, given its promise to analyze vast amounts of data and compute results faster and more accurately than any classical computer has ever been able to do.
Financial institutions capable of leveraging quantum computing are likely to see important benefits in the future, particularly when it comes to analyzing large unstructured data sets efficiently and accurately. This section introduces the financial use cases that are projected to benefit from quantum computing in the coming years. The financial services industry is expected to reach a market size of $28.53 trillion by 2025, with a compound annual growth rate of 6% [279]. North America contributes 27% to this market and is second only to Europe. While this industry is often considered incredibly stable and insulated from change, shifting government regulations, new technologies, and customer expectations present challenges that require the industry to adapt. We discuss here three key macroeconomic financial trends that can be impacted by quantum computing: keeping up with regulations, addressing customer expectations and big data requirements, and ensuring data security. The Third Basel Accord, commonly known as Basel III, is an international regulatory framework set up in response to the 2008 financial crisis. Basel III sets requirements on capital, risk coverage, leverage, risk management and supervision, market discipline, liquidity, and supervisory monitoring [260]. These regulations require computing numerous risk metrics, such as value at risk (VaR) and conditional value at risk (CVaR). Because of the stochastic nature of the parameters involved in these computations, closed-form analytical solutions are generally not available, and hence numerical methods must be employed. One of the most widely used numerical methods is Monte Carlo integration, since it is generic and scalable to high-dimensional problems [143]. Quantum Monte Carlo integration (Section 5) has been shown to offer up to a quadratic speedup over its classical counterpart. Determining risk metrics with higher accuracy and efficiency will enable financial institutions to make better-informed loan-related decisions and financial projections. Customer personalization has become increasingly important in finance. For example, financial technology (FinTech), in contrast to traditional finance, thrives on providing personalized services. A Goldman Sachs report has highlighted the necessity for financial institutions to provide new big-data-driven services to align with changing consumer behavior [319]. According to a Salesforce 2020 publication [293], 52% of customers expect companies to always personalize offers, while only 27% of customers feel that the financial services industry provides great service and support [293]. Financial institutions must meet this expectation in order to retain their customers. The current classical techniques for analyzing data have large computational demands and require significant time to train algorithms, whereas quantum algorithms may be able to speed up these computationally intensive and time-consuming components. As financial institutions continue to generate data, they must be able to employ that data in a functional way to improve their business strategy. Additionally, data organization can allow banks to engage with customers' finances more specifically and effectively, supporting their customer service and keeping customers engaged despite alternatives such as FinTech. Much of this data analysis involves manipulating large matrices, which can be computationally demanding for classical computers.
This is one of the most pressing concerns for the financial services industry, according to an article by McKinsey [28]. An increase in digitalization has occurred owing to the rise of FinTech. Additionally, as a result of the COVID-19 pandemic, substantially more financial activity is performed online. According to the McKinsey article, 95% of board committees discuss cyber and tech risks four or more times a year. Banks must stay up to date with online security and remain vigilant against attackers. Quantum data modeling and machine learning techniques could allow banks to properly assess data security and perform classification with higher accuracy. Furthermore, even those who do not plan to use quantum computation must be knowledgeable about it because of its ability to break current public-key cryptographic standards [136, 256, 306], such as RSA and Diffie-Hellman, including the common variants based on elliptic curves [281]. This is due to quantum computing's ability to efficiently solve the Abelian hidden subgroup problem [85]. Although current quantum computers cannot achieve this task, financial institutions need to prepare by investigating how to utilize quantum-safe protocols [42] and, potentially, quantum-enhanced cryptography [140], each of which has its own potential usages in the layers of a network. While the situation is by no means as serious for cryptographically secure hash functions or symmetric-key ciphers, it is important to be aware of the potential quantum-enhanced attacks [21, 172, 180]. Here, however, we focus on quantum algorithms that have direct applications in finance, so protocols for quantum-enhanced cryptographic attacks, quantum-safe cryptography, and quantum-enhanced cryptography are all out of the scope of this article. Common financial problems that can benefit from quantum computing are outlined below. As discussed earlier, finance is one of the first industry sectors expected to benefit from quantum computing, largely because financial problems lend themselves to being solved on near-term quantum computers. Since many financial problems take stochastic variables as inputs, they are, more often than not, relatively tolerant of imprecision in the solutions, compared with problems in chemistry and physics. In Table 1 we present an overview of some problem categories, along with some example financial use cases, that can potentially benefit from the quantum algorithms discussed in this survey. The remainder of this section discusses these problems in more detail. Numerical techniques are often used to solve many problems in finance. One such problem is pricing. The pricing of a financial asset is an important, yet challenging, task in finance. Assets have been priced classically in a variety of ways, among which some of the most popular are Monte Carlo methods [336] and various machine learning techniques [76, 154]. General asset pricing with quantum machine learning (QML) is discussed in Section 7.8.3. A particularly important class of assets is derivatives. A derivative is a contract that derives its value from another source (e.g., a collection of assets or financial benchmarks) called the underlying. The value of a derivative is typically calculated by simulating the dynamics of the underlying and computing the payoff accordingly.
A common example is an option, which gives the holder the right, but not the obligation, to purchase or sell an asset at a specified price (strike price) and within a given time frame (exercise window) in the future. One of the most well-known models for options is the Black-Scholes model, which has a closed-form solution for the simple European option [50]. This model assumes that the price of the underlying asset evolves like a geometric Brownian motion [202] with an associated volatility. However, most pricing tasks involve derivatives that are path dependent, and hence information about the asset at multiple time steps is required. These problems do not admit closed-form solutions. Thus, derivative pricing is typically performed with Monte Carlo methods [143, 336]. Because of the large number of paths that the underlying asset can possibly take, these methods often require large amounts of computational power to achieve the desired precision in the solution. Therefore, derivative pricing is another key problem in finance for which a quantum approach is appealing. In Section 5.1.1 we will explore the application of quantum algorithms to Monte Carlo integration for pricing derivatives. Risk analysis is another crucial problem for financial institutions, since risk metrics can determine the probability and amount of loss under various financial scenarios. Risk analysis is also important in terms of compliance with regulations. Because of the Basel III regulations mentioned earlier, financial institutions need to calculate and meet requirements for their VaR, CVaR, and other metrics. However, the current classical approach of Monte Carlo methods [143] is computationally intensive. Similar to the pricing problem mentioned above, this financial problem could also benefit from a quantum approach. In Section 5.1.2 we will explore the application of quantum algorithms to the Monte Carlo approach for computing risk metrics such as VaR and CVaR. Optimization problems are ubiquitous in almost all quantitative disciplines, from computer science and engineering to economics and finance. In finance, one of the most commonly seen optimization problems is portfolio optimization. Portfolio optimization is the process of selecting the best set of assets and their quantities, from a pool of assets being considered, according to some predefined objective. The objective can vary depending on the investor's preference regarding financial risk and expected return. Modern portfolio theory [231] focuses on the trade-offs between risk and return to produce what is known as an efficient portfolio, which maximizes the expected return given a certain amount of risk. This trade-off relationship is represented by a curve known as the efficient frontier. The expected return and risk of a portfolio can often be modeled by looking at the mean and variance, respectively, of static portfolio returns. The problem setup for portfolio optimization can be formulated as constrained utility maximization [231]. In portfolio optimization [108], each asset class, such as stocks, bonds, futures, and options, is assigned a weight. Additionally, all assets within the class are allocated in the portfolio depending on their respective risks, returns, time to maturity, and liquidity. The variables controlling how much the portfolio should invest in an asset can be continuous or discrete, and the overall problem can contain variables of both types.
Continuous variables are suitable for representing the proportion of the portfolio invested in the asset positions when nondiscrete allocations are allowed. Discrete variables are used in situations where assets can be purchased or sold only in fixed amounts. The signs of these variables can also be used to indicate long or short positions. When risk is accounted for, the problem is typically a quadratic program, usually with constraints. Depending on the formulation, this problem can be solved with techniques for convex [231] or mixed-integer programming [37, 230]. Speeding up the execution and improving the quality of portfolio optimization are particularly important when the number of variables is large. This subject is explored in depth in Section 6.4.1. Optimization may also be used for feature selection in statistical or machine learning models. One use case in finance that could potentially benefit from this type of optimization is credit scoring. Credit scoring employs statistical analysis to determine the creditworthiness of an individual. This process is run by lenders and financial institutions. A credit score is influenced by payment history, accounts owed, length of credit history, new credit, and credit mix. This information can be gathered from historical business data, public filings, and payment collections. Currently, credit scores are often predicted by using classical regression models or machine learning algorithms [104, 333]. In order to obtain more accurate results, it is helpful for the machine learning algorithms to train on more data; however, this process is lengthy. Using optimization methods, quantum approaches can find the critical features for determining a person's creditworthiness. Credit scoring methods using quantum algorithms for feature selection will be discussed in Section 6.4.4. Other financial use cases that rely on optimization techniques include hedging and swap netting (Section 6.4.2), identification of arbitrage opportunities (Section 6.4.3), and financial crash prediction (Section 6.4.5). These problems will be addressed as specific use cases in Section 6. Machine learning has become an essential aspect of modern business and research across various industries. It has revolutionized data processing and decision-making, empowering organizations large and small with an unprecedented ability to leverage the ever-growing amount of easily accessible information. In finance, one of the most common use cases for machine learning is anomaly detection and, in particular, fraud detection. Fraud detection plays a critical role for financial institutions. This process allows banks to detect fraudulent transactions that can cause a loss of assets or reputational damage to customers. Currently, many banks use machine learning to identify fraudulent transactions; however, the classical approaches often return 80% false positives [347]. This high percentage makes ensuring data security and validity a challenging task. Quantum machine learning techniques can potentially speed up the training of the models or other parts of the whole process. We will explore these techniques in Section 7. In particular, anomaly detection will be discussed in Section 7.8.1. Other applications of quantum machine learning in finance will also be discussed in the same section, including natural language modeling (Section 7.8.2), asset pricing (Section 7.8.3), and implied volatility calculation (Section 7.8.4). Quantum computing is an emerging and promising field of science.
Global venture capital funds, public and private companies, and governments have invested millions of dollars in this technology. Quantum computing is driven by the need to solve computationally hard problems efficiently and accurately. For decades, the advancement of classical computing power has remarkably followed the well-known Moore's law. Initially introduced by Gordon Moore in 1965, Moore's law forecasts that the number of transistors in a classical computer chip will double every two years, resulting in lower prices for manufacturers and consumers. However, transistors are already at the size at which quantum-mechanical effects become apparent and problematic [207]. These effects will only become more difficult to manage as classical chips decrease in size. Thus, it is imperative to investigate quantum computing, which harnesses these effects to perform classically infeasible computations. Google has demonstrated an enormous computational speedup using quantum computing for the task of simulating the output of pseudo-random quantum circuits [26]. Specifically, Google claimed that what takes its quantum device, Sycamore, 200 seconds to accomplish would take Summit, one of the most powerful supercomputers, approximately 10,000 years. Thus, quantum supremacy has been achieved. As defined by John Preskill [272], quantum supremacy is the promise that certain computational (not necessarily useful) tasks can be executed exponentially faster on a quantum processor than on a classical one. The decisive demonstration of quantum devices solving more useful problems faster and/or more accurately than classical computers, such as large-scale financial problems (Section 2.2), is called quantum advantage. Quantum advantage is more elusive and, arguably, has not been demonstrated yet. The main challenge is that existing quantum hardware does not yet seem capable of running algorithms on large enough problem instances. Current quantum hardware (Section 3.3) is in the noisy intermediate-scale quantum (NISQ) technology era [273], meaning that the current quantum devices are underpowered and suffer from multiple issues. The fault-tolerant era refers to a, yet unknown, period in the future in which large-scale quantum devices that are robust against errors will be available. In the next two subsections we briefly discuss quantum information (Section 3.1) and models for quantum computation (Section 3.2). In the last subsection (Section 3.3) we give an overview of the current state of quantum hardware and the challenges that must be overcome. All computing systems rely on the fundamental ability to store and manipulate information. Today's classical computers operate on individual bits, which store information as binary 0 or 1 states. In contrast, quantum computers use the physical laws of quantum mechanics to manipulate information. At this fundamental level, the unit of information is represented by a quantum bit, or qubit. Physically, a qubit is any two-level quantum system [256, 298]. Mathematically, the state space of a single qubit can be associated with the complex projective line, denoted CP^1 [326]. However, it is common to consider qubit states as elements ψ, called state vectors, of a two-dimensional complex vector space, restricted to those that satisfy ‖ψ‖_2 = 1, with ψ and e^{iθ}ψ used interchangeably (i.e., to consider specific elements of the equivalence classes in CP^1).
A state vector is usually denoted by using Dirac's "bra-ket" notation: ψ is represented by the "ket" |ψ⟩ [256, 298]. Examples of two single-qubit kets are the states |0⟩ and |1⟩, which are analogous to the classical bits 0 and 1. Multi-qubit state spaces are expressed through the use of the vector-space tensor product [256, 283], which results in a 2^n-dimensional complex vector space for n qubits. The tensor product of two state vectors is denoted as |ψ_1⟩ ⊗ |ψ_2⟩, |ψ_1⟩|ψ_2⟩, or |ψ_1ψ_2⟩, and extends naturally to multiple qubits. Entanglement is a quantum phenomenon that can be present in multi-qubit systems in which the states of the subsystems cannot be described independently. This results in correlated observations (measurement results) even when physical communication between qubits, which would correlate them in a classical sense, is impossible [256]. Entangled states are mathematically expressed as non-simple tensors. A quantum state is a product state with respect to certain subsystems if those subsystems can be described independently, that is, if the state is a simple tensor with respect to those subsystems. Qubits are in a superposition state with respect to a given basis for the state space if their state vector can be expressed as a nontrivial linear combination of basis states. Thus a qubit can be in a state that is an arbitrary superposition of |0⟩ and |1⟩, whereas a classical bit can only be in one of the analogous states 0 or 1. The coefficients of the superposition are complex numbers called amplitudes. While multiplying a state by a phase e^{iθ}, called a global phase, has no physical significance, the relative phase between two basis states in a superposition does. A measurement in quantum mechanics consists of probing a system to obtain a numerical result. Measurement of a quantum system is probabilistic. A projective measurement of a system with respect to a Hermitian operator A, called an observable, results in the state vector of the system being orthogonally projected onto an eigenspace, with orthogonal projector Π_λ, and the observed quantity is the associated eigenvalue λ. A potential projective measurement result λ is observed with probability ‖Π_λ |ψ⟩‖_2^2. The expected value of the measurement is equal to ⟨ψ|A|ψ⟩, where ⟨ψ|, called a "bra," is the Hermitian adjoint of |ψ⟩. In physics, the quantum Hamiltonian is the observable for a system's energy. A ground state of a quantum Hamiltonian is a state vector in an eigenspace associated with the smallest eigenvalue and thus has the lowest energy. Transformations between states are restricted to unitary operators and (non-unitary) measurements. The dynamics of a closed quantum system follow the Schrödinger equation, iℏ (d/dt)|ψ(t)⟩ = H |ψ(t)⟩, where ℏ is the reduced Planck constant and H is the system's quantum Hamiltonian [256, 298]. The unitary operator e^{−iH∆/ℏ}, which transforms a solution |ψ(t)⟩ of the Schrödinger equation at one point in time t into another one at a later time t + ∆, is called the time-evolution operator or propagator. There can also be a time-dependent Hamiltonian, H(t), where the operator H changes over time. In that case, the propagator is computed by using the product integral [114] to take into account that H(t_i) and H(t_j) may not commute for all t_i, t_j in the specified evolution time interval [298]. The simulation of time evolution on a quantum computer is called Hamiltonian simulation. For quantum algorithms, the Hamiltonian can be an arbitrary observable with no connection to a physical system, and ℏ := 1 is often assumed.
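To make the preceding formalism concrete, here is a minimal numpy sketch (our illustration, not part of the original survey) that encodes single-qubit state vectors, applies the Born rule to obtain computational-basis measurement probabilities, and evaluates the expectation ⟨ψ|Z|ψ⟩; the chosen states are arbitrary.

```python
import numpy as np

# Computational basis states |0> and |1> as complex column vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An equal superposition (|0> + |1>)/sqrt(2); amplitudes are complex in general.
psi = (ket0 + ket1) / np.sqrt(2)
assert np.isclose(np.linalg.norm(psi), 1.0)  # state vectors are normalized

# Born rule: the probability of outcome k is |<k|psi>|^2.
print(np.abs(psi) ** 2)  # [0.5 0.5]

# Expectation value <psi|Z|psi> of the Pauli-Z observable.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
print(np.vdot(psi, Z @ psi).real)  # 0.0 for the equal superposition

# A two-qubit product state via the tensor (Kronecker) product: |0>|1> = |01>.
print(np.kron(ket0, ket1))
```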
Three important observables on a single qubit are the Pauli operators X = |0⟩⟨1| + |1⟩⟨0|, Y = −i|0⟩⟨1| + i|1⟩⟨0|, and Z = |0⟩⟨0| − |1⟩⟨1|, where |ψ⟩⟨ψ| denotes the vector outer product. For n qubits, the set {|k⟩ : k ∈ {0, 1}^n} forms the computational basis. Measuring in the computational basis probabilistically projects the quantum state onto a computational basis state. A measurement in the context of quantum computation, without further clarification, usually refers to a measurement in the computational basis. A positive semi-definite operator ρ on the quantum state space with Tr(ρ) = 1 is called a density operator, where Tr(·) is the trace. The classical probabilistic mixture of pure states |ψ_i⟩ (in other words, state vectors) with probabilities p_i has density operator ρ := Σ_i p_i |ψ_i⟩⟨ψ_i| and is called a mixed state. In general, a density operator ρ represents a mixed state if Tr(ρ^2) < 1 and a pure state if Tr(ρ^2) = 1. Fidelity is one of the distance measures for quantum states ρ and σ, defined as F(ρ, σ) = Tr(√(ρ^{1/2} σ ρ^{1/2})) [256]. The fidelity of an operation refers to the fidelity computed between the quantum state that resulted after the operation was performed and the expected state. Decoherence is the loss of information about a quantum state to its environment, due to interactions with it. Decoherence constitutes an important source of error for near-term quantum computers. For further details, the works of Nielsen and Chuang [256] and Kitaev et al. [201] are the standard references on the subject of quantum computation. The former covers a variety of topics from quantum information theory, while the latter focuses on quantum computational complexity theory. A qubit-based model of quantum computation is, simply, a methodology for performing computations on qubits. A model can realize universal quantum computation if it can be used to approximate any n-qubit unitary, for all n, with arbitrarily small error [256, 332]. Two universal models are polynomial-time equivalent if they can simulate each other with only polynomially bounded overhead in computational resources [5, 332]. We briefly discuss two polynomial-time-equivalent models that can be used to realize universal quantum computation: gate-based quantum computing and adiabatic quantum computing. Other qubit-based models exist that we do not cover [332]. For physical implementation, one must detect and correct errors that can occur during computation. Techniques for quantum error correction [215] and fault tolerance have been developed to construct logical qubits and operations that tolerate errors. This strategy has been extensively studied for gate-based quantum computing [132]. Gate-based quantum computing, also known as the quantum circuit model, composes a countable sequence of unitary operators, called gates, into a circuit that manipulates qubits starting in a fixed state [201, 256]. A circuit can be viewed as a directed acyclic graph tracking the operations applied, the qubits they operate on, and the time step at which they are applied. Typically, the circuit ends with a measurement in the computational basis. However, measurements of subsets of qubits before the end of the circuit, and operations conditioned on the results of those measurements, are also possible [275]. Sequences of gates that act on disjoint sets of qubits can, if the hardware supports it, be applied in parallel. The circuit depth is the longest sequence of gates in the circuit, from initial state until measurement, that cannot be made shorter through further parallelization.
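Returning to the density-operator formalism above, the purity criterion Tr(ρ^2) and the pure-state special case of fidelity are easy to verify numerically. The sketch below is our own illustration with arbitrarily chosen states; the general fidelity formula requires a matrix square root, which we avoid here.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

# Density operator of the pure state |+>: rho = |+><+| (an outer product).
rho_pure = np.outer(plus, plus.conj())

# Equal classical mixture of |0><0| and |1><1|: the maximally mixed qubit.
rho_mixed = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * np.outer(ket1, ket1.conj())

# Purity Tr(rho^2) is 1 for pure states and < 1 for mixed states.
print(np.trace(rho_pure @ rho_pure).real)    # 1.0
print(np.trace(rho_mixed @ rho_mixed).real)  # 0.5

# For two pure states, fidelity reduces to the overlap |<psi|phi>|.
print(abs(np.vdot(ket0, plus)))  # 1/sqrt(2) ~ 0.707
```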
The quantum circuit model is analogous to the Boolean circuit model of classical computation [201]. An important difference in quantum computation is that, because of unitarity, all gates are invertible. A shot is a single execution of a quantum circuit followed by measurements. Multiple shots are required if the goal is to obtain statistics. This model can be used to realize universal quantum computation by considering circuits with n qubits built from any type of single-qubit gate and only one type of two-qubit gate [256]. Any n-qubit unitary can be approximated in this manner to arbitrarily small error. The two-qubit gates can be used to mediate entanglement. A discrete set of gates that can be used for universal quantum computation is called a basis gate set. The basis gate set realized by a quantum device is called its native gate set. The native gate set typically consists of a finite number of (parameterized and non-parameterized) single-qubit gates and one two-qubit entangling operation. An arbitrary single-qubit gate can be approximately decomposed into the native single-qubit gates. Some common single-qubit gates are the Hadamard gate (H), the Phase (S) and π/8-Phase (T) gates, the Pauli gates (X, Y, Z), and the rotation gates generated by the Pauli operators (R_X(θ), R_Y(θ), R_Z(θ)). Examples of two-qubit gates are the controlled-NOT gate, denoted CX or CNOT, and the SWAP gate. The H, S, and CX gates generate, under composition, the Clifford group, which is the normalizer of the group of Pauli gates [256]. The time complexity of an algorithm is measured in terms of the depth of the circuit implementing it; when the circuit is run on hardware, depth is measured in terms of native gates. The majority of commercially available quantum computers implement the quantum circuit model. Adiabatic quantum computing (AQC) is a model for quantum computation that relies on the quantum adiabatic theorem. This theorem states that as long as the evolution of a time-dependent Hamiltonian, H(t), is sufficiently slow (in a sense made precise in the references), a system starting in an nth eigenstate of the Hamiltonian at t = 0 will remain in an instantaneous nth eigenstate throughout the evolution [8]. A unitary operation can be applied by following an adiabatic trajectory from an initial Hamiltonian H_i, whose ground state is easy to prepare (e.g., a product state of qubits), to the final Hamiltonian H_f, whose ground state is the result that would be obtained after applying the unitary. A computation consists of evolving a time-dependent Hamiltonian H(s) for time t_f according to a schedule s(t): [0, t_f] → [0, 1], which interpolates between the two (non-commuting) Hamiltonians: H(s) = (1 − s)H_i + sH_f [8]. The time required for an adiabatic algorithm to produce the result to some specified error is related to the minimum spectral gap among all instantaneous Hamiltonians. The spectral gap of the Hamiltonian at time t is the difference between the nth eigenvalue and the next closest one (e.g., the difference between the ground-state energy and the first excited level). This quantity determines how slow the evolution needs to be in order to remain on the adiabatic trajectory, although it is notoriously difficult to bound in practice [8]. One example where this has been done is the adiabatic version of Grover's algorithm (Section 4.1) [282].
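As a toy illustration of the adiabatic interpolation just described (our own sketch; the two-qubit Hamiltonians are arbitrary choices, not from the survey), the following code sweeps H(s) = (1 − s)H_i + sH_f and records the spectral gap between the two lowest energy levels; its minimum governs how slowly the schedule must be run.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Initial Hamiltonian: a transverse field on two qubits. Its ground state is
# the uniform superposition, which is easy to prepare.
H_i = -(np.kron(X, I2) + np.kron(I2, X))

# Final Hamiltonian: a toy diagonal cost over the four basis states (values
# chosen arbitrarily); its ground state, |11>, encodes the "answer."
H_f = np.diag([3.0, 1.0, 2.0, 0.0])

# Sweep the schedule H(s) = (1 - s) H_i + s H_f and record the spectral gap
# between the two lowest instantaneous eigenvalues.
gaps = []
for s in np.linspace(0.0, 1.0, 201):
    evals = np.linalg.eigvalsh((1 - s) * H_i + s * H_f)
    gaps.append(evals[1] - evals[0])

# The required evolution time grows as the minimum gap shrinks.
print(f"minimum spectral gap along the path: {min(gaps):.4f}")
```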
There exist classes of time-dependent Hamiltonians that, through adiabatic evolution, can approximate arbitrary unitary operators applied to qubits [8, 47]. Thus AQC can realize universal quantum computation. As of this writing, there does not exist a commercially available device for universal AQC. However, by polynomial equivalence to the circuit model, gate-based devices can efficiently simulate it. There are commercially available devices that implement a specific AQC algorithm for combinatorial optimization, called quantum annealing, discussed in Section 4.8. It is believed to be unlikely that the transverse-field Ising (stoquastic [8]) Hamiltonian used for quantum annealing is universal [47]. As mentioned, the quantum computers available today are called NISQ devices. Multiple physical realizations of qubit-based quantum computation have been developed over the past decade. The three leading technologies are superconductors [206], trapped atomic ions [65], and neutral atoms [290]. The majority of efforts to develop quantum technologies at this time are driven by industry and, to a lesser degree, by academia. Superconducting quantum computers are manufactured by IBM, Google, D-Wave, and Rigetti, among others. Trapped-ion quantum computers have been developed, for example, by IonQ, Quantinuum, and AQT. Other promising technologies are also under development, including photonic systems by PsiQuantum and Xanadu; neutral atoms by ColdQuanta, QuEra Computing, Atom Computing, and Pasqal; spins in silicon by Silicon Quantum Computing; quantum dots; molecular qubits; and topological quantum systems, which will not be reviewed in this paper. As mentioned in Section 3.2, most of the current quantum devices and those that are in development, with the exception of the D-Wave quantum annealers (Section 3.2.2), follow the quantum circuit model (Section 3.2.1). All of the mentioned quantum devices have system-level attributes that need to be considered in multi-qubit systems. Significant scientific and engineering efforts are needed to improve and optimize these devices. The most important attributes are discussed in the remainder of this section. The technical challenges of NISQ devices discussed below have a major impact on the current state of quantum algorithms. The algorithms can be roughly split into two camps: (1) near-term algorithms designed to run on NISQ devices and (2) algorithms with a theoretically proven advantage that require hardware to advance enough to supply a large number of logical qubits with extremely high-fidelity quantum gates. In this work we cover both types of algorithms. One should keep in mind that arguably none of the discussed algorithms implemented on NISQ devices provide a decisive advantage over classical algorithms and computers yet. Qubits lose their desired quantum state, or decohere (Section 3.1), over time. The decoherence times of the qubits are important attributes of a quantum device. Various mechanisms of qubit decoherence exist, such as quantum amplitude and phase relaxation processes and depolarizing quantum noise [256]. One potentially serious challenge for superconducting systems is cosmic rays [330]. While single-qubit decoherence is relatively well understood, multi-qubit decoherence processes, generally called crosstalk, pose more serious challenges [294]. Moreover, two-qubit operations have an order of magnitude higher error rate than single-qubit operations do. This makes it difficult to entangle a large number of qubits without errors.
Various error-mitigation techniques [64, 337] have been developed for the near term. In the long term, quantum error correction using logical operations, briefly mentioned in Section 3.2, will be necessary [132, 200]. However, this requires multiple orders of magnitude more physical qubits than are available on today's NISQ systems. Thus, algorithms executed on current hardware must contend with the presence of noise and are typically limited to low-depth circuits. In addition, current quantum error correction techniques theoretically combat local errors [132, 200, 294]. The robustness of these techniques to non-local errors is still being investigated [133]. Quantum circuits need to be optimally mapped to the topology of a quantum device in order to minimize errors and total run time. With current quantum hardware, qubits can interact only with neighboring subsets of other qubits. Existing superconducting and trapped-ion processors have a fixed topology that defines the allowed connectivity. However, trapped-ion devices can change the ordering of the ions in the trap to allow originally non-neighboring qubits to interact [267]. Since this approach utilizes a degree of freedom different from the one used to store the quantum information for computation, the device can technically be viewed as having an all-to-all connected topology. For existing superconducting processors, the positioning of the qubits cannot change. As a result, two-qubit quantum operations between remote qubits have to be mediated by a chain of additional two-qubit SWAP gates via the qubits connecting them. Moreover, the two-qubit gate error of current NISQ devices is high. Therefore, quantum circuit optimizers and quantum hardware must be developed with these limitations in mind. Connectivity is also a problem with current quantum annealers, as discussed in Section 6.1.1. Having fast quantum gates is important for achieving quantum supremacy and quantum advantage with NISQ devices in the quantum circuit model. In general, quantum devices operate at a much slower speed compared with modern CPUs. However, some types of quantum devices are particularly slow; for example, trapped-ion quantum processors are slower than superconducting devices, although they typically have lower gate error rates. There is a well-known trade-off between speed and fidelity. Execution time is particularly relevant for algorithms that require a large number of circuit-execution repetitions, such as variational algorithms (Section 4.7) and sampling circuits. Error rates typically increase sharply if the gate time is reduced below a certain duration. Thus, one must find the right balance, which often requires tedious calibration and fine-tuning. Another important attribute, specific to gate-based quantum devices, is the set of available quantum gates that can be executed natively, namely, those that map directly to physical operations on the system (Section 3.2.1). The existence of a diverse universal set of native gates is crucial for designing short-depth, high-fidelity quantum circuits. Developing advanced quantum compilers is, therefore, critical for efficiently mapping general quantum gates to the native gates of a particular device. In this section, we discuss foundational quantum algorithms that are the building blocks for more sophisticated algorithms. These algorithms have been extended to address problems in different domains, particularly finance, as described in Sections 5, 6, and 7.
We briefly review the concepts from computational complexity theory that are relevant to the discussions in this survey. We refer the reader to the references for more rigorous treatments of the topic. Computational complexity, the efficiency in time or space with which computational problems can be solved, is presented by using the standard asymptotic "Big-O" notation [97]: the set O(f(n)) for an upper bound on complexity (worst case) and the set Ω(g(n)) for a lower bound on complexity (best case). Also, h(n) ∈ Θ(f(n)) if and only if h ∈ O(f(n)) and h ∈ Ω(f(n)). Computational problems, which are represented as functions, are divided into complexity classes. The most common classes are defined based on decision problems, which can be represented as Boolean functions. P denotes the complexity class of decision problems that can be solved in polynomial time (i.e., the time required is upper bounded, in "Big-O" terms, by a polynomial in the input size) by a deterministic Turing machine (TM), in other words, a traditional classical computer [201, 256]. An amount of time or space required for computation that is upper bounded by a polynomial is called asymptotically efficient, whereas an amount upper bounded only by, for example, an exponential function is not. However, asymptotic efficiency does not necessarily imply efficiency in practice; the coefficients or the order of the polynomial can be very large. NP denotes the class of decision problems for which proofs of correctness, called witnesses, exist and can be verified in polynomial time on a deterministic TM [201]. A problem is hard for a complexity class if any problem in that class can be reduced to it in polynomial time, and it is complete for the class if it is both hard for the class and contained in it [24]. Problems in P can be solved efficiently on a classical or quantum device. While NP contains P, it also contains many problems not known to be efficiently solvable on either a quantum or classical device. #P is a set of counting problems. More specifically, it is the class of functions that count the witnesses, verifiable in polynomial time on a deterministic TM, of NP problems [24]. BQP, which contains P, denotes the class of decision problems solvable in polynomial time on a quantum computer with bounded probability of error [256]. A common way to represent the complexity of quantum algorithms is by using query complexity [16]. Roughly speaking, the problem setup provides the algorithm access to functions that the algorithm treats as "black boxes." These are represented as unitary operators called quantum oracles. The asymptotic complexity is computed based on the number of calls, or queries, to the oracles. The goal of quantum algorithms research is to develop quantum algorithms that provide computational speedups, that is, a reduction in computational complexity. However, it is common to find algorithms with improved efficiency in practice without any proven reduction in complexity. These algorithms typically utilize heuristics. As mentioned in Section 3.3, a majority of algorithms for NISQ devices fall into this category. When one can theoretically show a reduction in asymptotic complexity, the algorithm has a provable speedup for the problem. In the discussions that follow, we indicate the category into which each algorithm currently falls. Grover's algorithm [152] is a quantum procedure for solving unstructured search with an asymptotic quadratic speedup, in terms of the number of queries made, over the best known classical algorithm.
The goal of unstructured search is: given an oracle f : X → {0, 1}, find w ∈ X such that f(w) = 1. Sometimes w is also called a marked element. More specifically, the algorithm amplifies the probability of measuring the state |w⟩ encoding w such that f(w) = 1. In Grover's algorithm, a marked state is identified by utilizing a phase oracle, O_f : |x⟩ → (−1)^{f(x)} |x⟩, which can be realized as a composition of an oracle computing f into a register and a conditional phase gate. One can represent f by a unitary utilizing techniques for reversible computation [256]. Classically, in the worst case, f will be evaluated on all items in the set, requiring O(N) queries; Grover's algorithm makes O(√N). The complexity stated for this algorithm is said to be in terms of quantum query complexity. In fact, this algorithm is optimal for unstructured search [41]. Along with an efficient unitary simulating f, the algorithm requires the ability to query uniform superpositions of states representing elements of X. This criterion is typically referred to as quantum access to classical data [188]. The general case can be achieved by using quantum random access memory (qRAM) [139]. As of this writing, the problems associated with constructing a physical qRAM have not been solved. However, a uniform superposition over computational basis states (prepared, e.g., with Hadamard gates, Section 3.2.1) can be used for the simple case where X = {0, . . . , N − 1}. Moreover, hybrid techniques have been developed to deal with larger classical data sets without qRAM [159]. Ambainis [14] developed a variable-time version of the algorithm that more efficiently handles cases when queries can take different times to complete. The quantum amplitude amplification (QAA) algorithm [62] is a generalization of Grover's algorithm (Section 4.1). Suppose there exists an oracle O_φ that marks a quantum state |φ⟩ (e.g., O_φ acts nontrivially only on |φ⟩, say, by adding a phase shift of π). Also, assume there is a unitary U and |Φ⟩ := U |ψ⟩ for some input state |ψ⟩. If |Φ⟩ has nonzero overlap with the marked state |φ⟩, namely, ⟨φ|Φ⟩ ≠ 0, then without loss of generality |Φ⟩ = sin(θ_a) |φ⟩ + cos(θ_a) |φ^⊥⟩, where ⟨φ|φ^⊥⟩ = 0 and sin(θ_a) = ⟨φ|Φ⟩. For example, in the case of Grover's algorithm, |Φ⟩ is a uniform superposition over states corresponding to the elements of X, and |φ⟩ is the marked state. QAA will amplify the amplitude sin(θ_a), and as a consequence the probability sin^2(θ_a) of measuring |φ⟩ (using an observable with |φ⟩ as an eigenstate), to Ω(1) using O(1/sin(θ_a)) queries to U and O_φ. This is done by utilizing a series of interleaved reflection operators involving U, |ψ⟩, and O_φ [62]. This algorithm can be understood as a quantum analogue of classical probabilistic boosting techniques and achieves a quadratic speedup [62]: classically, O(1/sin^2(θ_a)) samples would be required to measure |φ⟩ with Ω(1) probability. QAA is an indispensable technique in quantum computation due to the inherent randomness of quantum mechanics. Improvements to the algorithm have been made to handle issues that occur when it is not possible to perform the required reflection about |ψ⟩ [44], when the number of queries to make (which requires knowing sin(θ_a)) is not known [57, 153], and when O_φ is imperfect with bounded error [178]. The last two improvements are also useful for Grover's algorithm (Section 4.1). A version also exists for quantum algorithms (i.e., a unitary U) that can be decomposed into multiple steps with different stopping times [15].
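The amplification mechanism can be simulated directly with state vectors. The sketch below is our own illustration of Grover's algorithm for N = 8 and an arbitrarily chosen marked index; it applies the floor of (π/4)√N oracle-plus-diffusion iterations and prints the resulting outcome probabilities.

```python
import numpy as np

N = 8         # size of the search space (3 qubits)
marked = 5    # index w with f(w) = 1, chosen arbitrarily

# Uniform superposition over all N basis states (Hadamards applied to |0...0>).
psi = np.full(N, 1 / np.sqrt(N))

# Phase oracle: flips the sign of the marked state's amplitude.
oracle = np.eye(N)
oracle[marked, marked] = -1.0

# Diffusion operator: reflection about the uniform superposition.
s = np.full((N, 1), 1 / np.sqrt(N))
diffusion = 2.0 * (s @ s.T) - np.eye(N)

# About (pi/4) * sqrt(N) iterations (2 here) nearly maximize success.
for _ in range(int(np.pi / 4 * np.sqrt(N))):
    psi = diffusion @ (oracle @ psi)

print(np.abs(psi) ** 2)  # probability mass concentrated on index 5 (~0.945)
```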
The quantum phase estimation (QPE) algorithm [199] serves as a critical subroutine in many quantum algorithms, such as HHL (Section 4.5). Given a desired additive error δ, QPE makes O(1/δ) queries to a unitary U to produce an estimate θ̂_j of the phase θ_j ∈ [0, 1) of one of the eigenvalues e^{i2πθ_j} of U, such that |θ_j − θ̂_j| ≤ δ. The probability of successfully measuring an estimate θ̂_j within error δ of the actual eigenvalue phase θ_j is Ω(1) and can be boosted to 1 − ε by adding and then discarding O(log(1/ε)) ancillary qubits [91, 256]. Alternatively, it can be boosted to 1 − 1/poly(n) by using O(log(n)) repetitions of QPE and taking the most frequent estimate, according to standard Chernoff bounds [271]. This brings the overall query complexity to O(1/δ) and O(log(n)/δ), respectively. In the cases where the actual eigenvalue phase θ_j can be represented with finite precision, QPE can perform the transformation |v_j⟩|0⟩ → |v_j⟩|θ_j⟩, where |v_j⟩ is an eigenvector of U. A more detailed explanation of the algorithm, and of what happens when the eigenvalue phases cannot be perfectly represented with the specified precision δ, is provided by Nielsen and Chuang [256]. The quantum amplitude estimation (QAE) algorithm [62] estimates the total probability of measuring states marked by a quantum oracle. Consider the state |Φ⟩, oracle O_φ, unitary U, and amplitude sin(θ_a) as defined in Section 4.2. QAE utilizes O(1/(δ sin(θ_a))) queries to U and O_φ to estimate sin^2(θ_a) to additive error δ sin^2(θ_a), that is, to relative error δ [271]. The algorithm utilizes QPE (Section 4.3) as a subroutine, which scales as O(1/δ) for desired additive error δ. Some variants of QAE do not make use of QPE [141, 150, 314] and are potentially more feasible on near-term devices. An important application of QAE in finance is Monte Carlo integration, discussed in Section 5. Other applications of QAE include estimating the probability of success of a quantum algorithm and counting. The quantum linear systems problem (QLSP) is defined as follows: given an O(1)-sparse, invertible, Hermitian matrix A ∈ R^{N×N} and a vector b ∈ R^N, output a quantum state |x⟩ that is the solution to Ax = b up to a normalization constant, with bounded error probability [110]. While this definition requires the sparsity to be independent of the dimensions of the matrix, the algorithms for quantum linear systems (QLS) have a polynomial dependence on sparsity. As can be seen from the time complexities of these algorithms, we can allow for O(polylog(N))-sparse matrices, which is what is meant when we simply say "a sparse matrix" [88]. The O(1)-sparsity specification comes from the decision-problem version of QLSP, which is BQP-complete [110, 160]. The Harrow-Hassidim-Lloyd (HHL) algorithm [160] is the first algorithm invented for solving the QLSP. It provides an exponential speedup in the system size N for well-conditioned matrices (which for QLSP implies a condition number in O(polylog(N))) [1] over all known classical algorithms for a simulation of this problem. For matrix sparsity s and condition number κ, HHL runs with worst-case time complexity O(s^2 κ^2 log(N)/ε) for desired error ε. This complexity was computed under the assumption that a procedure based on higher-order Suzuki-Trotter methods [43, 89] was used for sparse Hamiltonian simulation. However, a variety of more efficient techniques have been developed since the paper's inception [45, 225]. These potentially reduce the original HHL's quadratic dependence on sparsity.
HHL's query complexity, which is independent of the complexity of Hamiltonian simulation, is O(κ^2/ε) [99]. Alternative quantum linear systems solvers also exist that have better dependence on some of the parameters, such as almost linear dependence on the condition number from Ambainis [15] and polylogarithmic dependence on 1/ε for precision from Childs, Kothari, and Somma (CKS) [88]. In addition, Wossnig et al. [344] utilized the quantum singular value estimation (QSVE) algorithm of Kerenidis and Prakash [188] to implement a QLS solver for dense matrices. This algorithm has O(√N polylog(N)) dependence on N and hence offers no exponential speedup. However, it still obtains a quadratic speedup over HHL for dense matrices. Following this, Kerenidis and Prakash generalized the QSVE-based linear systems solver to handle both sparse and dense matrices and introduced a technique for spectral norm estimation [189]. In addition, the quantum singular value transformation (QSVT) framework provides methods for QLS that have the improved dependencies on κ and 1/ε mentioned above [73, 149]. The QSVT framework also provides algorithms for a variety of other linear-algebraic routines, such as implementing singular-value-threshold projectors [138, 188] and matrix-vector multiplication [73]. As an alternative to QLS based on the QSVT, there is a discrete-time adiabatic (Section 3.2.2) approach [117] to the QLSP by Costa et al. [99] that has optimal [160] query complexity: linear in κ and logarithmic in 1/ε. Since QLS algorithms invert Hermitian matrices, they can also be used to compute the Moore-Penrose pseudoinverse of an arbitrary matrix. This requires computing the Hermitian dilation of the matrix and filtering out singular values that are near zero [138, 160]. While solving the QLSP does not provide classical access to the solution vector, such access can still be obtained by utilizing methods for vector-state tomography [190]. Applying tomography no longer allows for an exponential speedup in the system size N. However, polynomial speedups using tomography are still possible for some linear-algebra-based algorithms (Section 6.2). In addition, without classical access to the solution, certain statistics, such as the expectation of an observable with respect to the solution state, can still be obtained without losing the exponential speedup in N. Providing quantum access to A and b is a difficult problem that can, in theory, be dealt with by using quantum models for data access. Two main models of data access for quantum linear algebra routines exist [73]: the sparse-data access model and the quantum-accessible data structure. The sparse-data access model provides efficient quantum access to the nonzero elements of a sparse matrix. The quantum-accessible data structure, suitable for fixed-size, non-sparse inputs, is a classical data structure stored in a quantum memory (e.g., qRAM) that provides efficient quantum queries to its elements [73]. Lastly, for problems that involve only low-rank matrices, classical Monte Carlo methods for numerical linear algebra (e.g., the FKV algorithm [134]) can be used. These techniques have been used to produce classical algorithms that have an asymptotic exponential speedup in dimension for various problems involving low-rank linear algebra (e.g., some machine learning problems) [82, 317]. As of this writing, these classical "dequantized" algorithms have impractical polynomial dependence on other parameters, making the quantum algorithms still useful.
In addition, since these results do not apply to sparse linear systems, there is still the potential for provable speedups for problems involving sparse, high-rank matrices. Formally, random walks are discrete-time stochastic processes formed through the summation of independent and identically distributed random variables [209] and play a fundamental role in finance [50]. In the limit, such discrete-time processes approach the continuous-time Wiener process [202]. One type of stochastic process whose logarithm follows a Wiener process is geometric Brownian motion, commonly used to model market uncertainty (Section 2.2.1). Random walks have also been generalized to the vertices of graphs [224]. In this case they can be viewed as finite-state Markov chains over the set of vertices [257]. Quantum walks are a quantum-mechanical analogue of the processes mentioned above [185]. Generally, a quantum walk in discrete time can be viewed as a Markov chain over the edges of a graph (i.e., a quantum analogue of a classical bipartite walk) [229, 315] and consists of a series of interleaved reflection operators. This generalizes QAA (Section 4.2) and Grover's algorithm (Section 4.1) [18, 304]. Discrete-time quantum walks have been shown to provide polynomially faster mixing times [4, 17, 243] as well as polynomially faster hitting times for Markov-chain-based search algorithms [13, 19, 23, 84, 229, 309, 315]. Alternatively, there are quantum walks in continuous time [86, 125], which have certain provable advantages [87] using a different notion of hitting time. A similar result, using this notion, has also been shown for discrete-time quantum walks [186]. In addition, the various mixing operators used in QAOA (Section 6.1.2) can be viewed as continuous-time quantum walks connecting the feasible solutions [289]. There have been multiple proposed unifications of quantum walks in continuous and discrete time [83, 327], a unification that exists for the classical versions [202]. For a comprehensive review of quantum walks, see the survey by Venegas-Andraca [327]; for an overview and a discussion of connections between the various quantum-walk-based search algorithms, see the work of Apers et al. [23]. Variational quantum algorithms (VQAs) [71], also known as classical/quantum hybrid algorithms, are a class of algorithms, typically for gate-based devices, that modify the parameters of a quantum (unitary) procedure based on classical information obtained from running the procedure on a quantum device. This information is typically in the form of a cost function dependent on the expectations of observables with respect to the state that the quantum procedure produces. In general, the training of quantum variational algorithms is NP-hard [49]. In addition, these methods are heuristic in nature. The prototypical quantum-circuit-based variational algorithm is the variational quantum eigensolver (VQE) [266], which is used to compute the minimum eigenvalue of an observable (e.g., the ground-state energy of a physically realizable Hamiltonian). However, it is also applicable to optimization problems (Section 6.1.3) and to both linear (e.g., quantum linear systems [63, 175]) and nonlinear problems (e.g., nonlinear PDEs [226]). VQE utilizes one of the quantum variational principles, based on the Rayleigh quotient of a Hermitian operator (Section 6.1.3).
The quantum procedure that provides the state, called the ansatz, is typically a parameterized quantum circuit (PQC) built from Pauli rotations and two-qubit gates (Section 3.2.1). For VQAs in general, finding the optimal set of parameters is a non-convex optimization problem [177]. A variation of this algorithm specific to combinatorial optimization problems, known as the quantum approximate optimization algorithm (QAOA) [129], utilizes the alternating operator ansatz [157] and applies only to diagonal observables. There are also variational algorithms for approximating certain dynamics, such as real-time and imaginary-time quantum evolution, that utilize variational principles for dynamics [348] (e.g., McLachlan's principle [234]). Furthermore, variational algorithms utilizing PQCs and various cost functions have been applied to machine learning tasks (Section 7.6) [239]. VQAs are seen as one of the leading potential applications of near-term quantum computing.

Adiabatic quantum optimization [127], commonly referred to as quantum annealing (QA), is an AQC (Section 3.2.2) algorithm for quadratic unconstrained binary optimization (QUBO). QUBO is an NP-hard combinatorial optimization problem; such problems are discussed in Section 6.1. The main reason behind the interest in QA is that it provides a nonclassical heuristic, quantum tunneling, that is potentially helpful for escaping from local minima. Its closest classical analogue is simulated annealing [324], a Markov chain Monte Carlo method that was inspired by classical thermodynamics and uses temperature as a heuristic to escape from local minima. So far QA has proven useful for solving QUBOs with tall yet narrow peaks in the cost function [109, 184]. The overall benefit of QA is still a topic of ongoing research [109, 128]. However, the current scale of quantum annealers allows for potential applicability in the near term [7].

Monte Carlo methods utilize sampling to approximate the solutions to problems that are intractable to solve analytically or with numerical methods that scale poorly for high-dimensional problems. Classical Monte Carlo methods have been used for inference [253], integration [280], and optimization [170]. Monte Carlo integration (MCI), the focus of this section, is critical to finance for risk and pricing predictions [143]. As mentioned in Section 2.2.1, these tasks often require large amounts of computational power and time to achieve the desired precision in the solution. Interestingly, there exists a quantum algorithm with a proven advantage over classical Monte Carlo methods for numerical integration. In finance, MCI is typically used to estimate the expectation of an unknown financial quantity (e.g., the price of a financial derivative) that is a function of other nondeterministic variables (e.g., the market state). The methodology usually starts with a stochastic model for the underlying random variables, from which samples can be drawn and the corresponding values of the target quantity subsequently evaluated. The average of all obtained values of the target quantity is then computed and used as the estimate for that quantity. For example, in the case of derivative pricing, the underlying random variables could be the prices of an asset at various time points in the future, denoted X_1, ..., X_n.
These random variables are often assumed to follow a diffusion process [202], and the collective values of the random variables from the outcome of a single drawing are known as a sample path. The goal is then to estimate the expected payoff of a financial contract derived from this asset, generally formulated as E[g(X_1, ..., X_n)]. In order to estimate the expected payoff, many sample paths are drawn, and sample averages of the payoff function g(X_1, ..., X_n) are computed. In accordance with the law of large numbers [280], the estimation error decays as O(1/√N_s) independent of the problem dimension, where N_s is the number of sample paths taken.

The QAE algorithm (Section 4.4) provides a quadratic speedup for Monte Carlo integration [241]. With QAE, using the derivative pricing example, the error in the expectation computation decays as O(1/N_q), where N_q is the number of queries made to a quantum oracle that computes g(X_1, ..., X_n). This is in contrast to the complexity in terms of the number of samples of g(X_1, ..., X_n) mentioned earlier for classical Monte Carlo integration. Thus, if samples are considered as classical queries, QAE requires quadratically fewer queries than classical MCI in order to achieve the same desired error.

The general procedure for Monte Carlo integration utilizing QAE is outlined below [72]. Let Ω be the set of potential sample paths ω of a stochastic process distributed according to p(ω), and let f : Ω → A be a real-valued function on Ω, where A ⊂ R is bounded. The task is to compute E[f] [241], which can be achieved with QAE in three steps as follows.

1. Construct a unitary operator P_l to load a discretized and truncated version of p(ω). The probability value p(ω) translates to the amplitude of the quantum state |ω⟩ representing the discrete sample path ω. In mathematical form,

P_l |0⟩ = Σ_{ω∈Ω} √(p(ω)) |ω⟩.

2. Convert f into a normalized function f̃ : Ω → [0, 1], and construct a unitary P_f that computes f̃(ω) and loads the value onto the amplitude of |ω⟩. The resultant state after applying P_l and P_f is

Σ_{ω∈Ω} √(p(ω)) |ω⟩ ( √(1 − f̃(ω)) |0⟩ + √(f̃(ω)) |1⟩ ).

3. Using the notation from Section 4.4, perform quantum amplitude estimation with U = P_f P_l and an oracle, O_φ, that marks states whose last qubit is |1⟩. The result of QAE will be an approximation to

Σ_{ω∈Ω} p(ω) f̃(ω) = E[f̃],

from which an estimate of E[f] is recovered by undoing the normalization.

There are two main financial applications that can be solved via quantum Monte Carlo integration: derivative pricing and risk analysis, which are presented in this section. We next discuss the application of quantum MCI to the pricing of financial derivatives. Specifically, we consider two types of derivatives: options and collateralized debt obligations (CDOs).

Options. The definition of an option was given in Section 2.2.1. The goal of option pricing is to determine the current fair value of an option given the sources of uncertainty about the future movements of the underlying asset price and other market variables. In order to numerically estimate the fair value, a large number of samples of the market variables are generated, upon which Monte Carlo integration is applied to compute the expected value of a payoff function [58]. Stamatopoulos et al. [311] constructed quantum procedures for the uncertainty-loading operator P_l and the payoff operator P_f for European options, which are path independent. P_l loads a distribution over the computational basis states representing the price at maturity, and P_f approximately computes the rectified linear function characteristic of the European option payoff.
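To make the query-counting discussion concrete, the following self-contained numpy sketch classically simulates the measurement statistics of maximum-likelihood QAE (the QPE-free variant of Suzuki et al. mentioned in Section 4.4) for a toy European-call payoff. The discretized distribution, strike, shot count, and power schedule are illustrative assumptions, not values from the literature; this is a classical simulation of the statistics, not a circuit implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy setup: discretized lognormal-like prices and a normalized call payoff.
n_qubits = 4
prices = np.linspace(50, 150, 2**n_qubits)
weights = np.exp(-0.5 * (np.log(prices / 100) / 0.2) ** 2) / prices
p = weights / weights.sum()                 # discretized distribution p(omega)
payoff = np.maximum(prices - 100.0, 0.0)
f_tilde = payoff / payoff.max()             # normalized payoff in [0, 1]

a = float(np.sum(p * f_tilde))              # amplitude a = E[f~] targeted by QAE
theta = np.arcsin(np.sqrt(a))

# After m applications of the Grover operator built from U = P_f P_l, the
# marked qubit is measured as |1> with probability sin^2((2m+1) theta).
schedule = [0, 1, 2, 4, 8, 16, 32]          # exponentially growing powers
shots = 100
hits = [rng.binomial(shots, np.sin((2 * m + 1) * theta) ** 2) for m in schedule]

# Maximum-likelihood estimate of a over a fine grid of candidate angles.
grid = np.linspace(1e-6, np.pi / 2 - 1e-6, 20001)
loglik = np.zeros_like(grid)
for m, h in zip(schedule, hits):
    pm = np.clip(np.sin((2 * m + 1) * grid) ** 2, 1e-12, 1 - 1e-12)
    loglik += h * np.log(pm) + (shots - h) * np.log(1 - pm)
a_hat = np.sin(grid[np.argmax(loglik)]) ** 2

n_queries = shots * sum(2 * m + 1 for m in schedule)
print(f"true E[f~] = {a:.5f}, ML-QAE estimate = {a_hat:.5f}, "
      f"oracle queries = {n_queries}")
```

Because the deepest circuits dominate the query count, the estimation error of such a schedule shrinks roughly as O(1/N_q) rather than the classical O(1/√N_s).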
Quantum MCI has also been applied to path-dependent options (Section 2.2.1), such as barrier and Asian options [278, 311], and to multi-asset options [311]. For these options, prices for different assets or at different points on a sample path are represented by separate quantum registers, and the payoff operator P_f is applied to all of these registers. Pricing of more complicated derivatives with QAE can also be found in the literature; for example, Chakrabarti et al. performed an in-depth resource estimation and error analysis for QAE applied to the pricing of autocallable options and target accrual redemption forwards [72].

Collateralized Debt Obligations. A CDO is a structured financial product backed by a pool of loans and other assets that serve as collateral if the loan defaults. The tool is used to free up capital and shift risk. A typical CDO pool is divided into three tranches: equity, mezzanine, and senior. The equity tranche is the first to bear loss; the mezzanine tranche investors bear loss if the loss is greater than the first attachment point; and the senior tranche investors lose money if the loss is greater than the second attachment point. Although the senior tranche is the most protected from loss, other default events can cause the CDO to collapse, such as the events that caused the 2008 financial crisis [318]. Conditional independence models [213] are often used for estimating the chances of parties defaulting on the credit in the CDO pool. In such models, given the systemic risk Z, the default risks X_1, ..., X_n of the assets involved are independent; Z is used as a latent variable to introduce correlations between the default risks. The distributions of the default risks and the systemic risk are usually either Gaussian [213] or normal-inverse Gaussian [30]. Tang et al. [318] presented quantum procedures for both the P_l and P_f operators used for quantum Monte Carlo integration. The goal was to utilize QAE to estimate the expected loss of a given tranche. P_l is a composition of rotations to load the uncorrelated probabilities of default (i.e., Bernoulli random variables); a procedure is then used for loading the distribution of Z, and controlled rotations apply the correlations. P_f computes the loss under a given realization of the variables X_1, ..., X_n and uses a quantum comparator to determine whether the loss falls within the specified tranche range. QAE is then used, instead of classical MCI, to compute the expectation of the total loss given that it falls within the tranche range.

In the following discussions, we review proposals for applying quantum MCI to the computation of important financial risk metrics (Section 2.2.1): value at risk and economic capital requirement.

Value at Risk. The value at risk of a random variable X, VaR_α[X], is the smallest value x such that P[X ≤ x] ≥ α, and the conditional value at risk, CVaR_α[X], is the expectation of X conditioned on X exceeding VaR_α[X] [342]; α is generally around 99.9% for the finance industry. QAE can be used to evaluate VaR_α and CVaR_α faster than with classical MCI. The procedure to do this, by Woerner and Egger [121, 342], is as follows. Similar to the CDO case above (Section 5.1.1), the risk can be modeled by using a Gaussian conditional independence model, and quantum algorithms for computing VaR_α and CVaR_α via QAE can utilize the same realization of P_l as for the CDO case [342]. Note that P[X ≤ x] = E[h(X)], where h(X) = 1 for X ≤ x and 0 otherwise. Thus P_f implements a quantum comparator, similar to both of the previous applications discussed for derivatives. A bisection search over x, using multiple iterations of QAE, can then identify the value of x such that P[X ≤ x] ≥ α. For CVaR_α, a comparator checks against the obtained value of VaR_α when computing the conditional expectation of X.
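The bisection loop itself is purely classical; each cumulative-probability evaluation is what QAE would supply. In the minimal sketch below, a classical sampling estimator stands in for the QAE call, and the loss distribution and shot count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.normal(0.0, 1.0, 200_000)      # toy loss distribution

def estimate_cdf(x, shots=4096):
    """Estimate P[X <= x] = E[h(X)]; in the quantum algorithm this number
    would be the amplitude returned by one QAE run with a comparator P_f."""
    sample = rng.choice(losses, size=shots)
    return np.mean(sample <= x)

def var_bisection(alpha, lo=-10.0, hi=10.0, iters=30):
    """Bisect on x until P[X <= x] ~ alpha, as in Woerner and Egger."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if estimate_cdf(mid) >= alpha:
            hi = mid                         # CDF already above alpha: move left
        else:
            lo = mid                         # CDF below alpha: move right
    return hi

alpha = 0.999
var = var_bisection(alpha)
cvar = losses[losses > var].mean()           # conditional expectation beyond VaR
print(f"VaR ~ {var:.3f} (exact normal quantile ~ 3.090), CVaR ~ {cvar:.3f}")
```

The quadratic advantage enters only inside estimate_cdf; the number of bisection iterations is the same in the classical and quantum versions.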
Economic Capital Requirement. The economic capital requirement (ECR) summarizes the amount of capital required to remain solvent at a given confidence level and time horizon [121]. Egger et al. [121] presented a method using quantum MCI to compute the ECR. Consider a portfolio of K assets with L_1, ..., L_K denoting the losses associated with each asset, so that the total loss is L = Σ_{k=1}^{K} L_k. The losses can be modeled by using the same Gaussian conditional independence model discussed above. The ECR at confidence level α is defined as

ECR_α[L] = VaR_α[L] − E[L],

where α is generally around 99.9% for the finance industry. The expected total loss E[L] can be efficiently computed classically, so only the estimate of VaR_α[L] is computed by using QAE [121].

As of this writing, no demonstration of quantum advantage (Section 3) with quantum MCI has been made, because of the limitations of current quantum hardware (Section 3.3). For example, many of the financial applications discussed above were executed by using fewer than five qubits, which means that the quantum circuits can be simulated easily on a laptop. Two of the main limitations are loading the stochastic dynamics onto the quantum device and computing the payoff function [55]. For realistic problems, these procedures require high precision (many qubits) for arithmetic, error correction for high-quality qubits, and high clock frequencies to handle the large T-gate (Section 3.2.1) or Toffoli-gate depths [72]. Furthermore, the loading of arbitrary uncertainty distributions requires circuits that are exponentially deep in the number of qubits [270]. The Grover-Rudolph algorithm [151], a proposed efficient distribution-loading strategy for quantum MCI, has been shown to be insufficient to achieve quantum speedup [165]. In addition, the resources for the arithmetic need to be tuned carefully so as not to lose the quadratic speedup. If fault tolerance is taken into account, advances in the field of quantum error correction are required in order to realize quantum advantage [27].

Despite these challenges, some progress has been made in alleviating the difficulties of implementing quantum MCI. For distribution loading, variational methods (Section 4.7) have been developed to load stochastic models [351], such as normal distributions [72]. More specifically, models based on geometric Brownian motion can be implemented by utilizing procedures for loading standard normal distributions, followed by quantum arithmetic to modify the distribution's parameters [72, 158]. Moreover, the number of oracle calls to the distribution-loading and payoff procedures used by QAE can be reduced at the cost of a smaller speedup [141, 142]. The difficulty of implementing the QPE required for QAE in the near term can be mitigated by using the variants mentioned in Section 4.4. In addition, Plekhanov et al. proposed a variational QAE algorithm [269] that combines the approximation of quantum states via variational methods and low-depth PQCs with the maximum-likelihood-estimation approach to QAE of Suzuki et al. [314]. Potential methods for near-term implementations of payoff functions have been proposed by Herbert [166], who considered functions that can be approximated by truncated Fourier series. Also worth noting are non-unitary methods for efficient state preparation [144, 275]. However, these methods are usually incompatible with QAE because of the non-invertibility of the operations involved.
The methods would be more useful in a broader context where the inverse of the state-preparation operation is not required; this topic is outside the scope of this section.

Solving optimization problems (e.g., portfolio optimization, arbitrage) presents some of the most promising commercially relevant applications on NISQ hardware (Section 3.3). In this section we discuss the potential of using quantum algorithms to help solve various optimization problems. We start by discussing quantum algorithms for NP-hard combinatorial optimization problems [173, 343] in Section 6.1, with particular focus on those with quadratic cost functions. Neither classical nor quantum algorithms are believed to be able to solve these NP-hard problems asymptotically efficiently (Section 4). However, the hope is that quantum computing can solve realistic instances of these problems faster than classical computing, enough so to have an impact in practice. Section 6.2 focuses on convex optimization [56]. In its most general form, convex optimization is NP-hard. However, in a variety of situations such problems can be solved efficiently [254], and they enjoy many useful properties and a rich theory [56]. There are quantum algorithms utilizing QLS (Section 4.5) and other techniques that provide polynomial speedups for certain convex, specifically conic, optimization problems; these could have a significant impact in practice. Section 6.3 discusses large-scale optimization, which utilizes hybrid approaches that decompose large problems into smaller subtasks solvable on a near-term quantum device. The more general mixed-integer programming problems, in which there are both discrete and continuous variables, can potentially be handled by using quantum algorithms for optimization (both those of Section 6.1 and, in certain scenarios, Section 6.2) as subroutines in a classical/quantum hybrid approach (i.e., decomposing the problem in a manner similar to the ways mentioned in Section 6.3) [7, 59]. Many quantum algorithms for combinatorial optimization are NISQ friendly and heuristic (i.e., with no asymptotically proven benefits), whereas the convex optimization algorithms are usually intended for the fault-tolerant era of quantum computation.

In the general case, integer programming (IP) problems, which involve variables restricted to integers, are NP-hard [343]. This section focuses on combinatorial optimization, sometimes used synonymously with discrete or integer optimization. In this paper, combinatorial optimization refers specifically to integer optimization problems whose constraints allow only for binary variables, namely, binary integer programs [343]. Problems that consist of selecting objects from a finite set to optimize some cost function, potentially subject to constraints, fit this formalism. Moreover, integer programs can be reduced to binary programs by representing integers in binary or by using one-hot encoding. Most of the financial optimization problems presented in this article can be reduced to combinatorial problems with quadratic cost functions, called binary quadratic programming problems [343]. A binary quadratic program without constraints, which is also NP-hard, is called a quadratic unconstrained binary optimization (QUBO) problem [204] and lends itself naturally to many quantum algorithms. A QUBO problem on N binary variables can be expressed as

min_{x ∈ B^N} x^T Q x + b^T x,    (3)

where b ∈ R^N, Q ∈ R^{N×N}, and B = {0, 1}.
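As a concrete instance of Equation (3), the following minimal numpy sketch enumerates all 2^N assignments of a small QUBO; the matrix entries are arbitrary illustrative values. Exhaustive search is exponential in N, which is exactly why the heuristics discussed next are of interest. The last lines preview the change of variables to spins discussed in the following paragraphs.

```python
import itertools
import numpy as np

# Arbitrary small QUBO instance: minimize x^T Q x + b^T x over x in {0,1}^N.
N = 4
Q = np.array([[ 0.0,  2.0, -1.0,  0.0],
              [ 2.0,  0.0,  0.5, -2.0],
              [-1.0,  0.5,  0.0,  1.0],
              [ 0.0, -2.0,  1.0,  0.0]])
b = np.array([1.0, -1.0, 0.5, -0.5])

def qubo_cost(x):
    return float(x @ Q @ x + b @ x)

# Exhaustive search over all 2^N binary vectors (exponential in N).
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=N)),
           key=qubo_cost)
print("optimal x:", best, "cost:", qubo_cost(best))

# The change of variables z_j = 2 x_j - 1 maps this cost onto an Ising-type
# energy, so the same optimum is the ground state of an Ising Hamiltonian.
z = 2 * best - 1
print("spin configuration z:", z)
```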
Unconstrained integer quadratic programming (IQP) problems can be converted to QUBO in the same way that IP can be reduced to general binary integer programming [259]. Constraints can be accounted for by optimizing the Lagrangian function [56] of the constrained problem, where each dual variable is a hyperparameter that signifies a penalty for violating the constraint [227].

QUBO has a connection with finding the ground state of the generalized Ising Hamiltonian from statistical mechanics [227]. A simple example of an Ising model is a two-dimensional lattice under an external longitudinal magnetic field that is potentially site dependent; in other words, the strength of the field at a site on the lattice is a function of location. At each site j, the direction of the magnetic moment of spin z_j may take on values in the set {−1, +1}. The value of z_j is influenced by both the moments of its neighboring sites and the strength of the external field. This model can be generalized to an arbitrary graph, where the vertices represent the spin sites and weights on the edges signify the interaction strength between the sites, hence allowing for long-range interactions. The classical Hamiltonian for this generalized model is

H(z) = − Σ_{i<j} J_ij z_i z_j − Σ_j h_j z_j,

where the magnitude of J_ij represents the amount of interaction between sites i and j and its sign the desired relative orientation, and h_j represents the external field strength and direction at site j [182]. Using the change of variables z_j = 2x_j − 1, a QUBO cost function as defined in Equation (3) can be transformed into a classical Ising Hamiltonian, with x_j being the jth element of x. The classical Hamiltonian can then be "quantized" by replacing the classical variable z_j for the jth site with the Pauli-Z operator σ^z_j (Section 3.1). Therefore, finding the optimal solution of the QUBO is equivalent to finding the ground state of the corresponding Ising Hamiltonian. Higher-order combinatorial problems can be modeled by including terms for multi-spin interactions, and some of the quantum algorithms presented in this section can handle such problems (e.g., QAOA in Section 6.1.2). From the quantum point of view, these higher-order versions are observables that are diagonal in the computational basis. The following subsections show more fully why the quantum Ising formulation, or more generally the diagonal-observable formulation, yields quantum algorithms for solving combinatorial problems. In Section 6.4 we discuss a variety of financial problems that can be formulated as combinatorial optimization problems and solved by using the quantum methods described in this section.

Quantum annealing, initially mentioned in Section 4.8, consists of adiabatically evolving according to the time-dependent transverse-field Ising Hamiltonian [182], which is mathematically defined as

H(t) = −A(t) Σ_i σ^x_i + B(t) H_Ising,    (4)

where H_Ising is the quantum Ising Hamiltonian encoding the problem. This formulation natively allows for solving QUBO. As discussed earlier, with proper encoding and additional variables, unconstrained IQP problems can be converted to QUBO. Techniques also exist for converting higher-order combinatorial problems into QUBO problems, at the cost of more variables [74, 115]. QUBO, as mentioned earlier, can, through a transformation of variables, be encoded in the site-dependent longitudinal magnetic field strengths {h_i} and coupling terms {J_ij} of an Ising Hamiltonian. The QA process initializes the system in a uniform superposition of all states, which is the ground state of −Σ_i σ^x_i, where σ^x_i is the Pauli-X (Section 3.2) operator applied to the ith qubit.
This initialization implies the presence of a large transverse-field component, A(t) in Equation (4), at the start of the anneal. The annealing schedule, controlled by tuning the strength of the transverse magnetic field, is defined by the functions A(t) and B(t) in Equation (4). If the evolution is slow enough, according to the adiabatic theorem (Section 3.2.2), the system will remain in the instantaneous ground state for the entire duration and end in the ground state of the Ising Hamiltonian encoding the problem. This process is called forward annealing. There also exists reverse annealing [198], which starts in a user-provided classical state, increases the transverse-field strength, pauses, and then, assuming an adiabatic path, ends in a classical state. The pause is useful when reverse quantum annealing is used in combination with other classical optimizers. The D-Wave devices are examples of commercially available quantum annealers [258]. These devices have limited connectivity, so extra qubits are needed to encode arbitrary QUBO problems (i.e., Ising models with interactions at arbitrary distances). Finding such an embedding is in general a hard problem [221]; thus heuristic algorithms [68] or precomputed templates [146] are typically used. Assuming the hardware topology is fixed, however, an embedding can potentially be found efficiently for QUBOs with certain structure [329]. Alternatively, the embedding problem associated with restricted connectivity can be avoided if the hardware supports three- and four-qubit interactions, by using the LHZ encoding [210] or its extension by Ender et al. [123]; the extension even supports higher-order binary optimization problems [118, 130]. Moreover, the current devices for QA provide no guarantees on the adiabaticity of the trajectory that the system follows. Despite these limitations of today's annealers, a number of financial problems can still be solved by using these devices (Section 6.4).

QAOA is a variational algorithm (Section 4.7) initially inspired by the adiabatic evolution of quantum annealing. The initial formulation of QAOA can be seen as a truncated or Trotterized [256] version of the QA evolution with a finite number of time steps; QAOA recovers the adiabatic trajectory in the limit of infinitely many time steps, which provides evidence of good convergence properties [129]. The algorithm caters to gate-based computers, although it has also been tested on an analog quantum simulator [164, 307]. QAOA can be used to solve general unconstrained combinatorial problems, which allows it to solve, without additional variables, a broader class of problems than QA can. As with QA, constraints can be encoded by using the Lagrangian of the optimization problem (i.e., penalty terms). Consider an unconstrained combinatorial maximization problem with N binary variables {x_i}_{i=1}^{N} concatenated into the bit string z := x_1 ... x_N and cost function f(z). The algorithm seeks a string z* that maximizes f. This is done by first preparing a parameter-dependent N-qubit quantum state (realizable as a PQC, Section 4.7)

|γ, β⟩ = U(B, β_p) U(C, γ_p) ··· U(B, β_1) U(C, γ_1) |s⟩,    (5)

where γ = (γ_1, ..., γ_p) and β = (β_1, ..., β_p). U(C, γ) is the phase operator defined as U(C, γ) = e^{−iγC}, where C is a diagonal observable that encodes the objective function f(z). In addition, U(B, β) is the mixing operator defined as U(B, β) = e^{−iβB}, where, in the initial QAOA formulation, B = Σ_{i=1}^{N} σ^x_i.
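Putting the pieces of Equation (5) together, the following dense-matrix numpy sketch assembles the ansatz for an assumed two-qubit diagonal cost and optimizes a depth-one layer by grid search in place of a classical optimizer; the cost values, depth, and grid are illustrative assumptions. It starts from the uniform superposition |s⟩, whose role is elaborated immediately below.

```python
import numpy as np
from scipy.linalg import expm

# Toy QAOA ansatz, Equation (5), for N = 2 qubits.
# Illustrative diagonal cost: f(00)=0, f(01)=2, f(10)=1, f(11)=3.
C = np.diag([0.0, 2.0, 1.0, 3.0])

X = np.array([[0, 1], [1, 0]], dtype=float)
I = np.eye(2)
B = np.kron(X, I) + np.kron(I, X)          # mixing Hamiltonian sum_i sigma^x_i

s = np.full(4, 0.5, dtype=complex)         # uniform superposition |s>

def qaoa_state(gammas, betas):
    """Apply alternating e^{-i gamma C} and e^{-i beta B} layers to |s>."""
    psi = s
    for g, bt in zip(gammas, betas):
        psi = expm(-1j * g * C) @ psi      # phase operator U(C, gamma)
        psi = expm(-1j * bt * B) @ psi     # mixing operator U(B, beta)
    return psi

def expectation(gammas, betas):
    psi = qaoa_state(gammas, betas)
    return float(np.real(psi.conj() @ C @ psi))

# Crude grid search over (gamma, beta) at depth p = 1.
grid = np.linspace(0, np.pi, 40)
g_best, b_best = max(((g, bt) for g in grid for bt in grid),
                     key=lambda gb: expectation([gb[0]], [gb[1]]))
print("best <C> at p=1:", expectation([g_best], [b_best]), "(max f = 3)")
```

A measurement of the optimized state in the computational basis then samples bit strings with high cost with amplified probability, as the text describes next.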
The initial state, |s⟩, is a uniform superposition, which is an excited state of B with maximum energy. However, other choices for B exist that allow QAOA to incorporate constraints without penalty terms [157]. Equation (5), where the choice of B can vary, is called the alternating operator ansatz [157]. Preparation of the state is followed by a measurement in the computational basis (Section 3.1), indicating the assignments of the N binary variables. The parameters are updated such that the expectation of C, that is, ⟨γ, β| C |γ, β⟩, is maximized. Note that the cost function can be multiplied by −1 to convert the problem into a minimization. The structure of the QAOA ansatz allows for finding good parameters purely classically for certain problem instances [129]. In general, however, finding such parameters is a challenging task; as mentioned earlier (Section 4.7), the training of quantum variational algorithms is NP-hard. Nevertheless, much recent research has proposed practical approaches for effective estimation of the parameters and of the required circuit depth [135, 299, 302, 312].

The VQE algorithm, mentioned in Section 4.7, is a prominent algorithm in the NISQ era because it can be run on current hardware. Essentially, the algorithm approximates the smallest eigenvalue of some observable and gives a prescription for reconstructing an approximation of a corresponding eigenstate. Given an observable, the expected value, or equivalently the Rayleigh quotient (Section 4.7), with respect to any state is an upper bound on the smallest eigenvalue, with equality only when the state is an eigenstate with the smallest eigenvalue. The variational procedure based on this principle is known as the Rayleigh-Ritz method [348]. VQE utilizes a parameterized ansatz to represent a space of candidate wave functions. A classical optimizer modifies the parameters to minimize the expectation of the observable (i.e., the cost function) and thus tries to approach the minimum eigenvalue. The hybrid system works in an iterative loop in which the trial wave function is loaded onto the quantum processor and the classical computer feeds the quantum processor new sets of parameters to improve the trial state based on the computed expected value. VQE is more general than QAOA (Section 6.1.2): QAOA specifically uses the alternating operator ansatz and finds only the minimum eigenvalue of diagonal observables. Thus VQE can also be used to find a minimum-eigenvalue state of the diagonal observable used to encode a combinatorial problem. In addition, VQE can utilize any PQC as an ansatz. However, certain ansätze are more suitable for particular problems because they take into account prior information about the ground-state wave function(s) [266]. Alternatively, there exist evolutionary, noise-resistant techniques for optimizing the structure of the ansatz and significantly increasing the space of candidate wave functions [274]. Lastly, Amaro et al. [12] introduced quantum variational filtering, which can be used to increase the probability of sampling low-eigenvalue states of an observable when the filtering operator is applied to an arbitrary quantum state. Their algorithm, Filtering VQE, was applied to combinatorial optimization problems and outperformed both standard VQE and QAOA.

In relativistic terms, real-time evolution of a quantum state happens in Minkowski spacetime, which, unlike Euclidean space, has an indefinite metric tensor [249].
Imaginary-time evolution (ITE) [262] transforms the evolution to Euclidean space and is performed by making the replacement t → it = τ, known as a Wick rotation [339]. The imaginary-time dynamics follow the Wick-rotated Schrödinger equation

∂|ψ(τ)⟩/∂τ = −(H − E_τ) |ψ(τ)⟩,    (6)

where H is a quantum Hamiltonian and E_τ = ⟨ψ(τ)| H |ψ(τ)⟩. The time-evolved state (the solution to Equation (6)) is |ψ(τ)⟩ = C(τ) e^{−Hτ} |ψ(0)⟩, where C(τ) is a time-dependent normalization constant [232]. As τ → ∞, the smallest eigenvalue of the non-unitary operator e^{−Hτ} dominates, and the state approaches a ground state of H, assuming the initial state has nonzero overlap with one. Thus, ITE presents another method for finding the minimum eigenvalue of an observable [232, 245]. For variational quantum ITE (VarQITE) [348], the time-dependent state |ψ(τ)⟩ is represented by a parameterized ansatz (i.e., a PQC), |φ(θ(τ))⟩, with a set of time-varying parameters θ(τ). The evolution is projected onto the circuit parameters, and the convergence depends on the expressibility of the ansatz [348]. However, quantum ITE using McLachlan's principle is typically expensive to implement because of the required metric-tensor computations and matrix inversion [240, 348]. Benedetti et al. developed an alternative, gradient-free method for VarQITE that avoids these computations [40]; their variational time-evolution approach is also applicable to real-time evolution. VarQITE can be used to solve a combinatorial optimization problem by letting H encode the combinatorial cost function, in a manner similar to QAOA and VQE, and it is likewise not restricted to QUBO. In addition, VarQITE has been used to prepare states that do not result from unitary evolution in real time, such as the quantum Gibbs state [352]. Specifically for finance, it has been used to simulate the Feynman-Kac partial differential equation [10]. We briefly mention that the variational principles for real-time evolution, mentioned in Section 4.7, can be used to approximate adiabatic trajectories [77]; this can potentially be used for combinatorial optimization as well.

Dürr and Høyer [120] presented an algorithm that applies quantum unstructured search (Section 4.1) to the problem of finding the global minimum of a black-box function [29]. This approach makes use of the work of Boyer et al. [57] to apply Grover's search when the number of marked states is unknown. Unlike the metaheuristics of quantum annealing and gate-based variational approaches, the Dürr-Høyer algorithm has a provable quadratic speedup in query complexity. That is, given a search space X of size N, the algorithm requires O(√N) queries to an oracle that evaluates the function f : X → R to be minimized, whereas a classical program would require O(N) evaluations. The requirements to achieve the quadratic speedup are the same as for Grover's algorithm. The quantum oracle in this case is O_{g_y} |x⟩ = (−1)^{g_y(x)} |x⟩, where g_y : X → {0, 1} maps x to 1 if f(x) ≤ y and to 0 otherwise, and y is the smallest value of f evaluated so far [29]. The Grover adaptive search framework of Bulger et al. [67] generalizes and extends the Dürr-Høyer algorithm. Gilliam et al. [137] proposed a method for efficiently implementing the oracle O_{g_y} by converting to Fourier space. This applies in particular to binary polynomials, and thus the method can be used to solve arbitrary constrained combinatorial problems with a quadratic speedup over classical exhaustive search.
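The following self-contained numpy sketch simulates the Dürr-Høyer threshold-update loop on a statevector, under illustrative assumptions: a random cost table over N = 16 items and a simplified version of the Boyer et al. randomized iteration schedule. It is a toy classical simulation of the algorithm's structure, not a hardware implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                                      # qubits; search space size N = 16
N = 2 ** n
f = rng.permutation(N).astype(float)       # illustrative black-box cost table

def grover_sample(marked, iters):
    """Statevector simulation: amplify marked entries, then sample once."""
    psi = np.full(N, 1 / np.sqrt(N))
    for _ in range(iters):
        psi = np.where(marked, -psi, psi)  # oracle O_{g_y}: phase flip
        psi = 2 * psi.mean() - psi         # diffusion about the mean
    return rng.choice(N, p=np.abs(psi) ** 2)

# Durr-Hoyer: repeatedly Grover-search for any x with f(x) < y, update y.
y_idx = rng.integers(N)                    # random initial threshold point
m, lam = 1.0, 6 / 5                        # schedule for unknown marked count
queries = 0
for _ in range(8 * int(np.sqrt(N)) + 10):
    marked = f < f[y_idx]
    if not marked.any():
        break                              # y is already the global minimum
    iters = int(rng.integers(int(m))) if int(m) > 1 else 1
    x = grover_sample(marked, iters)
    queries += iters
    if f[x] < f[y_idx]:
        y_idx, m = x, 1.0                  # success: tighten the threshold
    else:
        m = min(lam * m, np.sqrt(N))       # failure: lengthen the next run
print("minimum found:", f[y_idx], "at index", y_idx, "after ~", queries, "oracle calls")
```

The randomized iteration counts are what make the approach work when the number of marked states (here, the number of entries below the current threshold) is unknown.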
Quantum algorithms have been invented for common convex optimization problems [56] such as linear programming, second-order cone programming (SOCP), and semidefinite programming (SDP). These conic programs have a variety of financial applications, such as portfolio optimization, discussed in Section 6.4.1. The quantum linear systems algorithms from Section 4.5 have been used as subroutines to improve algorithms for linear programming, such as the simplex method [46] in the work by Nannicini [251] and interior-point methods (IPMs) [254] in the work by Kerenidis and Prakash [190]. The quantum IPM utilizes QLS solvers with tomography to accelerate the computation of the Newton linear system. Kerenidis and Prakash [195] also applied the quantum IPM to provide, under certain conditions, small polynomial speedups for the primal-dual IPM for SOCP [242]. Furthermore, various approaches using quantum computing to solve SDPs have been presented. Brandão [61], Brandão et al. [60], and van Apeldoorn et al. [323] proposed quantum improvements to the multiplicative-weights method of Arora and Kale [25] for SDP. Alternatively, Kerenidis and Prakash applied their quantum IPM to SDP [190].

Near-term quantum devices are expected to have a limited number of qubits for the foreseeable future. This is a major obstacle for real financial applications, which are large scale in many domains. The hybridization of quantum and classical computers using decomposition-based approaches is a prominent answer to this problem [300]. The idea behind this class of approaches is similar to what is done with numerical methods and high-performance computing: the main driving routine, executed on a classical machine, decomposes the problem into multiple parts; these subproblems are each solved separately on a quantum device; and the classical computer then combines the solutions received from the quantum computer. For example, in quantum local search for modularity optimization [301] and in an extension of it to a multilevel approach [322], the main driving routine identifies small subsets of variables and solves each subproblem on a quantum device. A similar decomposition approach for finding maximum graph cliques was developed by Chapuis et al. [75].

In this section we explore multiple applications of the quantum optimization algorithms presented above to financial problems. As mentioned in Section 2.2.2, portfolio optimization is typically formulated as a quadratic, usually constrained, optimization problem, with solutions that are restricted to discrete variables, continuous variables, or both. This section first focuses on combinatorial formulations using the algorithms mentioned in Section 6.1; as mentioned there, those algorithms generalize to integer optimization problems. The second part of this section focuses on convex formulations of the problem, solved by using algorithms from Section 6.2. The handling of mixed-integer programming, which is also a possible formulation of portfolio optimization (Section 2.2.2), was briefly mentioned at the beginning of Section 6.

Combinatorial Formulations. The first combinatorial formulation considered is risk minimization or hedging [98]. The risk term is quadratic and is derived from the covariance or correlation matrix computed by using historical pricing information. A simplified version of this problem can be formulated as the following QUBO:

min_{x ∈ B^N} x^T Σ x,    (7)

where Σ ∈ R^{N×N} is the covariance or correlation matrix.
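As written, Equation (7) is minimized by the empty portfolio, so in practice a constraint is imposed, anticipating the budget modification discussed next. The minimal sketch below uses an assumed covariance matrix estimated from fake return history and an assumed penalty enforcing exactly K selected assets; all numbers are illustrative.

```python
import itertools
import numpy as np

# Risk-minimization QUBO, Equation (7), plus an assumed penalty term that
# enforces selecting exactly K of N assets.
rng = np.random.default_rng(3)
N, K, penalty = 6, 3, 10.0
history = rng.normal(size=(50, N))         # fake daily return history
Sigma = np.cov(history, rowvar=False)      # covariance matrix of the assets

def cost(x):
    budget_violation = (x.sum() - K) ** 2
    return float(x @ Sigma @ x) + penalty * budget_violation

best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=N)),
           key=cost)
print("selected assets:", np.flatnonzero(best),
      "risk:", float(best @ Sigma @ best))
```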
This problem can be modified to include budget constraints. With regard to this problem, Kalra et al. [183] used a time-indexed correlation graph in which the vertices represent assets and an edge between vertices represents the existence of a significant correlation between them. The price changes are modeled from data collected daily. Instead of continuous correlation values, Kalra et al. applied a threshold to the correlations to decide whether to create edges between assets, thereby creating sparsity in the graph. One approach they took to solve risk minimization was formulating the problem as finding a maximum independent set in the constructed graph, which is similar to Equation (7). They also presented a modified formulation suitable for the minimum graph-coloring problem. Instead of only choosing whether to pick an asset, Rosenberg and Rounds [285] used binary variables to represent whether to hold long or short positions; this approach is known as long-short minimum-risk parity optimization. The authors represented the problem by an Ising model, which is equivalent to a QUBO problem, as follows:

min_{z ∈ {−1,+1}^N} z^T W Σ W z,

where W is an N × N diagonal matrix with the entries of w ∈ [0, 1]^N, such that ||w||_1 = 1, on the diagonal; w contains a fixed weight for each asset j indicating the proportion of the portfolio that should hold a long or short position in j. In both cases, the authors utilized quantum annealing (Section 6.1.1).

Instead of just risk minimization, the problem can be reformulated to specify a desired fixed expected return µ ∈ R,

min_{x ∈ B^N} x^T Σ x  subject to  r^T x = µ,    (8)

or to maximize the expected return,

max_{x ∈ B^N} r^T x − q x^T Σ x  subject to  1^T x = β,    (9)

where x is a vector of Boolean decision variables, r ∈ R^N contains the expected returns of the assets, Σ is the same as above, q > 0 is the risk level, and β ∈ N is the budget. In both cases, the Lagrangian of the problem, where the dual variables are used to encode user-defined penalties, can be mapped to a QUBO (Section 6.1). These formulations are known as mean-variance portfolio optimization problems [231], and more constraints can be added to them. Hodson et al. [169] and Slate et al. [308] both utilized QAOA to solve a problem similar to Equation (9); they allowed for three possible decisions (long position, short position, or neither) and utilized mixing operators (Section 6.1.2) that account for constraints. Alternatively, Gilliam et al. [137] utilized an extension of Grover adaptive search (Section 6.1.5) to solve Equation (9) with a quadratic speedup over classical unstructured search. Rosenberg et al. [286] and Mugel et al. [247] solved dynamic versions of Equation (9), in which portfolio decisions are made for multiple time steps. Additionally, an analysis benchmarking quantum annealers for portfolio optimization can be found in a paper by Grant et al. [148]. Furthermore, an approach using reverse quantum annealing (Section 6.1.1) was proposed by Venturelli and Kondratyev (Section 8.2). Note that any of the methods from Section 6.1 can be applied whenever quantum annealing can. Currently, however, because quantum annealers possess more qubits than existing gate-based devices do, more experimental results with larger problems have been obtained with annealers.

Convex Formulations. We now consider formulations of the portfolio optimization problem that are convex. The original formulation of mean-variance portfolio optimization by Markowitz [231] was convex and did not contain the integer constraints included in the discussion above.
Thus, if the binary variable constraints are relaxed, the optimization problem represented by Equation (8) can be reformulated as a convex quadratic program with linear equality constraints fixing both the desired return and the budget:

min_{w ∈ R^N} w^T Σ w  subject to  r^T w = µ,  p^T w = ξ,    (10)

where w ∈ R^N is the allocation vector; the ith entry of w represents the proportion of the portfolio invested in asset i. This is in contrast to the combinatorial formulations, where the result was a Boolean vector. Here p ∈ R^N and r ∈ R^N are the vectors of the assets' prices and returns, respectively; ξ ∈ R is the budget; and µ ∈ R is the desired expected return. In contrast to Equation (8), Equation (10) admits a closed-form solution [193]. However, more problem-specific conic constraints and cost terms (assuming these terms are convex) can be added, increasing the complexity of the problem, in which case it can potentially be solved with more sophisticated algorithms (e.g., interior-point methods). As mentioned, quantum computing provides polynomial speedups for common cone programs (Section 6.2). Equation (10) with additional positivity constraints on the allocation vector can be represented as an SOCP and solved with quantum IPMs [193]. Considering the exact formulation in Equation (10), however, we might be able to obtain a superpolynomial speedup if we relax the problem even further. Rebentrost and Lloyd [276] presented a relaxed problem in which the goals were to sample from the solution and/or compute statistics of it. Their approach was to apply the method of Lagrange multipliers to Equation (10) and obtain a linear system. This system can be formulated as a quantum linear systems problem and solved with QLS methods (Section 4.5). The result of solving the QLSP (Section 4.5) is a normalized quantum state corresponding to the solution, found with time complexity polylogarithmic in the system size N. Thus, if only partial information about the solution is required, QLS methods potentially provide an exponential speedup in N over all known classical algorithms for this scenario, although the sparsity and well-conditioning assumptions on the matrix mentioned in Section 4.5 must be met to obtain this speedup.

A swap is a contract in which two parties agree to exchange cash flows periodically for a specific term [284]. Common types of swaps are credit default swaps [313], foreign exchange currency swaps, and interest rate swaps [174]. The simplest example is the fixed-to-floating interest rate swap, in which two parties exchange the responsibility of paying fixed-rate and floating-rate interest payments on a principal amount called the notional value [48]. Potential reasons to enter into such a contract include hedging and taking advantage of the opposite party's comparative advantage [48]. Once an agreement between the parties has been made, a clearing house converts the agreement into two separate agreements with the clearing house, which potentially has multiple swaps with multiple parties. In the case that multiple cash flows cancel, the clearing house would like to net all of the contracts that cancel into a new contract for only the net flow [119]; this reduces the risk exposure associated with holding multiple contracts. Rosenberg et al. [284] formulated as a QUBO the task of finding a set of "nettable" swaps associated with a single counterparty that maximizes the total notional value.
This optimization problem can be solved in parallel for each subset of potentially nettable swaps. Since swaps can be netted for potentially many reasons, the set of combinations can grow significantly. The procedure proposed by the authors is as follows. First, partition the set of swaps associated with a counterparty into subsets that can be netted. Second, for a subset M in the partition, with N := |M|, construct the following QUBO:

max_{x ∈ B^N} Σ_i p_i x_i − α (Σ_i d_i p_i x_i)² − β Σ_{i,j} C_ij x_i x_j,

where x ∈ B^N is a vector of decision variables indicating whether to include each swap in the netted subset, p ∈ R^N is the vector of notional values associated with the swaps, and d ∈ {+1, −1}^N indicates the direction of the fixed-rate interest payments: +1 if the clearing house pays the counterparty and −1 otherwise. C_ij signifies how incompatible two potentially nettable swaps are, and α and β weigh the importance of the second and third terms. The combined goal of the three terms is to maximize the total notional value netted (first term), such that the notional values of the selected swaps cancel (second term) and the swaps are compatible (third term). The QUBO problems were solved by utilizing quantum annealing (Section 6.1.1).

An arbitrage opportunity occurs when one can profit from buying and selling similar financial assets in different markets because of price differences [305]. Arbitrage is due to a lack of market equilibrium, and acting on arbitrage opportunities should shift the market back into equilibrium [107]. Thus, it is to one's advantage to identify these opportunities as quickly and efficiently as possible. An example is currency arbitrage, where one buys and sells currencies in foreign exchange (FX) markets such that, upon ending with the initial currency again, one has profited from the differences in FX (conversion) rates [305]. One way to formulate the problem of arbitrage detection is to view the different assets as the vertices of a graph whose weighted edges indicate the negative logarithm of the conversion rate. The problem then reduces to identifying negative cycles [287], for which multiple classical algorithms exist [81]. However, the arbitrage opportunity detected in this way is not necessarily the optimal one. Soon and Ye [310] formulated the task of identifying the optimal currency arbitrage opportunity in the graph structure mentioned above as a binary linear programming problem; this problem is NP-hard (Section 6.1). Rosenberg [287] reformulated it as a QUBO, obtained by converting the constrained binary linear program of Soon and Ye into an unconstrained problem, and solved it using quantum annealing. Alternatively, it can be solved by utilizing the other techniques for combinatorial optimization from Section 6.1. In addition, Rosenberg provided an alternative model that allows for arbitrage decisions that revisit the same asset (i.e., the same vertex), which by definition is not allowed in a cycle. Solving this formulation effectively consists of selecting the assets to buy or sell and the point in the arbitrage loop at which the buying or selling occurs. However, this results in a problem with significantly more binary variables [287].

Creditworthiness identification is an important problem in finance (Section 2.2.2). With large sums of money being loaned globally, it is important to be able to determine the key independent features that are influential in determining the creditworthiness of the requester.
The task of identifying features that balance independence and influence can be expressed as a QUBO problem. The problem setup for this combinatorial feature-selection task is as follows. Assume that from an initial set of N features, we want to select K to use. The data is cleaned and arranged in a matrix U ∈ R^{M×N}, where each column represents a feature and each row contains the values of the features for one of the M past credit recipients. The past decisions that were made regarding creditworthiness are represented in a column vector v ∈ B^M. The goal is to find a set of columns of U that are strongly correlated with v but not with each other. The next step is to construct a quadratic objective function that is a convex combination of terms evaluating the influence and independence of the features; the constraint of selecting K features can be accounted for by adding a penalty term to the QUBO. Milne et al. [238] utilized quantum annealing to solve the QUBO, but the problem can be solved by using any of the techniques mentioned in Section 6.1. Classical or quantum machine learning algorithms (Section 7) can then be used to classify new applicants based on the features selected [238].

A financial network can be viewed as a collection of entities whose valuations are interconnected with the valuations of other members of the network. The analysis of financial networks is important for being able to predict crises. Elliott et al. [122] developed a problem formulation for counting the number of institutions that fail in a financial network. Their proposed problem was found to be NP-hard [162]. If solved, however, it allows for analyzing how small changes in the network propagate through a web of complex interdependencies, something that is difficult [147] with alternative techniques for crash prediction [124]. By modeling a financial system, mapping the system to a QUBO problem, and perturbing the system to find the ground state it reaches, we can determine the effect of small perturbations on the market. We briefly discuss the formulation of Elliott et al. [122]. Suppose the financial network consists of N institutions and M assets, with prices p ∈ R^M_{≥0}, that the institutions can own. D ∈ R^{N×M}_{≥0} contains the percentage of each asset owned by each institution. In addition, the institutions can have a stake (cross holdings) in each other, represented by the matrix C ∈ R^{N×N}_{≥0}. The diagonal entries of C are set to 0, and self-ownership is represented by the diagonal matrix C̃ such that C̃_jj := 1 − Σ_i C_ij. The equity valuations v at equilibrium satisfy

v = C̃ (I − C)^{−1} (D p − b(v, p)),    (11)

where b(v, p) is a term that decreases the equity value further if it falls below a certain threshold; this signifies a loss of investor confidence or the inability to pay operating costs, that is, failure [122]. Orús et al. [261] considered the problem of minimizing a quadratic function of the difference between the left and right sides of Equation (11), which allowed a variational method to be used; the minimum is attained exactly when Equation (11) is satisfied. They then formulated this as a QUBO. Thus, after perturbing the system, the new equilibrium can be found by solving the QUBO, and the number of institutions that go into failure can be counted as a function of the perturbation to assess stability. The valuations v are continuous variables that can be approximated by using finite binary expansions.
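As an illustration of such an encoding (with an assumed range and bit count), a continuous valuation v ∈ [0, v_max) can be replaced by a fixed-point binary expansion, so that any polynomial in v becomes a polynomial in binary variables:

```python
import numpy as np

# Fixed-point binary expansion of a continuous variable v in [0, v_max):
# v ~ v_max * sum_k b_k 2^{-(k+1)}, with bits b_k in {0, 1}.  The number of
# bits K and the range v_max are illustrative assumptions.
K, v_max = 6, 10.0
weights = v_max * 2.0 ** -(np.arange(K) + 1)

def decode(bits):
    """Map a bit vector back to its continuous approximation."""
    return float(weights @ bits)

bits = np.array([1, 0, 1, 1, 0, 1])
print(f"encoded value ~ {decode(bits):.4f}, resolution = {v_max / 2**K:.4f}")

# Substituting the expansion into a quadratic function of v yields a
# quadratic (QUBO-type) function of the bits, since
# v^2 = sum_{k,l} w_k w_l b_k b_l and b_k^2 = b_k for binary variables.
```

The resolution improves exponentially with the number of bits, at the cost of one binary variable per bit per continuous quantity.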
This method converts the initial problem into a binary optimization problem. Orús et al. solved the QUBO using quantum annealing, and Fellner et al. [130] solved it with QAOA. However, the right side of Equation (11) contains terms that are not quadratic; thus, the resulting diagonal observable encoding it is not two-local (more specifically, not Ising). Orús et al. utilized the approach of Chancellor et al. [74] to convert the higher-order terms into two-local terms, at the cost of extra variables. For the approach of Fellner et al., the Hamiltonian did not need to be reduced to two-local terms (Section 6.1.2).

The field of machine learning has become a crucial part of various applications in the finance industry. Rich historical finance data and advances in machine learning make it possible, for example, to train sophisticated models to detect patterns in stock markets, find outliers and anomalies in financial transactions, automatically classify and categorize financial news, and optimize portfolios. Quantum algorithms for machine learning have exhibited speedups compared with their classical counterparts, achieved mostly by applying the quantum algorithms introduced in Section 4.5 as subroutines. The applicability of these algorithms may be limited, however, since most of them begin by loading the data into a quantum system, which can require exponential time [1, 270]. This issue can be addressed in theory by having access to qRAM (Section 4.5); as of today, however, no concrete implementation exists. Other quantum algorithms, such as VQAs (Section 4.7), consider another type of enhancement for machine learning: quantum models might be able to generate correlations that are hard to represent classically [176, 233]. However, such models face many challenges, including trainability, accuracy, and efficiency, that need to be addressed in order to maintain the hope of achieving quantum advantage when scaling up near-term quantum devices [71]. Nevertheless, quantum machine learning shows great promise for finding improvements and speedups that vastly elevate current capabilities.

Regression is the process of fitting a numeric function to the training data set. It is often used to understand how a value changes when the attributes vary, and it is a key tool for economic forecasting. Typically, the problem is solved by least-squares fitting, where the goal is to find a continuous function that approximates a set of N data points {x_i, y_i}. The fit function is of the form

f(x; λ) = Σ_{j=1}^{M} λ_j f_j(x),

where λ is a vector of parameters and each f_j(x) is a function of x that can be nonlinear. The optimal parameters can be found by minimizing the least-squares error

E = ||F λ − y||²,

where the N × M matrix F is defined through F_ij = f_j(x_i); the minimizer is λ* = F⁺ y, with F⁺ the Moore-Penrose pseudoinverse. Wiebe et al. [341] proposed a quantum algorithm for this problem that encodes the optimal parameters of the model in the amplitudes of a quantum state and estimates the quality of the least-squares fit by building on the HHL algorithm (Section 4.5). The algorithm consists of three subroutines: a quantum algorithm for applying the pseudoinverse of F, an algorithm that estimates the fitting quality, and an algorithm for learning the parameters λ. Later, Wang [334] proposed using the CKS method (Section 4.5) for matrix inversion, which has better dependence on the output precision than HHL has, and used amplitude estimation (Section 4.4) to estimate the optimal parameters.
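For reference, the classical computation these quantum subroutines accelerate is the pseudoinverse solve sketched below (basis functions and data are illustrative); the quantum algorithms return λ* encoded in amplitudes rather than as an explicit vector.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative basis functions f_j(x) and noisy synthetic data.
def design_matrix(x):
    return np.column_stack([np.ones_like(x), x, np.sin(x)])  # M = 3 basis fns

x = np.linspace(0, 4, 40)
y = 1.0 + 0.5 * x + 2.0 * np.sin(x) + rng.normal(0, 0.1, x.size)

F = design_matrix(x)               # N x M matrix F_ij = f_j(x_i)
lam = np.linalg.pinv(F) @ y        # lambda* = F^+ y (Moore-Penrose pseudoinverse)
residual = np.linalg.norm(F @ lam - y) ** 2
print("fitted parameters:", np.round(lam, 3), " squared error:", round(residual, 4))
```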
Kerenidis and Prakash developed algorithms utilizing QSVE (Section 4.5) to perform coherent gradient descent for ordinary and weighted least-squares regression [189]. Moreover, quantum annealing (Section 6.1.1) has been used to solve least-squares regression when formulated as a QUBO (Section 6.1) [105].

A Gaussian process (GP) is another widely used model for regression in supervised machine learning. It has been used, for example, to predict the price behavior of commodities in financial markets. Given a training set of N data points {x_i, y_i}, the goal is to model a latent function f(x) such that

y = f(x) + ε,

where ε ∼ N(0, σ_n²) is independent and identically distributed Gaussian noise. A practical implementation of the Gaussian process regression (GPR) model needs to compute a Cholesky decomposition and therefore requires O(N³) running time. Zhao et al. [349] proposed using the HHL algorithm to speed up the computations in GPR. By repeated sampling of the results of specific quantum measurements on the output states of the HHL algorithm, the mean predictor and the associated variance can be estimated with bounded error, with potentially an exponential speedup over classical algorithms.

Classification is the process of placing objects into predefined groups; this type of process is also called pattern recognition. This area of machine learning can be used effectively in risk management and large-scale data processing when the group information is of particular interest, for example, in creditworthiness identification and fraud detection.

In machine learning, support vector machines (SVMs) are supervised learning models used to analyze data for classification and regression analysis. Given M training data elements of the form {(x_j, y_j) : x_j ∈ R^N, y_j = ±1}, the task of the SVM is to classify a vector into one of two classes by finding a maximum-margin hyperplane with normal vector w that separates the two classes; the margin is given by two parallel hyperplanes separated by the maximum possible distance 2/||w||_2. Typically, the Lagrangian dual formulation is used: maximize

L(α) = Σ_{j=1}^{M} α_j y_j − (1/2) Σ_{j,k=1}^{M} α_j K_jk α_k,

where α = (α_1, ..., α_M)^T, subject to the constraints Σ_{j=1}^{M} α_j = 0 and y_j α_j ≥ 0. Here we have a key quantity for supervised machine learning [248], the kernel matrix K_jk = k(x_j, x_k) = x_j · x_k. Solving the dual form involves evaluating the dot products in the kernel matrix and then finding the optimal parameters α_j by quadratic programming, which takes O(M³) time in the non-sparse case. A quantum SVM was first proposed by Rebentrost et al. [277], who showed that a quantum SVM can be implemented with O(log(MN)) runtime in both the training and classification stages, under the assumptions that there are oracles for the training data that return the quantum states |x_j⟩ = (1/||x_j||_2) Σ_{k=1}^{N} (x_j)_k |k⟩, that the norms ||x_j||_2 and the labels y_j are given, and that the states are constructed by using qRAM [139, 218]. The core of this quantum classification algorithm is a non-sparse matrix exponentiation technique for efficiently performing a matrix inversion of the training-data inner-product (kernel) matrix. Lloyd et al. [218] showed that, effectively, a low-rank approximation of the kernel matrix is used, due to the eigenvalue-filtering procedure of HHL [160]. As mentioned in Section 6.2, Kerenidis et al. proposed a quantum enhancement to second-order cone programming that was subsequently used to provide a small polynomial speedup in the training of the classical ℓ1-SVM [195].
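The least-squares SVM variant that the quantum algorithm of Rebentrost et al. builds on replaces the quadratic program with a single linear system over the kernel matrix, which is the step the quantum algorithm performs by kernel-matrix inversion. A minimal numpy sketch, with assumed toy data and an assumed regularization parameter γ, follows.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-class data (illustrative), linear kernel K_jk = x_j . x_k.
M = 20
X = np.vstack([rng.normal(-1.5, 0.6, (M // 2, 2)),
               rng.normal(+1.5, 0.6, (M // 2, 2))])
y = np.r_[-np.ones(M // 2), np.ones(M // 2)]

K = X @ X.T                                  # kernel (inner-product) matrix
gamma = 10.0                                 # assumed regularization parameter

# Least-squares SVM: training reduces to one (M+1)-dimensional linear solve.
A = np.zeros((M + 1, M + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(M) / gamma
sol = np.linalg.solve(A, np.r_[0.0, y])
b, alpha = sol[0], sol[1:]

def classify(x_new):
    # sign(sum_j alpha_j (x_j . x_new) + b)
    return np.sign(alpha @ (X @ x_new) + b)

print("train accuracy:", np.mean([classify(x) == t for x, t in zip(X, y)]))
```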
In addition, classical SVMs using quantum-enhanced feature spaces to construct quantum kernels have been proposed by Havlíček et al. [161, 296].

The k-nearest-neighbors technique is an algorithm that can be used for classification or regression. Given a data set, the algorithm assumes that data points close together are similar and uses distance calculations to group nearby points and assign a class based on the commonality of the neighbors. Classically, the algorithm works as follows: (1) compute the distance between the query example and each data point in the set, (2) sort the data points from closest to farthest from the query example, and (3) choose the first k entries; for regression, return the mean of the k labels, and for classification, return their mode. The computationally expensive step is computing the distances between elements of the data set. A quantum nearest-neighbor classification algorithm was proposed by Wiebe et al. [340], who used the Euclidean distance and the inner product as distance metrics. The distance between two points is encoded in the amplitude of a quantum state; amplitude amplification (Section 4.2) is used to amplify the probability of creating a state that stores the distance estimate in a register, without measuring it, and the Dürr-Høyer algorithm (Section 6.1.5) is then applied. Later, Ruan et al. proposed using the Hamming distance as the metric [288]. More recently, Basheer et al. proposed using a quantum k-maxima-finding algorithm to find the k nearest neighbors, with the fidelity and the inner product as measures of similarity [31].

Clustering, or cluster analysis, is an unsupervised machine learning task that explores and discovers the grouping structure of data. In finance, cluster analysis can be used to develop a trading approach that helps investors build a diversified portfolio. It can also be used to analyze stocks such that stocks with highly correlated returns fall into one basket. k-means clustering (also known as Lloyd's algorithm) [228] clusters training examples into k clusters based on their distances to each of the k cluster centroids. The algorithm is as follows: (1) choose initial values for the k centroids, randomly or by some chosen method, (2) assign each vector to the closest centroid, and (3) recompute the centroids for each cluster; steps (2) and (3) are repeated until the clusters converge. The problem of optimally clustering data is NP-hard, and finding the optimal clustering with this algorithm can be computationally expensive. Quantum computing can be leveraged to accelerate a single step of k-means. For each iteration, the quantum Lloyd's algorithm [218] estimates the distances to the centroids in O(M log(MN)) time, whereas classical algorithms take O(M²N), where M is the number of data points and N is the dimension of the vectors. Wiebe et al. [340] showed that, with the same technique they used for the quantum nearest-neighbor algorithm described in Section 7.2, a single k-means step can be performed with a number of queries scaling as O(M √k log(k)/ε). While a direct classical method requires O(kMN), the quantum solution is substantially better when kN is large. Kerenidis et al. [191] proposed q-means, the quantum equivalent of δ-k-means. The q-means algorithm has a running time that depends polylogarithmically on the number of data points.
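The sketch below spells out the single Lloyd iteration whose distance computations the quantum subroutines accelerate; the data, cluster count, and iteration budget are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative data: M points in N dimensions, k clusters.
M, N, k = 300, 2, 3
X = np.vstack([rng.normal(c, 0.4, (M // k, N)) for c in (-2.0, 0.0, 2.0)])
centroids = X[rng.choice(M, k, replace=False)]

for _ in range(10):
    # Step (2): assign each point to its closest centroid -- the O(M k N)
    # distance computation that the quantum subroutines estimate more cheaply.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # Step (3): recompute each centroid as the mean of its assigned points,
    # keeping the old centroid if a cluster happens to be empty.
    centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])

print("final centroids:\n", np.round(centroids, 3))
```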
A NISQ version of k-means clustering has been proposed by Khan et al. [196]. Spectral clustering methods [255] have had great success in clustering tasks but suffer from a fast-growing running time of $O(N^3)$, where $N$ is the number of points in the data set. These methods obtain a clustering from the principal eigenvectors of the data matrix or the Laplacian matrix. Daskin [103] employed QAE (Section 4.4) and QPE (Section 4.3) for spectral clustering. Apers et al. [22] proposed quantum algorithms using the graph Laplacian for machine learning applications, including spectral k-means clustering. More recently, Kerenidis and Landman [187] proposed a quantum algorithm to perform spectral clustering. They first created a quantum state corresponding to the projected Laplacian matrix, which is efficient assuming qRAM access, and then applied the q-means algorithm [191]. Both steps depend polynomially on the number of clusters $k$ and polylogarithmically on the dimension of the input vectors. The primary linear technique for dimensionality reduction, principal component analysis (PCA), performs a linear mapping of the data to a lower-dimensional space in such a way that the variance of the data in the low-dimensional representation is maximized. Mathematically, it amounts to finding the dominant eigenvalues and eigenvectors of a very large matrix. The standard context for PCA involves a data set with observations on $M$ variables for each of $N$ objects. The data defines an $N \times M$ data matrix $X$, in which the $j$th column is the vector $\vec{x}_j$ of observations on the $j$th variable. We aim to find a linear combination of the columns of the matrix $X$ with maximum variance. This process boils down to finding the largest eigenvalues and corresponding eigenvectors of the covariance matrix (a minimal classical sketch is given below). Lloyd et al. [219] proposed quantum PCA. In particular, they showed that multiple copies of a quantum system with density matrix $\rho$ can be used to construct the unitary transformation $e^{-i\rho t}$, which leads to revealing, in quantum form, the eigenvectors corresponding to the large eigenvalues of an unknown low-rank density matrix. For data sets that are high dimensional, incomplete, and noisy, the extraction of information is generally challenging. Topological data analysis (TDA) is an approach to study qualitative features and analyze and explore the complex topological and geometric structures of data sets. Persistent homology (PH) is a method used in TDA to study qualitative features of data that persist across multiple scales. Lloyd et al. [220] proposed a quantum algorithm to compute the Betti numbers, that is, the numbers of connected components, holes, and voids, in PH. Essentially, the algorithm operates by finding the eigenvectors and eigenvalues of the combinatorial Laplacian and estimating the Betti numbers to all orders and to accuracy $\delta$ in time $O(n^5/\delta)$. The most significant speedup is for dense clique complexes [156]. An improved version as well as a NISQ version of the quantum algorithm for PH has been proposed by Ubaru et al. [321].
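The classical computation that quantum PCA targets is compact enough to state in full; the following numpy sketch carries out exactly the procedure described above: center the data, form the covariance matrix, and keep the dominant eigenpairs.

```python
import numpy as np

def pca(X, n_components=2):
    """Classical PCA via eigendecomposition of the covariance matrix."""
    Xc = X - X.mean(axis=0)               # center each variable
    cov = Xc.T @ Xc / (len(X) - 1)        # covariance matrix
    evals, evecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:n_components]
    return Xc @ evecs[:, order], evals[order]  # projected data, variances
```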
Unsupervised generative modeling is at the forefront of deep learning research. The goal of generative modeling is to model the probability distribution of observed data and generate new samples accordingly. The quantum circuit Born machine (QCBM) [39] directly exploits the inherent probabilistic interpretation of quantum wave functions and represents a probability distribution using a quantum pure state instead of a thermal distribution. Liu et al. [216] developed an efficient gradient-based learning algorithm to train the QCBM. Numerical simulations suggest that, in the task of learning a distribution, the QCBM at least matches the performance of the restricted Boltzmann machine (Section 7.5.3) and demonstrates superior performance as the model scales [100]. The QCBM has also been used to model copulas [350], a family of multivariate distributions with uniform marginals. Bayesian networks are probabilistic graphical models [205] representing random variables and conditional dependencies via a directed acyclic graph, where each edge corresponds to a conditional dependency and each node corresponds to a unique random variable. Bayesian inference on such a graph has many applications, such as prediction, anomaly detection, diagnostics, reasoning, and decision-making under uncertainty. Despite these broad applications, exact inference is #P-hard. Mapping this problem to a quantum Bayesian network is plausible since quantum mechanics naturally describes probability distributions. Tucci [320] introduced the quantum Bayesian network as an analogue to classical Bayesian networks, using complex quantum amplitudes to represent the conditional probabilities of a classical Bayesian network. Borujeni et al. [53] proposed a procedure to design a quantum circuit representing a generic discrete Bayesian network: (1) map each node in the Bayesian network to one or more qubits, (2) map the conditional probabilities of each node to the probability amplitudes associated with the qubit states, and (3) obtain the desired probability amplitudes through controlled rotation gates (a minimal sketch of this construction is given below). In later work they tested it on IBM quantum hardware [54]. Applications in finance include portfolio simulation [203] and decision-making modeling [244]. A Boltzmann machine is an undirected probabilistic graphical model (i.e., a Markov random field) [205] inspired by thermodynamics. Classically, inference is usually performed by utilizing Markov chain Monte Carlo methods to sample from the model's equilibrium distribution (i.e., the Boltzmann distribution) [145]. Because of the intractability of the partition function of the general Boltzmann machine, the graph structure is typically restricted to be bipartite, resulting in the restricted Boltzmann machine [168, 292]. These methods were more popular prior to the current neural-network-based deep learning revolution [145]. Quantum Boltzmann machines have been implemented by utilizing quantum annealing (Section 6.1.1) [20, 38, 113]. This is possible because of the annealer's ability to sample ground states of Ising models. In addition, a gate-based variational approach using VarQITE (Section 6.1.4) has been designed by Zoufal et al. [352]. Since quantum states are always normalized, this approach allows for implementing Boltzmann machines with more complex graph structures.
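To illustrate the three-step Borujeni et al. mapping referenced above, here is a minimal Qiskit sketch for a hypothetical two-node network A &gt; B; the probability values are made up for illustration, and real networks require multi-controlled rotations for nodes with several parents:

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def angle(p):
    """Ry angle that places sqrt(p) on the |1> amplitude of a qubit."""
    return 2 * np.arcsin(np.sqrt(p))

# Hypothetical network: P(A=1)=0.3, P(B=1|A=0)=0.2, P(B=1|A=1)=0.7.
qc = QuantumCircuit(2)          # qubit 0 -> node A, qubit 1 -> node B
qc.ry(angle(0.3), 0)            # step (2): root-node marginal
qc.x(0)
qc.cry(angle(0.2), 0, 1)        # step (3): B conditioned on A=0
qc.x(0)
qc.cry(angle(0.7), 0, 1)        # step (3): B conditioned on A=1

probs = Statevector(qc).probabilities_dict()
# probs['11'] (B=1, A=1) should equal 0.3 * 0.7 = 0.21
```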
Neural networks (NNs) have achieved many practical successes over the past decade, driven in large part by the computational power of GPUs. At the same time, the growing interest in quantum computing has led researchers to develop several variants of quantum neural networks (QNNs). However, many challenges arise in developing quantum analogues to the required components of NNs, such as the perceptron, an optimization algorithm, and a loss function. A few different approaches to QNNs have been developed; however, most follow the same steps [35]: initialize a network architecture, specify a learning task, implement a training algorithm, and simulate the learning task. Current proposals for QNNs, discussed in the following subsections, involve a diverse collection of ideas with varying degrees of proximity to classical neural networks. Cao et al. [69] proposed the realization of a quantum neuron and demonstrated its applicability as a building block of quantum neural networks. Their approach uses repeat-until-success techniques for quantum gate synthesis. Kerenidis et al. developed a quantum analogue to the orthogonal feedforward neural network [214]; the orthogonality of the weight matrix is naturally preserved by using the reconfigurable beam splitter gate [194]. In addition, quantum algorithms can potentially speed up the operations used by classical feedforward neural networks [11]. Another quantum feedforward NN has been developed by Beer et al. [36]. A different proposed path to designing quantum analogues to feedforward NNs on near-term devices is to use parameterized quantum circuits (Section 4.7) [126, 161, 163, 197]; a minimal sketch of this template is given below. Various attempts have been made to analyze the expressive power of circuit-based QNNs [2, 176, 296, 297]. Interestingly, the circuit-based QNN can be viewed as a kernel method with a quantum-enhanced feature map [161, 176, 296]. Furthermore, analysis has been performed in the neural tangent kernel framework [179] to show the benefits of using a quantum-enhanced feature map [161] in conjunction with a classical NN [250]. Convolutional neural networks (CNNs) were originally developed by LeCun et al. [211] in the 1980s. Kerenidis et al. [192] proposed a quantum convolutional neural network as a shallow circuit, completely reproducing the classical CNN, including nonlinearities and pooling operations. They used a new quantum tomography algorithm with $\ell_\infty$-norm guarantees and new applications of probabilistic sampling in the context of information processing. Quantum Fourier convolutional neural networks have been developed by Shen and Liu [303]. Cong et al. [96] developed circuit-based quantum analogues to CNNs. Generative adversarial networks (GANs) represent a powerful tool for classical machine learning: a generator tries to create statistics for data that mimic those of the true data set, while a discriminator tries to discriminate between the true and fake data. Lloyd and Weedbrook [217] introduced the notion of quantum generative adversarial networks (QuGANs), where the data consists either of quantum states or of classical data and the generator and discriminator are equipped with quantum information processors. They showed that when the data consists of samples of measurements made on high-dimensional spaces, quantum adversarial networks may exhibit an exponential advantage over classical adversarial networks.
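The following Qiskit sketch shows the parameterized-quantum-circuit template that most circuit-based QNN proposals share: a data-encoding layer followed by trainable rotation and entangling layers, with a Pauli-Z expectation value as the model output. The specific layer structure and readout are illustrative assumptions, not a particular proposal from the literature:

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def qnn_circuit(n_qubits, n_layers, x, theta):
    """Angle-encode features x, then apply variational ry/cx layers;
    theta must supply n_layers * n_qubits rotation angles."""
    qc = QuantumCircuit(n_qubits)
    for q in range(n_qubits):
        qc.ry(x[q], q)                 # data-encoding layer
    t = iter(theta)
    for _ in range(n_layers):
        for q in range(n_qubits):
            qc.ry(next(t), q)          # trainable single-qubit rotations
        for q in range(n_qubits - 1):
            qc.cx(q, q + 1)            # entangling layer
    return qc

def predict(x, theta, n_qubits=3, n_layers=2):
    """<Z> on qubit 0 serves as the model output in [-1, 1]."""
    qc = qnn_circuit(n_qubits, n_layers, x, theta)
    p = Statevector(qc).probabilities([0])
    return p[0] - p[1]

rng = np.random.default_rng(1)
print(predict(np.array([0.3, 1.1, -0.4]), rng.normal(size=6)))
```

In a hybrid training loop, a classical optimizer would update theta to minimize a loss evaluated on such expectation values.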
Graph neural networks have recently attracted considerable attention from the machine learning community. Effective neural networks are designed by modeling interactions (edges) between objects of interest (nodes) and assigning features to nodes and edges. The approach exploits real-world interconnectedness in finance. Graph neural networks are particularly helpful for training low-dimensional vector-space representations of nodes that encode both complex graph structure and node/edge features. Verdon et al. [331] introduced quantum graph neural networks (QGNNs), a new type of quantum neural network ansatz that takes advantage of the graph structure of quantum processes and makes them effective on distributed quantum systems. In addition, quantum graph recurrent neural networks [90], quantum graph convolutional neural networks, and quantum spectral graph convolutional neural networks are introduced as special cases. A networked quantum system is described by an underlying graph $G = (V, E)$ in which each node $v$ corresponds to a quantum subsystem with Hilbert space $\mathcal{H}_v$, such that the entire system forms the Hilbert space $\mathcal{H} = \bigotimes_{v \in V} \mathcal{H}_v$. This can be extended to edges ($\mathcal{H}_E$), and each node can represent a more complicated quantum system than a single qubit (e.g., several qubits or an entire part of a distributed quantum system). With this representation, the edges of the graph describe communication between the vertex subspaces. The QGNN ansatz is a parameterized quantum circuit on a network, consisting of a sequence of different Hamiltonian evolutions, with the whole sequence repeated several times with variational parameterization. In this setting, the parameterized Hamiltonians have a graph topology of interactions. Reinforcement learning (RL) is an area of machine learning that considers how agents ought to take actions in an environment in order to maximize their reward. It has been applied to many financial problems, including pricing and hedging of contingent claims, investment and portfolio allocation, buying and selling a portfolio of securities subject to transaction costs, market making, asset liability management, and optimization of tax consequences. The standard framework of reinforcement learning is based on discrete-time, finite-state Markov decision processes (MDPs). An MDP comprises five components, $\{S, A_i, p_{ij}(a), r_{i,a}, V\}$ for $i, j \in S$ and $a \in A_i$, where $S$ is the state space; $A_i$ is the action space for state $i$; $p_{ij}(a)$ is the transition probability; $r$ is the reward function, defined as $r : \Gamma \to (-\infty, +\infty)$ with $\Gamma = \{(i, a) \mid i \in S, a \in A_i\}$; and $V$ is a criterion (objective) function. The MDP has a history of successive states and decisions, $h_n = (s_0, a_0, s_1, a_1, \cdots, s_{n-1}, a_{n-1}, s_n)$, and a policy $\pi$ is a sequence $\pi = (\pi_0, \pi_1, \cdots)$. When the history at step $n$ is $h_n$, the agent makes a decision according to the probability distribution $\pi_n(\cdot \mid h_n)$ on $A_{s_n}$. At each step $t$, the agent observes the state of the environment $s_t$ and then chooses an action $a_t$, for which it receives a reward $r_{t+1}$. The environment then changes to the next state $s_{t+1}$ under action $a_t$, and the agent again observes and acts. The goal of reinforcement learning is to learn a mapping from states to actions that maximizes the expected sum of discounted rewards; formally, the goal is to learn a policy $\pi : S \times \bigcup_{i \in S} A_i \to [0, 1]$ that maximizes
$$V^{\pi}(s) = E_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t+1} \mid s_0 = s\right],$$
where $\gamma \in [0, 1]$ is a discount factor and $\pi(s, a)$ is the probability that the agent selects action $a$ in state $s$ under policy $\pi$.
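To ground the MDP formalism above, here is a minimal classical tabular Q-learning sketch (numpy; the epsilon-greedy exploration policy and the hyperparameter values are illustrative assumptions), which learns an approximation to the optimal action-value function:

```python
import numpy as np

def q_learning(P, R, n_states, n_actions, gamma=0.95,
               alpha=0.1, eps=0.1, episodes=500, rng=None):
    """Tabular Q-learning for a discrete MDP. P[s][a] is a probability
    distribution over next states; R[s][a] is the immediate reward."""
    rng = rng or np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = rng.integers(n_states)
        for _ in range(100):                        # bounded horizon
            a = (rng.integers(n_actions) if rng.random() < eps
                 else int(Q[s].argmax()))           # epsilon-greedy choice
            s_next = rng.choice(n_states, p=P[s][a])
            target = R[s][a] + gamma * Q[s_next].max()
            Q[s, a] += alpha * (target - Q[s, a])   # TD update
            s = s_next
    return Q
```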
Dong et al. [116] proposed representing the state set with a quantum superposition state; the eigenstate obtained by randomly observing the quantum state is the action. The probability of each eigen action is determined by its probability amplitude, which is updated in parallel according to the rewards. The probability of a "good" action is amplified by using iterations of Grover's algorithm (Section 4.1). In addition, they showed that the approach makes a good trade-off between exploration and exploitation using the probability amplitudes and can speed up learning through quantum parallelism. Paparo et al. [265] showed that the computational complexity of a particular model, projective simulation, can be quadratically reduced. In addition, quantum neural networks (Section 7.6) and quantum Boltzmann machines (Section 7.5.3) have been used for approximate RL [78, 80, 101, 181, 222]. The field of quantum machine learning is still in its infancy, and many exciting developments are underway. We list here a few representative or potential financial applications. We also point readers to a recent survey on quantum machine learning for finance [268]. Anomaly detection is a critical task for financial institutions, from both a financial point of view and a technological one. In finance, fraud detection (Section 2.2.3) is an extremely important form of anomaly detection. Examples include identifying fraudulent credit card transactions and financial documents. Also important is anomaly detection on the communication networks used by the financial industry [6]. These tasks are typically performed, classically, by utilizing techniques from statistical analysis and machine learning [338], most commonly unsupervised deep learning [264]. Researchers have also proposed quantum versions of these approaches, such as quantum GANs (Section 7.6). Herr et al. [167] used parameterized quantum circuits to model the generator used in AnoGAN [295] for anomaly detection. Quantum amplitude estimation (Section 4.4) has been utilized to enhance density-estimation-based anomaly detection [155]. There also exist quantum variants of common clustering methods, such as k-means (Section 7.3), that could be useful for anomaly detection. Furthermore, quantum-enhanced kernels (Section 7.2) could be utilized for kernel-based clustering methods [345]. Quantum Boltzmann machines (Section 7.5.3) have also been used for fraud detection [352]. Natural language processing (NLP) has flourished over the past several years as a result of advancements in deep learning [263]. One important component of NLP is building language models [252]. Today, deep neural networks using transformers [325] are utilized to construct language models [111, 131] for a variety of tasks. However, concerns remain, such as bias [51], energy consumption, and environmental impact [112]. Thus it makes sense to look to quantum computing for potentially better language models for NLP. The DisCoCat framework, developed by Coecke et al. [94], combines both meaning and grammar, something not previously done, into a single language model utilizing category theory. For grammar, they utilized pregroups [208], sets with relaxed group axioms, to validate sentence grammar. Semantics is represented by a vector-space model, similar to those commonly used in NLP [237]. However, DisCoCat also allows for higher-order tensors to distinguish different parts of speech. Coecke et al. noted that pregroups and vector spaces are asymmetric and symmetric strict monoidal categories, respectively. Thus, they can be represented by utilizing the powerful diagrammatic framework associated with monoidal categories [93]. The key realization of this group [95] was that this same diagrammatic framework is utilized by categorical quantum mechanics [3]. Because of the similarity between the two, the diagrams can be represented by quantum circuits [95] utilizing the algebra of ZX-calculus [92]. Because of this connection and the use of tensor operations, the group argued that this natural language model is "quantum-native," meaning it is more natural to run the DisCoCat model on a quantum computer than on a classical one [95].
Coecke et al. ran multiple experiments on NISQ devices for a variety of simple NLP tasks [106, 223, 235]. In addition, there are many ways in which QML models, such as QNNs (Section 7.6), could potentially enhance the capacity of existing natural language neural architectures [32, 79, 316]. The fundamental goal of asset pricing is to understand the behavior of risk premiums. Classically, machine learning has shown great potential to improve our empirical understanding of asset returns. For example, Gu et al. [154] studied the problem of predicting returns in cross-section and time series, using a collection of machine learning methods that included linear regression, generalized linear models, principal component regression, and neural networks. Chen et al. [76] proposed an asset pricing model for individual stock returns using deep neural networks. Quantum versions of these machine learning methods could potentially be applied to asset pricing, and investigations of their applicability are needed. For example, Bausch [32] proposed a quantum recurrent neural network, and Chen et al. [79] proposed a quantum version of the long short-term memory architecture, a variant of the recurrent neural network that has been demonstrated to be effective for temporal and sequence data modeling. Both make use of parameterized quantum circuits, and these QML models may offer an advantage in terms of training complexity and prediction performance over classical models. Implied volatility is a metric that captures the market's forecast of a likely movement in the price of a security. It is one of the deciding factors in the pricing of options. Sakuma [291] investigated the use of the deep quantum neural network proposed by Beer et al. [35] in the context of learning implied volatility. Numerical results suggest that such a QML model is a promising candidate for developing powerful methods in finance. In this section we discuss experiments that were performed on quantum hardware to solve financial problems using some of the methods that have been presented. The hardware platforms include superconducting and trapped-ion universal quantum processors, as well as superconducting quantum annealers. We first present the experimental results collected by Woerner and Egger [342], using their method for risk analysis mentioned in Section 5.1.2, on IBM transmon quantum processors [206, 342]. The authors tested the algorithm on two toy models: pricing a Treasury bill, executed on quantum hardware, and determining the financial risk of a two-asset portfolio, performed in simulations on classical hardware that included modeling of quantum noise. The QAE algorithm is used to estimate quantities related to a random variable represented as a quantum state, typically using $n$ qubits to map it to the set $\{0, \ldots, N-1\}$, where $N = 2^n$ is the number of possible realizations. The authors presented a method for efficiently applying an approximation to $f(X)$, where QAE computes $E[f(X)]$, given that $f(x) = \sin^2(p(x))$ for some polynomial $p$. They used this approach to compute statistical moments, such as the mean. The $f(X)$ used for VaR, and similarly for CVaR, requires a quantum comparator, as mentioned in Section 5.1.2. In addition, for computing VaR, a bisection search is performed over multiple executions of QAE to find a threshold $x$ such that $E[f(X)] = P[X \leq x] \geq \alpha$ for the chosen confidence level $\alpha$. For the comparator, the authors referenced the quantum ripple-carry adder of Cuccaro et al. [102].
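As a sanity check on this encoding, the following numpy sketch (no quantum library required; the toy distribution and the polynomial are illustrative assumptions) verifies that loading $\sqrt{p(x)}$ into the amplitudes and rotating an ancilla by the polynomial makes the ancilla's $|1\rangle$ probability equal to $E[\sin^2(p(X))]$, the quantity QAE then estimates:

```python
import numpy as np

n = 3                                   # evaluation-register size
N = 2**n                                # number of realizations
x = np.arange(N)
probs = np.exp(-0.5 * ((x - N / 2) / 1.5) ** 2)
probs /= probs.sum()                    # discretized toy distribution

poly = 0.1 + 0.08 * x                   # a hypothetical polynomial p(x)
amp1 = np.sqrt(probs) * np.sin(poly)    # ancilla-|1> amplitude per branch
prob1 = np.sum(amp1**2)                 # probability of measuring |1>

assert np.isclose(prob1, np.sum(probs * np.sin(poly) ** 2))
```

QAE estimates this probability to additive error $O(1/M)$ using $M$ quantum queries, versus the $O(1/\sqrt{M})$ error of $M$ classical Monte Carlo samples.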
For each of these methods, they discussed the trade-off between circuit depth, qubit count, and convergence rate. An example circuit for 3 evaluation qubits, corresponding to $2^3 = 8$ samples (i.e., quantum queries, Section 5), is shown in Figure 1 for computing the value of a T-bill. Increasing the number of evaluation qubits above 3 yields smaller estimation errors than classical Monte Carlo integration (Section 5) achieves, demonstrating the predicted quantum speedup; see Figure 2. Despite the observed speedup of the QAE-based algorithm compared with classical Monte Carlo integration, practical quantum advantage cannot be achieved using this algorithm on NISQ devices, because of the relatively small qubit count, the limitations in qubit connectivity and quality, and the decoherence present in modern quantum processors (Sections 3.3 and 5.2). Additionally, Monte Carlo numerical integration can be efficiently parallelized, imposing even stronger requirements on quantum algorithms seeking to outperform standard classical methods. As explained in Section 6.4.1, quantum annealing (Section 6.1.1) is well suited to solving portfolio optimization problems. The commercially available D-Wave flux-qubit-based [206] quantum annealers have been successfully used to solve portfolio optimization instances. Determining the optimal portfolio composition is one of the most studied optimization problems in finance. A typical portfolio optimization problem can be cast into a QUBO or, equivalently, an Ising Hamiltonian suitable for a quantum annealer; see Section 6.1 (a minimal sketch of such a QUBO construction is given below). The following presents the work of Venturelli and Kondratyev [328], who used a reverse-quantum-annealing-based approach. Portfolio optimization was defined in Section 2.2.2; the pairwise relationships between the asset choices give rise to the quadratic form of the objective function to be optimized. Several technical challenges need to be overcome in order to run such problems on a D-Wave quantum annealer. First, the standard way of enforcing the number of selected assets, by introducing a high-energy penalty term, can be problematic on D-Wave devices because of precision issues and the physical limit on the energy scales that can be programmed on the chip. The authors observed that this constraint can instead be enforced by artificially shifting the Sharpe ratios, used when computing the risk-adjusted returns of the assets, by a fixed amount. Second, embedding the Ising optimization problem in the Chimera topology of D-Wave quantum annealers strongly limits the problems that can be solved on the device (Section 6.1.1). To overcome this limitation, the minor-embedding compilation technique for fully connected graphs can be employed [52, 329], allowing the necessary embedding to be performed using an extended set of variables.
Figure 3: Forward (a) and reverse (b) quantum annealing protocols. During reverse quantum annealing, the system is initialized in a classical state (c, left) and evolved following a "backward" annealing schedule to some quantum superposition state. At that point the evolution is interrupted for a fixed amount of time, allowing the system to perform a global search (c, middle). The system's evolution is then completed by the forward annealing schedule (c, right). Adapted from [198].
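The QUBO construction referenced above can be sketched in a few lines of numpy; the objective form and the quadratic-penalty placement are illustrative assumptions, not the exact formulation of [328]:

```python
import numpy as np

def portfolio_qubo(mu, Sigma, risk_aversion, budget, penalty):
    """QUBO matrix Q for binary asset selection x in {0,1}^n:
    minimize  -mu.x + risk_aversion * x.Sigma.x
              + penalty * (sum(x) - budget)^2,
    where the quadratic penalty enforces the asset-count constraint
    (the mechanism whose hardware precision issues are noted above)."""
    n = len(mu)
    Q = risk_aversion * Sigma.copy()
    Q += penalty * np.ones((n, n))        # penalty * sum_ij x_i x_j
    for i in range(n):
        # linear terms sit on the diagonal since x_i^2 = x_i;
        # the constant penalty * budget^2 is dropped
        Q[i, i] += -mu[i] - 2 * penalty * budget
    return Q   # objective value is x.T @ Q @ x
```

The resulting Q (or the equivalent Ising Hamiltonian) is what gets minor-embedded onto the annealer's hardware graph.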
The novelty of the work by Venturelli and Kondratyev is the use of the reverse annealing protocol [198], which was found to be on average more than 100 times faster than forward annealing, as measured by the time-to-solution (TTS) metric. Reverse quantum annealing, briefly mentioned in Section 6.1.1, is illustrated in Figure 3. It is a new approach, enabled by the latest D-Wave architecture, designed to help escape local minima through a combination of quantum and thermal fluctuations. The annealing schedule starts in some classical state provided by the user, as opposed to the quantum superposition state used in forward annealing. The transverse field is then increased, driving backward annealing from the classical initial state to some intermediate quantum superposition state. The annealing schedule functions $A(t)$ and $B(t)$ (Section 6.1.1) are then kept constant for a fixed amount of time, allowing the system to search for a global minimum. The reverse annealing schedule is completed by removing the transverse field, effectively performing the forward annealing protocol with the system evolving to the ground state of the problem Hamiltonian. The DW2000Q device used in [328] allows embedding up to 64 logical binary variables on a fully connected graph; the authors considered portfolio optimization problems with 24-60 assets, with 30 randomly generated instances for each size, calculating the TTS for each. The results for both forward and reverse quantum annealing were then compared with the industry-established genetic algorithm (GA) approach used as a classical benchmark heuristic. The TTS for reverse annealing with the shortest annealing and pause times was found to be several orders of magnitude smaller than for the forward annealing and GA approaches, demonstrating a promising step toward practical applications of NISQ devices. As discussed in Section 6.4.1, the convex mean-variance portfolio optimization problem with linear equality constraints can be solved as a linear system of equations: the method of Lagrange multipliers turns the convex quadratic program (Equation (10)) into a linear system in the portfolio weights and the dual variables $\eta, \theta$ (a classical sketch is given below). This lends itself to quantum linear systems algorithms (Section 4.5). To reiterate, solving a QLSP consists of returning a quantum state, with bounded error, that is proportional to the solution of a linear system $A\vec{x} = \vec{b}$. Quantum algorithms for this problem on sparse and well-conditioned matrices potentially achieve an exponential speedup in the system size $N$. Although a QLS algorithm cannot provide classical access to the amplitudes of the solution state while maintaining the exponential speedup in $N$, potentially useful financial statistics can be obtained from the state. The QLS algorithm discussed in this section is HHL, which is typically viewed as an algorithm intended for the fault-tolerant era of quantum computation. Two of the main components of the algorithm that contribute to this constraint are quantum phase estimation (Section 4.3) and the loading of the inverses of the eigenvalue estimates onto the amplitudes of an ancilla, called eigenvalue inversion [160] (Hamiltonian simulation also contributes to this constraint). Yalovetzky et al. [346] developed an algorithm called NISQ-HHL with the goal of applying it to portfolio optimization problems on NISQ devices. While it does not retain the asymptotic speedup of HHL, the authors' approach can potentially provide benefits in practice.
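To make the linear-system formulation concrete, the following numpy sketch applies the method of Lagrange multipliers to an equality-constrained mean-variance problem (the specific constraint set, a budget constraint plus a target-return constraint, is an illustrative assumption) and solves the resulting KKT system, the kind of system to which a QLS algorithm would be applied:

```python
import numpy as np

def mean_variance_kkt(Sigma, mu, target_return):
    """Solve  min_w w.Sigma.w  s.t.  sum(w) = 1,  mu.w = target_return.
    Setting the Lagrangian's gradient to zero gives one (n+2)x(n+2)
    linear system in (w, eta, theta)."""
    n = len(mu)
    ones = np.ones(n)
    A = np.block([
        [2 * Sigma,     ones[:, None],    mu[:, None]],
        [ones[None, :], np.zeros((1, 1)), np.zeros((1, 1))],
        [mu[None, :],   np.zeros((1, 1)), np.zeros((1, 1))],
    ])
    b = np.concatenate([np.zeros(n), [1.0, target_return]])
    sol = np.linalg.solve(A, b)          # classical O(n^3) solve
    return sol[:n], sol[n], sol[n + 1]   # weights w, duals eta, theta
```

An HHL-style algorithm would prepare a quantum state proportional to `sol` rather than returning its entries classically.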
They applied their algorithm to small mean-variance portfolio optimization problems and executed them on the trapped-ion Quantinuum System Model H1 device [267]. QPE is typically difficult to run on near-term devices because of the large number of ancillary qubits and controlled operations required for the desired eigenvalue precision (the depth scales as $O(1/\delta)$ for precision $\delta$). An alternative realization of QPE reduces the number of ancillary qubits to one and replaces all controlled-phase gates in the quantum Fourier transform component of QPE with classically controlled single-qubit gates [33]. This method requires mid-circuit measurements, ground-state resets, and quantum conditional logic (QCL), features that are available on the Quantinuum System Model H1 device. The paper calls this method QCL-QPE. The differences between these two QPE methods are highlighted in Figure 4.
Figure 4: Circuits for estimating the eigenvalues of the unitary operator $U$ to three bits using QPE (left circuit) or QCL-QPE (right circuit). $S$ is the register that $U$ is applied to, and $j$ is a classical register; $H$ refers to the Hadamard gate; and $R_k$, for $k = 2, 3$, are the standard phase gates used by the quantum Fourier transform [256]. Adapted from [346].
With regard to eigenvalue inversion, implementing the controlled rotations on near-term devices typically requires a multicontrolled rotation for every possible eigenvalue that can be estimated by QPE, such as the uniformly controlled rotation gate [246]; this circuit is exponentially deep in the number of qubits. Fault-tolerant implementations can take advantage of quantum arithmetic methods to implement efficient algorithms for the required arcsine computation. An alternative approach for the near term is a hybrid version of HHL [212], which the developers of NISQ-HHL extended. This approach utilizes QPE to first estimate the eigenvalues for which the controlled rotations need to be performed. While the number of controlled rotations is thereby reduced, this approach does not retain the asymptotic speedup provided by HHL, because of the loss of coherence. A novel procedure was developed by the authors for scaling the matrix by a factor $\gamma$ to make the eigenvalues easier to distinguish in the output distribution of QPE and thereby improve performance. Figure 5 displays experimental results collected from the Quantinuum H1 device showing the effect of the scaling procedure. (Let $\Pi_\lambda$ denote the eigenspace projector for an eigenvalue $\lambda$ of the Hermitian matrix in Equation (12), and let $|b\rangle$ be a quantum state proportional to the right-hand side of Equation (12); the y-axis value of the red dot corresponding to $\lambda$, on either histogram in Figure 5, is $\|\Pi_\lambda |b\rangle\|_2^2$.)
Figure 5: Probability distributions over the eigenvalue estimates from the QCL-QPE run using $\gamma = 50$ (left) and $\gamma = 100$ (right). The blue bars represent the experimental results on the H1 machine (1,000 shots, left; 2,000 shots, right), compared with the theoretical eigenvalues (classically calculated using a numerical solver), shown as red dots, and the results from the Qiskit [9] QASM simulator, shown as orange dots. Adapted from [346].
These methods were executed on the Quantinuum H1 device for a few small portfolios (Table 2). A controlled-SWAP test [34] was performed between the state produced by HHL and the classically obtained solution of the linear system, loaded onto a quantum state [346]. The number of H1 two-qubit ZZMax gates used is also displayed in Table 2.
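For reference, the following Qiskit sketch implements the controlled-SWAP (swap) test just described for two single-qubit states; the `ry` state preparations are illustrative stand-ins for the HHL output and the classically loaded solution:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def swap_test(psi_angle, phi_angle):
    """Swap test between |psi> = ry(psi_angle)|0> and |phi> = ry(phi_angle)|0>.
    P(ancilla = 0) = (1 + |<psi|phi>|^2) / 2, so the squared overlap
    between the two states is 2 * P(0) - 1."""
    qc = QuantumCircuit(3)
    qc.ry(psi_angle, 1)        # prepare |psi> on qubit 1
    qc.ry(phi_angle, 2)        # prepare |phi> on qubit 2
    qc.h(0)
    qc.cswap(0, 1, 2)          # controlled-SWAP of the two state registers
    qc.h(0)
    p0 = Statevector(qc).probabilities([0])[0]
    return 2 * p0 - 1          # estimate of |<psi|phi>|^2

print(swap_test(0.4, 0.4))     # identical states -> overlap close to 1
```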
Table 2: Comparison of the number of rotations in the eigenvalue inversion, the ZZMax depth of the HHL plus controlled-SWAP test circuits, and the inner products calculated from the Qiskit statevector simulator and from the Quantinuum H1 results with 3,000 shots. Adapted from [346].
Statistics such as risk can be computed efficiently from the quantum state. Sampling can also be performed if the solution state is sparse enough [276]. Quantum computers are expected to surpass the computational capabilities of classical computers and to have a disruptive impact on numerous industry sectors, such as global energy and materials, pharmaceuticals and medical products, telecommunication, travel and logistics, and finance. In particular, the finance sector has a history of creating and adopting new technologies early, and this is also true when it comes to quantum computing. In fact, finance is estimated to be the first industry sector to benefit from quantum computing, not only in the medium and long terms but even in the short term, because of the large number of financial use cases that lend themselves to quantum computing and their amenability to being solved effectively even in the presence of approximations. A common misconception about quantum computing is that quantum computers are simply faster processors and hence that speedups will be achieved automatically by porting a classical algorithm from a classical computer to a quantum one, just as one might move a program to upgraded hardware. In reality, quantum computers are fundamentally different from classical computers, so much so that the algorithmic process for solving an application has to be completely redesigned based on the architecture of the underlying quantum hardware. Most of the use cases that characterize the finance industry have high computational complexity, and for this reason they lend themselves to quantum computing. However, the computational complexity of a problem per se is no guarantee that quantum computing can make a difference. In order to assess commercial viability, the first question when adapting a classical finance application to quantum computing is whether that application can achieve quantum speedup and, if so, when that will happen based on the estimated evolution of the underlying quantum hardware. Today's quantum computers are not yet capable of solving real-life-scale problems in the industry more efficiently and more accurately than classical computers can. Nevertheless, proofs of concept for quantum advantage and even quantum supremacy have already been provided (see Section 3), and there is hope that the situation will improve in the coming years with further demonstrations of quantum advantage. It is crucial for enterprises, and particularly financial institutions, to use the current time to become quantum ready, in order to avoid being left behind when quantum computers become operational in a production environment. We are in the early stages of the quantum revolution, yet we are already observing a strong potential for quantum technology to transform the financial industry.
Right now, the first applications that are expected to benefit from quantum advantage are portfolio optimization, derivatives pricing, risk analysis, and several problems in the realm of artificial intelligence and machine learning, such as fraud detection and NLP. In this paper we have provided a comprehensive review of quantum computing for finance, specifically focusing on quantum algorithms that can solve computationally challenging financial problems, and we have described the current landscape of the industry. We hope this work will be used by the scientific community, in both industry and academia, not only as a reference but also as a source of information for identifying new opportunities and advancing the state of the art. Dylan Herman, Cody Ning Googin, and Xiaoyuan Liu contributed equally to the manuscript. All authors contributed to the writing and reviewed and approved the final manuscript.

References

Read the Fine Print.
The Power of Quantum Neural Networks.
Categorical quantum mechanics. Handbook of Quantum Logic and Quantum Structures.
Quantum walks on graphs.
Adiabatic quantum computation is equivalent to standard quantum computation.
A survey of network anomaly detection techniques.
Quantum computing based hybrid solution strategies for large-scale discrete-continuous optimization problems.
Adiabatic quantum computation.
Qiskit: An open-source framework for quantum computing.
A variational quantum algorithm for the Feynman-Kac formula.
Quantum algorithms for feedforward neural networks.
Filtering variational quantum algorithms for combinatorial optimization.
Quantum walk algorithm for element distinctness.
Quantum search with variable times. Theory of Computing Systems.
Variable time amplitude amplification and quantum algorithms for linear algebra problems.
Understanding quantum algorithms via query complexity.
One-dimensional quantum walks.
Coins make quantum walks faster.
Quadratic speedup for finding marked vertices by quantum walks.
Quantum Boltzmann machine.
Estimating the cost of generic quantum pre-image attacks on SHA-2 and SHA-3.
Quantum speedup for graph sparsification, cut approximation and Laplacian solving.
A unified framework of quantum walk search.
Computational Complexity: A Modern Approach.
A combinatorial, primal-dual approach to semidefinite programs.
Quantum supremacy using a programmable superconducting processor.
Focus beyond quadratic speedups for error-corrected quantum advantage.
Cybersecurity: Emerging challenges and solutions for the boards of financial-services companies.
Grover's quantum algorithm applied to global optimization.
Normal inverse Gaussian distributions and stochastic volatility modelling.
Quantum k-nearest neighbors algorithm.
Recurrent quantum neural networks.
Circuit for Shor's algorithm using 2n+3 qubits.
Tests of quantum information.
Training deep quantum neural networks.
Training deep quantum neural networks.
A mixed integer linear programming formulation of the optimal mean/value-at-risk portfolio problem.
Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning.
A generative modeling approach for benchmarking and training shallow quantum circuits. npj Quantum Information.
Hardware-efficient variational quantum algorithms for time evolution.
Strengths and weaknesses of quantum computing.
Post-quantum cryptography.
Efficient quantum algorithms for simulating sparse Hamiltonians.
Exponential improvement in precision for simulating sparse Hamiltonians.
Hamiltonian simulation with nearly optimal dependence on all parameters.
Realizable Hamiltonians for universal adiabatic quantum computers.
An economic analysis of interest rate swaps.
Training variational quantum algorithms is NP-hard.
The pricing of options and corporate liabilities.
Language (technology) is power: A critical survey of "bias" in NLP.
Fast clique minor generation in Chimera qubit connectivity graphs.
Quantum circuit representation of Bayesian networks.
Experimental evaluation of quantum Bayesian networks on IBM QX hardware.
Iordanis Kerenidis and Anupam Prakash. Prospects and challenges of quantum finance.
Convex Optimization.
Tight bounds on quantum searching.
Options: A Monte Carlo approach.
Quantum algorithms for mixed binary optimization applied to transaction settlement.
Quantum SDP solvers: Large speed-ups, optimality, and applications to quantum learning.
Quantum speed-ups for solving semidefinite programs.
Quantum amplitude amplification and estimation. Quantum Computation and Information.
Variational quantum linear solver.
Mitigating measurement errors in multiqubit experiments.
Trapped-ion quantum computing: Progress and challenges.
Continuous variable quantum algorithms: an introduction.
Implementing pure adaptive search with Grover's quantum algorithm.
A practical heuristic for finding graph minors.
Quantum neuron: an elementary building block for machine learning on quantum computers.
Quantum Monte Carlo.
Variational quantum algorithms.
A threshold for quantum advantage in derivative pricing.
The power of block-encoded matrix powers: improved regression techniques via faster Hamiltonian simulation.
Circuit design for multi-body interactions in superconducting quantum annealing systems with applications to a scalable architecture.
Finding maximum cliques on the D-Wave quantum annealer.
Deep learning in asset pricing.
Demonstration of adiabatic variational quantum computing with a superconducting quantum coprocessor.
Variational quantum circuits for deep reinforcement learning.
Quantum long short-term memory.
Variational quantum reinforcement learning via evolutionary optimization.
Negative-cycle detection algorithms.
Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning.
On the relationship between continuous- and discrete-time quantum walk.
Quantum algorithms for subset finding.
Quantum algorithms for algebraic problems.
An example of the difference between quantum and classical random walks. Quantum Information Processing.
Exponential algorithmic speedup by a quantum walk.
Quantum algorithm for systems of linear equations with exponentially improved dependence on precision.
Theory of Trotter error with commutator scaling.
A tutorial on quantum graph recurrent neural network (QGRNN).
Quantum algorithms revisited.
Interacting quantum observables: categorical algebra and diagrammatics.
Categories for the practising physicist.
Mathematical foundations for a compositional distributional model of meaning.
Foundations for near-term quantum natural language processing.
Quantum convolutional neural networks.
Introduction to Algorithms.
Exceptional paper - Location of bank accounts to optimize float: An analytic study of exact and approximate algorithms.
Optimal scaling quantum linear systems solver via discrete adiabatic theorem.
Quantum versus classical generative modelling in finance.
Reinforcement learning using quantum Boltzmann machines.
A new quantum ripple-carry addition circuit.
Quantum spectral clustering through a biased phase estimation algorithm.
Statistical and machine learning models in credit scoring: A systematic literature survey.
Adiabatic quantum linear regression.
DisCoPy: Monoidal categories in Python.
The mathematics of arbitrage.
A Generalized Approach to Portfolio Optimization: Improving Performance by Constraining Portfolio Norms.
What is the computational value of finite-range tunneling?
Quantum linear systems algorithms: a primer.
BERT: Pre-training of deep bidirectional transformers for language understanding.
The carbon impact of artificial intelligence.
Training restricted Boltzmann machines with a D-Wave quantum annealer.
Product Integration with Application to Differential Equations. Encyclopedia of Mathematics and its Applications.
Quadratic and higher-order unconstrained binary optimization of railway dispatching problem for quantum computing.
Quantum reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics).
Discrete time adiabatic theorems for quantum mechanical systems.
Parity quantum optimization: Encoding constraints.
Swap rates and credit quality.
A quantum algorithm for finding the minimum.
Credit risk analysis using quantum computers.
Financial networks and contagion.
Parity quantum optimization: Compiler.
Predicting U.S. recessions: Financial variables as leading indicators. The Review of Economics and Statistics.
Quantum computation and decision trees.
Classification with quantum neural networks on near term processors.
Quantum computation by adiabatic evolution.
Quantum adiabatic evolution algorithms versus simulated annealing.
A quantum approximate optimization algorithm.
Parity quantum optimization: Benchmarks.
GPT-3: Its nature, scope, limits, and consequences. Minds and Machines.
Surface codes: Towards practical large-scale quantum computation.
Quantifying the effects of local many-qubit errors and nonlocal two-qubit errors on the surface code.
Fast Monte-Carlo algorithms for finding low-rank approximations.
Transferability of optimal QAOA parameters between random graphs.
How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits. Quantum.
Grover adaptive search for constrained polynomial binary optimization.
Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics.
Quantum random access memory.
Quantum cryptography.
Low depth algorithms for quantum amplitude estimation.
Low depth amplitude estimation on a trapped ion quantum computer.
Monte Carlo Methods in Financial Engineering.
Adaptive variational quantum imaginary time evolution approach for ground state preparation.
Deep Learning.
Optimizing adiabatic quantum program compilation using a graph-theoretic framework.
Predicting financial crisis in developing economies: astronomy or astrology? Eastern Economic.
Benchmarking quantum annealing controls with portfolio optimization.
Improving quantum linear system solvers via a gradient descent perspective.
Iterative quantum amplitude estimation.
Creating superpositions that correspond to efficiently integrable probability distributions.
A fast quantum mechanical algorithm for database search.
Fixed-point quantum search.
Empirical asset pricing via machine learning.
Quantum algorithms for anomaly detection using amplitude estimation.
Towards quantum advantage via topological data analysis.
From the quantum approximate optimization algorithm to a quantum alternating operator ansatz.
Optimizing quantum circuits for arithmetic.
Small quantum computers and large classical data sets.
Quantum algorithm for linear systems of equations.
Supervised learning with quantum-enhanced feature spaces.
Sensitivity and computational complexity in financial networks.
Quanvolutional neural networks: powering image recognition with quantum circuits.
Quantum computing with neutral atoms.
No quantum speedup with Grover-Rudolph state preparation for quantum Monte Carlo integration.
Quantum Monte-Carlo integration: The full advantage in minimal circuit depth.
Anomaly detection with variational quantum generative adversarial networks.
Training products of experts by minimizing contrastive divergence.
Portfolio rebalancing experiments using the quantum alternating operator ansatz.
Monte Carlo sampling-based methods for stochastic optimization.
Monte Carlo methods for value-at-risk and conditional value-at-risk: A review.
Quantum collision attacks on reduced SHA-256 and SHA-512.
Algorithmics for Hard Problems: Introduction to Combinatorial Optimization, Randomization, Approximation, and Heuristics.
Swaps, the modern process of financial innovation and the vulnerability of a regulatory paradigm.
Near-term quantum algorithms for linear systems of equations with regression loss functions.
Power of data in quantum machine learning.
Characterizing the loss landscape of variational quantum circuits.
Quantum search on bounded-error inputs.
Neural tangent kernel: Convergence and generalization in neural networks.
Implementing Grover oracles for quantum key search on AES and LowMC.
Quantum enhancements for deep reinforcement learning in large spaces.
Quantum annealing in the transverse Ising model.
Portfolio asset identification using graph algorithms on a quantum annealer.
Open-system quantum annealing in mean-field models with exponential degeneracy.
Quantum random walks: An introductory overview.
Quantum random walks hit exponentially faster.
Quantum spectral clustering.
Quantum recommendation systems.
Quantum gradient descent for linear systems and least squares.
A quantum interior point method for LPs and SDPs.
q-means: A quantum algorithm for unsupervised machine learning.
Quantum algorithms for deep convolutional neural networks.
Quantum algorithms for portfolio optimization.
Classical and quantum algorithms for orthogonal neural networks.
Quantum algorithms for second-order cone programming and support vector machines.
Ahsan Javed Awan and Gemma Vall-Llosera. K-means clustering on noisy intermediate scale quantum computers.
Continuous-variable quantum neural networks.
Quantum-assisted genetic algorithm.
Quantum measurements and the Abelian stabilizer problem. arXiv preprint quant-ph/9511026.
Fault-tolerant quantum computation by anyons.
Classical and Quantum Computation.
Introduction to Stochastic Calculus with Applications.
Chapter 12 - The Schrödinger equation as inspiration for a client portfolio simulation hybrid system based on dynamic Bayesian networks and the REFII model.
The unconstrained binary quadratic programming problem: a survey.
Probabilistic Graphical Models: Principles and Techniques.
A quantum engineer's guide to superconducting qubits.
CMOS limitations and futuristic carbon allotropes.
The mathematics of sentence structure.
Random Walk: A Modern Introduction. Cambridge Studies in Advanced Mathematics.
A quantum annealing architecture with all-to-all connectivity from local interactions.
Gradient-based learning applied to document recognition.
Hybrid quantum linear equation algorithm and its experimental test on IBM Quantum Experience.
On default correlation: A copula function approach.
Orthogonal deep neural networks.
Quantum Error Correction.
Differentiable learning of quantum circuit Born machines.
Quantum generative adversarial learning.
Quantum algorithms for supervised and unsupervised machine learning.
Quantum principal component analysis.
Quantum algorithms for topological and geometric analysis of data.
Minor embedding in broken Chimera and Pegasus graphs is NP-complete.
Reinforcement learning with quantum variational circuits.
QNLP in practice: Running compositional models of meaning on a quantum computer.
Random Walks on Graphs: A Survey.
Optimal Hamiltonian simulation by quantum signal processing.
Variational quantum algorithms for nonlinear problems.
Ising formulations of many NP problems.
Some methods for classification and analysis of multivariate observations.
Search via quantum walk.
Linear and mixed integer programming for portfolio optimization.
Portfolio selection.
Variational ansatz-based quantum simulation of imaginary time evolution.
The theory of variational hybrid quantum-classical algorithms.
A variational solution of the time-dependent Schrödinger equation.
Grammar-aware question-answering on quantum computers.
A game plan for quantum computing.
Efficient estimation of word representations in vector space.
Optimal feature selection in credit scoring and classification using a quantum annealer.
Quantum circuit learning.
Methodology for replacing indirect measurements with direct measurements.
Quantum speedup of Monte Carlo methods.
Polynomial convergence of primal-dual algorithms for the second-order cone program based on the MZ-family of directions. Mathematical Programming.
Quantum walks on the hypercube.
Quantum-like Bayesian networks for modeling decision making.
Determining eigenstates and thermal states on a quantum computer using quantum imaginary time evolution.
Transformation of quantum states using uniformly controlled rotations.
Dynamic portfolio optimization with real datasets using quantum processors and quantum-inspired tensor networks.
An introduction to kernel-based learning algorithms.
The Geometry of Minkowski Spacetime: An Introduction to the Mathematics of the Special Theory of Relativity.
Quantum-enhanced neural networks in the neural tangent kernel framework.
Fast quantum subroutines for the simplex method.
A comprehensive survey on word representation models: From classical to state-of-the-art word representation language models.
Probabilistic inference using Markov chain Monte Carlo methods.
Interior-Point Polynomial Algorithms in Convex Programming.
On spectral clustering: Analysis and an algorithm.
Quantum Computation and Quantum Information.
Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics.
Breaking limitation of quantum annealer in solving optimization problems under constraints.
Efficient partition of integer optimization problems with one-hot encoding.
Basel Committee on Banking Supervision. High-level summary of Basel III reforms.
Forecasting financial crashes with quantum computing.
Axioms for Euclidean Green's functions.
A survey of the usages of deep learning for natural language processing.
Longbing Cao and Anton Van Den Hengel. Deep learning for anomaly detection: A review.
Quantum speedup for active learning agents.
A variational eigenvalue solver on a photonic quantum processor.
Demonstration of the trapped-ion quantum CCD computer architecture.
Quantum Machine Learning for Finance.
Variational quantum amplitude estimation.
Quantum-state preparation with universal gate decompositions.
Quantum algorithms for linear algebra and machine learning.
Quantum computing and the entanglement frontier.
Quantum computing in the NISQ era and beyond. Quantum.
A Domain-agnostic, Noise-resistant, Hardware-efficient Evolutionary Variational Quantum Eigensolver.
The efficient preparation of normal distributions in quantum registers. Quantum, 5:609.
Quantum computational finance: quantum algorithm for portfolio optimization.
Quantum support vector machine for big data classification.
Quantum computational finance: Monte Carlo pricing of financial derivatives.
Monte Carlo Statistical Methods.
Quantum resource estimates for computing elliptic curve discrete logarithms.
Quantum search by local adiabatic evolution.
Advanced Linear Algebra.
Swap netting using a quantum annealer.
Long-short minimum risk parity optimization using a quantum or digital annealer.
Solving the optimal trading trajectory problem using a quantum annealer.
Finding optimal arbitrage opportunities using a quantum annealer.
Quantum algorithm for k-nearest neighbors classification based on the metric of Hamming distance.
Quantum approximate algorithm for NP optimization problems with constraints.
Quantum information with Rydberg atoms.
Application of deep quantum neural networks to finance.
Deep Boltzmann machines.
Detecting crosstalk errors in quantum information processors.
Unsupervised anomaly detection with generative adversarial networks to guide marker discovery.
Supervised quantum machine learning models are kernel methods.
Effect of data encoding on the expressive power of variational quantum-machine-learning models.
Principles of Quantum Mechanics.
Multistart methods for quantum approximate optimization.
A hybrid approach for solving optimization problems on small quantum computers.
Network community detection on small quantum computers.
Classical symmetries and the quantum approximate optimization algorithm.
QFCNN: Quantum Fourier convolutional neural network.
Quantum random-walk search algorithm.
The limits of arbitrage.
Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer.
Pulser: An open-source package for the design of pulse sequences in programmable neutral-atom arrays.
Quantum walk-based portfolio optimisation.
Quantum simulations of classical annealing processes.
Currency arbitrage detection using a binary integer programming model.
Option pricing using quantum computers.
Training the quantum approximate optimization algorithm without access to a quantum processing unit.
Credit default swaps and the credit crisis.
Amplitude estimation without phase estimation.
Quantum speed-up of Markov chain based algorithms.
Learning temporal data with a variational quantum recurrent neural network.
A quantum-inspired classical algorithm for recommendation systems.
Quantum computation for pricing the collateralized debt obligations.
The socialization of finance.
Quantum Bayesian nets.
Quantum topological data analysis with linear depth and exponential speedup.
Multilevel combinatorial optimization across quantum architectures.
Quantum SDP-solvers: Better upper and lower bounds. Quantum.
Simulated annealing.
Attention is all you need.
Projective Geometry.
Quantum walks: a comprehensive review.
Reverse quantum annealing approach to portfolio optimization problems.
Quantum optimization of fully connected spin glasses.
Impact of ionizing radiation on superconducting qubit coherence.
Quantum graph neural networks.
A comparative study of universal quantum computing models: towards a physical unification.
A comparative assessment of ensemble learning for credit scoring.
Quantum algorithm for linear regression.
Qudits and high-dimensional quantum computing.
Introduction to Monte Carlo methods.
High fidelity quantum gates via dynamical decoupling.
Intelligent financial fraud detection: A comprehensive review.
Properties of Bethe-Salpeter wave functions.
Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning.
Quantum algorithm for data fitting.
Quantum risk analysis. npj Quantum Information.
Integer and Combinatorial Optimization.
Quantum linear system algorithm for dense matrices.
A comprehensive survey of clustering algorithms.
NISQ-HHL: Portfolio optimization for near-term quantum hardware.
Exploring quantum computing use cases for financial services.
Theory of variational quantum simulation.
Quantum-assisted Gaussian process regression.
Generative quantum learning of joint probability distribution functions.
Quantum generative adversarial networks for learning and loading random distributions. npj Quantum Information.
Variational quantum Boltzmann machines.

The authors declare that they have no competing interests. This paper was prepared for information purposes by the teams of researchers from the various institutions identified above, including the Future Lab for Applied Research and Engineering (FLARE) group of JPMorgan Chase Bank, N.A. This paper is not a product of the Research Department of JPMorgan Chase & Co. or its affiliates. Neither JPMorgan Chase & Co. nor any of its affiliates make any explicit or implied representation or warranty, and none of them accept any liability in connection with this paper, including, but not limited to, the completeness, accuracy, and reliability of the information contained herein and the potential legal, compliance, tax, or accounting effects thereof. This document is not intended as investment research or investment advice, or as a recommendation, offer, or solicitation for the purchase or sale of any security, financial instrument, financial product, or service, or to be used in any way for evaluating the merits of participating in any transaction.