AWS Quantum Technologies Blog

JPMorgan Chase and AWS study the prospects for quantum speedups with near-term Rydberg atom arrays

This post was contributed by Martin Schuetz, Ruben Andrist, Grant Salton, and Helmut Katzgraber from the Amazon Quantum Solutions Lab, and Pierre Minssen, Romina Yalovetzky, Shouvanik Chakrabarti, Dylan Herman, Niraj Kumar, Ruslan Shaydulin, Yue Sun, and Marco Pistoia from JPMorgan Chase

Many companies face combinatorial optimization problems, and across both science and industry there are prominent examples in areas like transportation and logistics, telecommunications, manufacturing, and finance. Analog neutral-atom quantum machines provide a novel platform on which to design and implement quantum optimization algorithms, and scientists in both industry and academia are searching for the most promising types of problems for which an early quantum advantage could be demonstrated.

Over the past year, the Amazon Quantum Solutions Lab (QSL) has worked together with the Global Technology Applied Research team at JPMorgan Chase to conduct a systematic study of the hardness of certain optimization problems, inspired by real-world use cases in finance.

In this post, we’ll describe our project and summarize the key results. Motivated by recent experiments reporting a potential super-linear quantum speedup [2], we studied the problem native to Rydberg atom arrays (the maximum independent set problem on unit-disk graphs). We identified a class of problem instances with controllable hardness and minimal overhead for neutral atom quantum hardware.

We think this work sets the stage for potentially impactful future experiments towards quantum advantage. For further technical details, feel free to check out our original work published in Physical Review Research [1].

Background

Given its potentially far-reaching impact, the demonstration of quantum speedups for practically relevant, computationally hard problems has emerged as one of the greatest milestones in quantum information science. Over the last few years, Rydberg atom arrays have established themselves among the leading contenders for the demonstration of such speedups; see this blog post for more details. In particular, Ref. [2] reported a potential super-linear quantum speedup over classical simulated annealing for the maximum independent set problem (MIS) on unit-disk graphs (MIS-UD), based on variational quantum algorithms run on Rydberg atom arrays with up to 289 qubits arranged in two spatial dimensions.

That work focused on benchmarking quantum variational algorithms against a specific implementation of simulated annealing (representing a classical analogue of the adiabatic algorithm), yet it left open the question of benchmarking against other state-of-the-art classical solvers. In our work, we study the MIS-UD problem in detail (see Figure 1 for a schematic illustration), using a broader range of classical solvers than the original paper. Our main goal is to empirically assess the hardness of the MIS-UD problem, to help zero in on problem instances and system sizes where quantum algorithms could ultimately be useful, and thus to identify the most promising directions for future experiments with Rydberg atoms.

Figure 1. Schematic illustration of the problem. (a) We consider unit-disk graphs with nodes arranged on a two-dimensional square lattice with side length L and ~80% of all lattice sites filled, and edges connecting all pairs of nodes within a unit distance (illustrated by the circle). (b) Our goal is to solve the MIS problem on this family of Union-Jack-like instances (as depicted here with nodes colored in red in the right panel) and assess the hardness thereof using both exact and heuristic algorithms.
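To make this instance family concrete, here is a minimal sketch (in Python, assuming networkx is available) that generates a random Union-Jack-like unit-disk instance: nodes are placed on a random ~80% subset of an L × L lattice, and any two nodes within the disk radius are connected. The radius of 1.5 lattice spacings is our illustrative choice so that nearest and diagonal neighbors are connected; it is not necessarily the exact value used in the experiments.

```python
import itertools
import random

import networkx as nx  # assumed available

def unit_disk_instance(L=13, filling=0.8, radius=1.5, seed=0):
    """Random Union-Jack-like unit-disk instance on an L x L lattice.

    Nodes occupy a random ~`filling` fraction of lattice sites; any two
    nodes within `radius` lattice spacings are connected by an edge.
    """
    rng = random.Random(seed)
    sites = [(i, j) for i in range(L) for j in range(L)]
    nodes = rng.sample(sites, int(filling * len(sites)))
    G = nx.Graph()
    G.add_nodes_from(nodes)
    for u, v in itertools.combinations(nodes, 2):
        if (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2 <= radius ** 2:
            G.add_edge(u, v)
    return G

G = unit_disk_instance()
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```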


Problem specification

The MIS problem is an important combinatorial optimization problem with practical applications in network design, vehicle routing, and finance, among others, and is closely related to the maximum clique, minimum vertex cover, and set packing problems [3]. Here we provide two complementary problem formulations, one based on an integer linear program and one based on an Ising-type Hamiltonian compatible with Rydberg atom arrays. We discuss important figures of merit to assess problem hardness.

Consider a graph G = (V, E) with vertex set V and edge set E. An independent set is a subset of vertices in which no two vertices are connected by an edge, and the MIS problem is the task of finding the largest such set. Introducing a binary variable x_i for every node (with x_i = 1 if node i is in the independent set, and x_i = 0 otherwise), the MIS problem can formally be expressed as a compact integer linear program of the form:

maximize Σ_{i ∈ V} x_i
subject to x_i + x_j ≤ 1 for every edge (i, j) ∈ E, with x_i ∈ {0, 1},

with the objective to maximize the number of selected vertices while adhering to the independence constraints.
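As an illustration, an integer program like this can be handed to any off-the-shelf ILP solver. The sketch below is our own minimal example using the open-source PuLP modeling library and its bundled CBC backend (the study itself used CPLEX and Gurobi, as described later), applied to a networkx graph G such as the one generated above.

```python
import pulp  # open-source ILP modeling layer with a bundled CBC solver

def solve_mis_ilp(G):
    """Solve the MIS integer linear program for a networkx graph G."""
    prob = pulp.LpProblem("maximum_independent_set", pulp.LpMaximize)
    # One binary variable per node: x_v = 1 iff v is in the independent set.
    x = {v: pulp.LpVariable(f"x_{k}", cat="Binary") for k, v in enumerate(G.nodes)}
    prob += pulp.lpSum(x.values())      # objective: maximize selected nodes
    for u, v in G.edges:                # independence: at most one endpoint per edge
        prob += x[u] + x[v] <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v for v in G.nodes if x[v].value() == 1]

mis = solve_mis_ilp(G)  # G from the generator sketch above
print("Independence number |MIS| =", len(mis))
```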

Alternatively, a problem formulation commonly used in the physics literature expresses this program in terms of a Hamiltonian that adds a soft penalty for non-independent configurations (that is, when two vertices in the set are connected by an edge) to model the hard independence constraint. This Hamiltonian is given by

H = −Σ_{i ∈ V} x_i + V · Σ_{(i, j) ∈ E} x_i x_j,

with a negative sign in front of the first term because the largest independent set is found by minimizing the energy, and with the penalty parameter V > 1 enforcing the independence constraints.

Energetically, this Hamiltonian favors having each variable in the state x_i = 1 unless a pair of vertices is connected by an edge. This second, unconstrained formulation provides a straightforward connection to Rydberg atom arrays. Specifically, by mapping the binary variables x_i to two-level Rydberg atoms, MIS-UD problems can be encoded efficiently with Rydberg atoms placed at the vertices of the target problem graph. Strong Rydberg interactions between atoms (described by the second term in the Hamiltonian) then prevent two neighboring atoms from being simultaneously in the excited Rydberg state. Using a coherent drive with Rabi frequency Ω and detuning ∆, one can then search for the ground state of the Hamiltonian H (encoding the MIS) via, for example, quantum-annealing-type approaches. See this blog post for more details on annealing-type quantum optimization algorithms running on Rydberg hardware.
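To make the connection explicit, here is a small sketch (our illustration, not code from the paper) that evaluates the penalized Hamiltonian above for a candidate assignment: every violated edge costs V, so for V > 1 any minimum-energy configuration is a maximum independent set.

```python
def mis_energy(G, x, V=2.0):
    """Energy H(x) = -sum_i x_i + V * sum_{(i,j) in E} x_i * x_j.

    `x` maps each node of G to 0 or 1; with V > 1, every violated edge
    costs more than the reward for one extra node, so minima are
    independent sets and the ground state is a maximum independent set.
    """
    reward = sum(x[v] for v in G.nodes)
    penalty = sum(x[u] * x[v] for u, v in G.edges)
    return -reward + V * penalty

# Reusing the ILP solution from the previous sketch: an MIS attains the
# minimal energy -|MIS|, with no penalty terms active.
selected = set(mis)
assignment = {v: int(v in selected) for v in G.nodes}
print(mis_energy(G, assignment))  # == -len(mis)
```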

To assess the practical hardness of the MIS-UD problem and compare the performance of various algorithms, we consider the following key figures of merit:

  • Independence number: We use |MIS| to denote the independence number of a given graph (the size of its largest independent set).
  • Success probability: P_MIS denotes the probability of observing an MIS within a fixed number of steps (for a single algorithmic shot), for a given graph and a given algorithm.
  • Hardness parameter: For a given problem instance, many MIS solutions may exist; we denote the corresponding MIS degeneracy by D_MIS. Similarly, D_(|MIS|−1) refers to the number of independent sets of size |MIS| − 1. Problem hardness can then be specified in terms of the likelihood of finding an MIS given that the algorithm sits in one of the (potentially many) first excited states. Intuitively, the more likely the algorithm is to get trapped in a local minimum (as opposed to the global one), the harder the problem. Formally, this is captured by the hardness parameter HP = D_(|MIS|−1) / (|MIS| · D_MIS), with the factor |MIS| · D_MIS counting the number of possible transitions from a first excited state into an MIS ground state [2]. A brute-force sketch for computing these quantities on small graphs follows this list.
  • Time-to-solution (TTS): For exact methods, TTS refers to the time needed to find the optimal solution (ground state). For heuristic methods, provable optimality is by definition not available, but we can define a closely related quantity, TTS99, as the time required to find the optimal solution with 99% success probability.
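For small graphs, these figures of merit can be computed by brute force. The sketch below (our illustration; exponential time, so only suitable for tiny instances) enumerates every independent set to obtain |MIS|, the degeneracies D_MIS and D_(|MIS|−1), and hence the hardness parameter HP. It can be applied, for example, to a small instance from the generator shown earlier.

```python
import itertools

def hardness_parameter(G):
    """Brute-force |MIS|, D_MIS, D_(|MIS|-1), and HP for a small graph G.

    Enumerates all 2^N vertex subsets, so only use on tiny instances.
    """
    nodes = list(G.nodes)
    edges = {frozenset(e) for e in G.edges}
    counts = {}  # independent-set size -> number of such sets
    for r in range(len(nodes) + 1):
        for subset in itertools.combinations(nodes, r):
            pairs = itertools.combinations(subset, 2)
            if all(frozenset(p) not in edges for p in pairs):
                counts[r] = counts.get(r, 0) + 1
    mis_size = max(counts)
    d_mis = counts[mis_size]
    d_below = counts.get(mis_size - 1, 0)
    hp = d_below / (mis_size * d_mis)
    return mis_size, d_mis, d_below, hp

# Example on a tiny instance from the generator shown earlier:
print(hardness_parameter(unit_disk_instance(L=4, seed=1)))
```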

Algorithmic tool suite

In our work, we studied the MIS-UD problem described earlier using both exact and heuristic algorithms. Here we provide a brief overview of our tool suite – for more technical details we refer to Ref. [1].

  • Sweeping line algorithm (SLA): The SLA is a custom exact solver designed to exploit the quasi-planar structure of the UD graph. It is based on full enumeration and works by sweeping a fictitious line across the two-dimensional plane; when the nodes are processed in this order, dynamic programming can be used to track only those partial configurations that can still produce an MIS. The SLA is guaranteed to find the optimal solution after N steps, with a runtime O(N · φ^(√N)), where φ ≈ 1.62 is the golden ratio.
  • Branch and bound (B&B) solver: We complement our exact SLA solver with established commercial solvers based on the branch and bound search method. In particular, we used the solver offered by CPLEX; similar results were observed with Gurobi. In practice, these solvers are among the de facto go-to tools for many hard, mixed-integer optimization problems.
  • Simulated annealing (SA) solver: Our SA solver is a custom heuristic solver built to explore only the feasible space (that is, it is designed such that the independence constraint cannot be violated). It aims to find a high-quality solution by starting from a random initial solution at a high temperature and then gradually lowering the temperature (according to some annealing schedule) until no further improvement is seen. This allows the system to transition systematically from exploration to exploitation: the solver first explores the solution space while the temperature is high, and eventually drives the state into a nearby (local, potentially global) optimum once the temperature is low. A minimal sketch of this approach is shown after this list.
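The sketch below shows what such a feasible-space SA solver can look like in its simplest form; it is our minimal illustration, not the optimized implementation benchmarked in the paper. Moves either add a node (proposed only when independence is preserved, and always accepted since the objective improves) or remove one (accepted with the Metropolis rule), under a geometric cooling schedule.

```python
import math
import random

def sa_mis(G, steps=20000, t_start=2.0, t_end=0.01, seed=0):
    """Minimal feasible-space simulated annealing for MIS (illustration)."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    current, best = set(), set()
    for step in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        v = rng.choice(nodes)
        if v in current:
            # Removal shrinks the set by one; accept via the Metropolis rule.
            if rng.random() < math.exp(-1.0 / t):
                current.remove(v)
        elif all(u not in current for u in G[v]):
            # Addition keeps independence and improves the objective,
            # so it is always accepted.
            current.add(v)
        if len(current) > len(best):
            best = set(current)
    return best

print("SA independent set size:", len(sa_mis(G)))  # G from the first sketch
```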

Numerical experiments

We now turn to our numerical results. Here we will highlight just a few selected results – for more details we refer to Ref. [1].

TTS as function of system size. First, we study TTS as a function of system size (given by the number of nodes in the graph). Our results are displayed in Fig. 2. We find that typical quasi-planar instances with Union-Jack-like connectivity (as studied in Ref. [2]) can be solved to optimality for up to thousands of nodes within minutes, with both custom and generic commercial solvers on commodity hardware, and without any instance-specific fine-tuning. For most instances (up to the 98th percentile), we can upper-bound the TTS needed by the classical B&B and SA solvers with a runtime scaling of the form TTS = O(2^(aN)); we find a = 0.0045 and a = 0.0128 for our B&B and SA solvers, respectively. These results set an interesting, putative bar for quantum algorithms to beat. In addition, we observe a relatively large spread in TTS, spanning several orders of magnitude even for fixed system size; these significant instance-to-instance variations motivate a more detailed analysis of problem hardness, as discussed next.
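To get a feel for these fitted exponents, one can evaluate the scaling form TTS = O(2^(aN)) directly (ignoring the instance-dependent prefactor, which this sketch does not attempt to model):

```python
# Relative growth of the fitted upper bounds TTS = O(2^(a*N)),
# with the exponents a quoted above (prefactors ignored).
a_bnb, a_sa = 0.0045, 0.0128
for n in (100, 500, 1000, 2000):
    print(f"N = {n:4d}   B&B factor: {2**(a_bnb * n):12.1f}   SA factor: {2**(a_sa * n):16.1f}")
```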

Figure 2. TTS as a function of system size. (Left) B&B solver: Problems with hundreds (thousands) of nodes can be solved to optimality in subsecond (minute) timescales. The solid line is the linear regression over instances whose TTS are in the highest 2%. (Right) SA solver: Time required to reach 99% success probability for the heuristic SA solver as a function of system size (how long the solver should run for a 99% chance of finding the optimal solution). For every system size, 1000 random UD instances have been considered.


Hardness parameter. We now consider algorithmic performance in terms of the hardness parameter HP, which accounts for the degeneracy of both the ground state and the first excited states. Our results for both the exact SLA and the heuristic SA solvers are displayed in Fig. 3, and they show remarkably different behavior. The SA solver displays a strong dependence on the hardness parameter; conversely, virtually no dependence is observed for the exact SLA solver. This demonstrates that the conductance-like hardness parameter HP successfully captures hardness for algorithms undergoing Markov-chain dynamics, while alternative algorithmic paradigms (like the sweeping line approach) may require a different notion of hardness.

In particular, we find that for the SA solver the success probability P_MIS fits well to the functional form P_MIS = 1 − exp(−C · HP^(−α)), where C is a positive fitted constant and smaller values of α yield larger success rates. We find α ≈ 0.66 for our implementation of SA, competitive with α ≈ 0.63 as reported for the optimized quantum algorithm demonstrated in Ref. [2] (for smaller systems than studied here, and when restricting the analysis to graphs with minimum energy gaps large enough to be resolved within the duration of the noisy quantum evolution), and much better than the SA baseline in Ref. [2] with α ≈ 1.03. As such, the quantum speedup reported in Ref. [2] can be classified as a limited sequential quantum speedup, based on comparing a quantum-annealing-type algorithm with a particular implementation of classical SA. Our analysis points to a potential next milestone: the experimental demonstration of a (more general) limited non-tailored quantum speedup, obtained by comparing the performance of the quantum algorithm to the best known generic classical optimization algorithm.
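The functional form itself is straightforward to fit. The sketch below fits P_MIS = 1 − exp(−C · HP^(−α)) to synthetic, made-up data using scipy, purely to illustrate the procedure; it does not reproduce the data or fits from either paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def p_mis_model(hp, C, alpha):
    """Success-probability model P_MIS = 1 - exp(-C * HP^(-alpha))."""
    return 1.0 - np.exp(-C * hp ** (-alpha))

# Synthetic data for illustration only (true alpha = 0.66, plus noise).
rng = np.random.default_rng(0)
hp = np.logspace(0, 4, 40)
p_obs = p_mis_model(hp, 2.0, 0.66) + rng.normal(0.0, 0.01, hp.size)

(C_fit, alpha_fit), _ = curve_fit(p_mis_model, hp, p_obs, p0=(1.0, 0.5))
print(f"fitted C = {C_fit:.2f}, alpha = {alpha_fit:.2f}")
```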


Figure 3. Dependence on the hardness parameter HP, for different system sizes, from lattices with side length L = 13 and about 135 nodes up to lattices with L = 33 and about 870 nodes. (Left) Time-to-solution (TTS) for the exact SLA solver as a function of the hardness parameter HP. Virtually no dependence on HP is observed, showing that TTS is fully determined by the system size N ~ L^2. (Right) Conversely, for the Markov-chain-based SA solver, we observe a strong correlation between algorithmic performance and the hardness parameter HP. Here we plot −log(1 − P_MIS) for UD graphs selected from the top two percentiles of the hardness parameter for each system size. Power-law fits of the form ~HP^(−α) are used to extract scaling performance with graph hardness.

Tuning problem hardness. We now study hardness as we gradually change the topology of the problem instances. Specifically, we analyze TTS under two protocols: (i) systematically tuning the blockade radius, or (ii) randomly rewiring edges of the graph (a minimal sketch of the rewiring protocol is given below). While protocol (i) prepares UD graphs (with varying connectivity), protocol (ii) explicitly breaks the UD structure via random (potentially long-range) interactions, ultimately preparing structure-less random Erdős-Rényi (ER) graphs. The results of this analysis are shown in Fig. 4. We find that the TTS for the established B&B solver can be tuned systematically over several orders of magnitude. As such, these two protocols suggest a recipe for benchmarking quantum algorithms on instances orders of magnitude harder for established classical solvers than those previously studied, motivating interesting future experiments towards quantum advantage; in particular, our protocols help identify small but hard instances, as needed for thorough scaling analyses.
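For concreteness, here is a minimal sketch of rewiring protocol (ii), again assuming networkx: a chosen fraction of edges is removed and replaced by uniformly random non-edges, gradually destroying the unit-disk structure. The selection details are our own simplification of the protocol.

```python
import random

def rewire(G, fraction, seed=0):
    """Rewire a given fraction of edges to random non-edges (protocol ii)."""
    rng = random.Random(seed)
    H = G.copy()
    nodes = list(H.nodes)
    n_rewire = int(fraction * H.number_of_edges())
    for u, v in rng.sample(list(H.edges), n_rewire):
        H.remove_edge(u, v)
        # Replace it with an edge between a uniformly random unconnected pair.
        while True:
            a, b = rng.sample(nodes, 2)
            if not H.has_edge(a, b):
                H.add_edge(a, b)
                break
    return H

harder = rewire(G, fraction=0.2)  # 20% of edges rewired; G from the first sketch
```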

Figure 4. Hardness transition. (Left) Hardness transition as a function of the disk radius (in units of the lattice spacing), as given by the time-to-solution (TTS) for the B&B solver, shown here for systems with about 350 nodes, with 100 random seeds per radius. (Right) Hardness transition from unit-disk to random Erdős-Rényi (ER) graphs (denoted by the red shaded bands). Here TTS is given as a function of the fraction of edges rewired. Starting from Union-Jack-type UD graphs (left), edges are randomly selected and rewired, thereby gradually breaking the UD connectivity and ultimately generating random ER graphs (right). While the original UJ graphs can be solved to optimality in ~10^(−2) s, we observe TTS potentially orders of magnitude larger in both plots.


Conclusion and takeaways

Our work provides an in-depth look into the hardness of the maximum independent set problem on unit-disk graphs, the problem native to Rydberg atom arrays. Our results establish well-defined goal posts for quantum algorithms to beat. In particular, we have shown that the hardness parameter put forward in Ref. [2] captures problem hardness for a certain class of Markov chain Monte Carlo solvers, while virtually no dependence between time-to-solution and this parameter is observed for alternative solvers. Finally, we have identified protocols to systematically tune time-to-solution over several orders of magnitude, pinpointing problem instances orders of magnitude harder for established classical solvers than previously studied.

These results should help identify the most promising directions for applications of Rydberg devices and direct the community's ongoing efforts towards quantum advantage, hopefully inspiring many interesting future experiments that further explore the hardness of the MIS problem with Rydberg atom arrays.

The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this blog.

This blog post is for informational purposes only and is not intended as legal, tax, financial, investment, accounting or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings or quotations is not the responsibility of JPMorgan Chase & Co.

References

[1] R. S. Andrist, M. J. A. Schuetz, P. Minssen, R. Yalovetzky, S. Chakrabarti, D. Herman, N. Kumar, G. Salton, R. Shaydulin, Y. Sun, M. Pistoia, and H. G. Katzgraber, Hardness of the Maximum Independent Set Problem on Unit-Disk Graphs and Prospects for Quantum Speedups, Phys. Rev. Research 5, 043277 (2023); arXiv:2307.09442.

[2] S. Ebadi, A. Keesling, M. Cain, T. T. Wang, H. Levine, D. Bluvstein, G. Semeghini, A. Omran, J.-G. Liu, R. Samajdar, et al., Quantum optimization of Maximum Independent Set using Rydberg atom arrays, Science 376, 1209 (2022).

[3] J. Wurtz, P. L. S. Lopes, N. Gemelke, A. Keesling, and S. Wang, Industry applications of neutral-atom quantum computing solving Independent Set problems, arXiv:2205.08500 (2022).