Perspectives
Dec 10, 2023

Leveraging Atom's technology to improve constrained optimization algorithms

As Seen In: Atom Computing

Jonathan King, Co-Founder and Chief Scientist

Researchers at Atom Computing have formulated a new method for stabilizing optimization algorithms that could lead to more reliable results from early-stage quantum computers.

In a pre-print article posted on arXiv, the authors describe how the method, known as subspace correction, leverages key features of Atom Computing’s atomic array quantum computing technology, notably long qubit coherence times and mid-circuit measurement. The technique detects whether an algorithm is violating constraints (essentially going off track during computation) and then self-corrects.

Subspace correction could reduce the number of circuits developers need to run on a quantum computer to get the correct answer for optimization problems.

We sat down with Dr. Kelly Ann Pawlak and Dr. Jeffrey Epstein, Senior Quantum Applications Engineers at Atom Computing and the lead authors of the paper, to learn more.

Tell us more about this technique. What do you mean by “subspace correction?” How does it work?

Kelly: First let’s talk about quantum error correction, one of the most important topics in quantum computing. In quantum error correction, we use mid-circuit measurements to detect the state of the quantum system and identify whether errors, such as bit flips, have occurred during operation. If we detect an error, we can apply a correction using feed-forward capabilities and resume the calculation.
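
To make the detect-and-correct loop concrete, here is a minimal classical toy using the three-bit repetition code. This is an illustrative sketch only, not Atom Computing’s implementation: real QEC acts on quantum states, and the function names here are our own.

```python
# Toy version of the detect-and-correct loop for the 3-bit repetition code.
import random

def encode(bit):
    """Encode one logical bit redundantly across three physical bits."""
    return [bit, bit, bit]

def noisy_channel(bits, p_flip=0.1):
    """Flip each physical bit independently with probability p_flip."""
    return [b ^ (random.random() < p_flip) for b in bits]

def measure_syndrome(bits):
    """Parity checks on pairs (0,1) and (1,2); nonzero parity flags a flip."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def feed_forward_correct(bits, syndrome):
    """Map each syndrome to the single-bit fix it implies, then apply it."""
    fix = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    if syndrome in fix:
        bits[fix[syndrome]] ^= 1
    return bits

random.seed(0)
word = noisy_channel(encode(1))
word = feed_forward_correct(word, measure_syndrome(word))
print(word)  # recovers [1, 1, 1] whenever at most one bit flipped
```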

Subspace correction works the same way, except it isn’t detecting and correcting “operational errors” from electrical noise or thermal effects. Instead, it probes information relevant to solving a computational problem. In lieu of looking for bit-flip errors due to stray radiation, it can check whether a candidate solution to an industrial optimization problem obeys all the feasibility constraints. It can do so in the middle of a calculation without destroying the “quantumness” of the data.

In short, subspace correction leverages the methods behind quantum error correction. It is a way of encoding redundant checks into a quantum computation in order to solve high-value problems, not just the basic quantum control problems.
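
The same pattern, applied to a constraint instead of a noise syndrome, can be sketched in a small statevector simulation. Everything here (the two-qubit example, the one-edge “graph,” and the choice of correction gate) is our own illustrative choice, not the construction from the preprint.

```python
# Toy statevector sketch of a mid-circuit feasibility check with feed-forward.
# The feasible subspace is span{|00>, |01>, |10>}: "never select both ends
# of an edge" for a one-edge graph.
import numpy as np

rng = np.random.default_rng(seed=1)

psi = np.full(4, 0.5, dtype=complex)          # uniform over |00>..|11>
P_ok = np.diag([1, 1, 1, 0]).astype(complex)  # projector onto feasible states

def measure_feasibility(psi):
    """Projectively ask 'in the subspace?' without reading out the qubits."""
    p_ok = np.vdot(psi, P_ok @ psi).real
    if rng.random() < p_ok:
        return True, (P_ok @ psi) / np.sqrt(p_ok)
    bad = (np.eye(4) - P_ok) @ psi
    return False, bad / np.linalg.norm(bad)

# Feed-forward correction: X on the second qubit maps |11> back to |10>.
X2 = np.kron(np.eye(2), np.array([[0, 1], [1, 0]])).astype(complex)

ok, psi = measure_feasibility(psi)
if not ok:
    psi = X2 @ psi
print(np.round(np.abs(psi) ** 2, 3))  # all weight now inside the subspace
```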

Can you give us background on this technique? What inspired it?

Kelly: Atom Computing is pursuing quantum computer design practices that will enable us to achieve fault-tolerant operation as soon as possible. So our primary focus is the engineering pipeline for implementing quantum error correction, rather than general NISQ operation.

We challenged ourselves to answer the question: “Given our strengths and thoughtful engineering of a platform fully committed to running QEC functionality as soon as possible, are there any problems we can try to speed up on the way to early fault tolerance?” The answer is “Yes!”

It turns out that techniques from error correction are effective algorithmic tools for many problems. As mentioned, we have looked specifically at problems that have constraints, like constrained optimization and satisfaction-type problems in classical computing. You can find examples of these problems everywhere: in finance, manufacturing, logistics, and science.

Jeffrey: One of the things that’s interesting about quantum error correction is that it involves an interplay between classical and quantum information. By extracting partial information about the state of the quantum computer and using classical processing to determine appropriate corrections, you can protect the coherent quantum information stored in the remaining degrees of freedom of the processor. The starting point for these kinds of protocols is the choice of a particular subspace of the possible states of the quantum computer, which you take to be the “meaningful” states, the ones on which information is encoded. The whole machinery of quantum error correction is then about returning to this subspace when physical errors take you outside of it.

Many optimization problems naturally have this structure of “good” states because the problems are defined in terms of two pieces: a constraint and a cost function. The “good” states satisfy the constraint, and the answer should be chosen to minimize the cost function within that set of states. So we would like a method to ensure that our optimization algorithm does in fact return answers that correspond to constraint-satisfying states. As in quantum error correction, we’d like to maintain coherence within this subspace to take advantage of any speedup that may be available for optimization problems. A possible difference is that in some cases the method may still find good solutions even if the correction strategy returns the computer to a different point in the good subspace than where it was when an error occurred, which in error correction would correspond to a logical error.
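
For concreteness, the two pieces can be written out explicitly for maximum independent set, one constrained problem of this type; the 4-cycle graph below is a toy instance of our own.

```python
# The constraint / cost-function split for maximum independent set.
# A bitstring selects vertices; the graph (a 4-cycle) is a toy instance.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def satisfies_constraint(selection, edges):
    """'Good' states: no edge has both of its endpoints selected."""
    return all(not (selection[u] and selection[v]) for u, v in edges)

def cost(selection):
    """Minimizing the negative set size means maximizing vertices selected."""
    return -sum(selection)

# Brute force: restrict to the feasible subspace, then minimize the cost.
feasible = [s for s in product([0, 1], repeat=4) if satisfies_constraint(s, edges)]
print(min(feasible, key=cost))  # one maximum independent set, e.g. (0, 1, 0, 1)
```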

Why is this technique important? How will it help quantum algorithm developers?

Kelly: Right now, it is really difficult to ensure an algorithm obeys the constraints of a problem when run on a gate-based quantum computer. Typically, you run a calculation several times, toss out bad results, and identify a solution. With subspace correction, you can check mid-circuit that the calculation is obeying the constraints and, if it is not, correct it. We think this approach will reduce the number of circuit executions that need to be run on early-stage quantum computers, saving a lot of computational time and overhead.
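
A back-of-envelope way to see the overhead: with pure postselection, you expect roughly 1/p circuit executions per feasible sample, where p is the probability that a raw run satisfies the constraints. The numbers below are purely illustrative.

```python
for p_feasible in (0.5, 0.1, 0.01):
    # Expected shots per feasible sample under postselection is ~1/p;
    # in-circuit correction aims to push p toward 1 instead.
    print(f"p = {p_feasible:>4}: ~{1 / p_feasible:.0f} shots per feasible sample")
```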

Were you able to simulate or test the technique on optimization problems? If so, what were the results?

Jeffrey: One of the things we show in the paper is that the state preparation protocol we describe for the independent set problem has the same statistics as a classical sampling algorithm. This makes it possible to simulate its performance on relatively large graphs, although it also directly implies that the method cannot provide a sampling speedup over classical methods. Our hope is that by combining this method with optimization techniques such as Grover search, there might be speedups for some classes of graphs. We’re planning to investigate this possibility in more detail, both theoretically and with larger-scale simulations in collaboration with Lawrence Berkeley National Laboratory.
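
For intuition on what a classical sampler of independent sets can look like, here is a simple random-greedy sampler. It is our own stand-in for flavor only; the preprint’s actual protocol and its precise statistics are more specific than this.

```python
# Random-greedy sampler of independent sets: a classical stand-in showing
# what "sampling with classically reproducible statistics" can look like.
import random

def sample_independent_set(n, edges, rng):
    """Visit vertices in random order, keeping each one that causes no conflict."""
    neighbors = {v: set() for v in range(n)}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    selected = set()
    for v in rng.sample(range(n), n):          # a uniformly random vertex order
        if neighbors[v].isdisjoint(selected):  # constraint check
            selected.add(v)
    return selected

rng = random.Random(7)
print(sample_independent_set(4, [(0, 1), (1, 2), (2, 3), (3, 0)], rng))
```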

Can subspace correction be applied to other problems?

Kelly: Certainly, yes. Constraints are just one kind of “subspace” we can correct. We have a lot of ideas about how to apply this method to improve quantum simulation algorithms. When running a quantum simulation algorithm, you can detect when the simulation goes off course, for example by violating energy conservation, and try to fix it. We would also like to explore using this method to prepare classical data in a quantum computer.
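
As a toy illustration of the energy-conservation check (ours, not the paper’s construction): ideal evolution under a Hamiltonian H conserves the energy expectation, so a drift in the measured energy flags that a simulation has gone off course.

```python
# Energy conservation as the "subspace" check in a simulation algorithm.
# Ideal evolution conserves <H>; an injected error makes the energy drift.
import numpy as np

H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)  # toy Hamiltonian
X = np.array([[0, 1], [1, 0]], dtype=complex)           # the injected error

evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * 0.1)) @ evecs.conj().T  # exp(-iH dt)

def energy(psi):
    return np.vdot(psi, H @ psi).real

psi = np.array([1.0, 0.0], dtype=complex)
e0 = energy(psi)
for step in range(50):
    psi = U @ psi
    if step == 25:
        psi = X @ psi  # a stray bit flip knocks the simulation off course
    if abs(energy(psi) - e0) > 1e-6:  # mid-run check: is energy conserved?
        print(f"energy drift at step {step}: time to correct")
        break
```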

Can this method be used with other types of quantum computers?

Kelly: It could! But for some quantum computing architectures, it would be nearly impossible to run parts of this algorithm prior to full fault tolerance because of the long coherence times required. Even in early fault tolerance, where some architectures are racing against the clock to do quantum error correction, it would be very difficult.

Jeffrey: Everything in the method we studied lives within the framework of universal gate-based quantum computing, so our analysis doesn’t actually depend on the specific hardware Atom Computing or any other quantum computing company is developing: it could be implemented on any platform that supports mid-circuit measurement and feedback. But performance will depend a lot on the speed of classical computation relative to circuit times and coherence times, and our neutral-atom device with long-lived qubits gives us a clear advantage.

Figure: A demonstration of how the distribution-generating primitive works on a graph using subspace correction (SSC).

Figure: An overview of the paradigm we use for algorithm development; it follows the same pattern as error correction.