This spring, we partnered with Quantum Coalition to offer a challenge at QRISE 2024. This six-week-long event aimed to connect students with quantum computing industry research challenges and help them get started doing research projects of their own.
The challenge we offered to the participants focused on resource estimation of quantum algorithms. Resource estimation helps us answer the question “How many physical qubits and how much time is necessary to execute a quantum algorithm under specific assumptions about the hardware platform used?” Getting these kinds of estimates serves multiple purposes:
- It allows us to deduce the conditions that quantum hardware needs to meet to offer practical quantum advantage.
- It helps us clarify which algorithms truly offer quantum advantage over their classical counterparts and which do not, and, for those that do, which problem instances realize that advantage.
- It allows us to compare the efficiency of different algorithms that solve the same problem long before they become viable to run on quantum machines, thus enabling work on improving quantum algorithms.
The goal of the challenge was to implement a quantum algorithm of the participants’ choice, and to obtain and analyze estimates of the resources required to run it on future fault-tolerant quantum computers using the Microsoft Azure Quantum Resource Estimator. This is exactly the kind of question quantum algorithms researchers work on!
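For readers who want to try this workflow, here is a minimal, hedged sketch of how a resource estimate can be obtained locally with the qsharp Python package. The tiny Demo operation, the chosen parameter values, and the printed result fields are illustrative assumptions based on the estimator’s documented schema, not part of any team’s submission.

```python
# Minimal sketch: invoking the Azure Quantum Resource Estimator locally
# through the `qsharp` Python package (pip install qsharp).
import qsharp

qsharp.init()  # start a Q# interpreter session

# A toy stand-in for a real algorithm; any Q# operation can be estimated.
qsharp.eval("""
operation Demo() : Unit {
    use qs = Qubit[2];
    H(qs[0]);
    CNOT(qs[0], qs[1]);
    T(qs[1]);        // include a T gate so the estimate is non-trivial
    ResetAll(qs);
}
""")

# Estimate physical resources under one set of hardware assumptions.
# Parameter names follow the estimator's documented schema.
result = qsharp.estimate(
    "Demo()",
    params={
        "qubitParams": {"name": "qubit_gate_ns_e3"},
        "qecScheme": {"name": "surface_code"},
        "errorBudget": 0.001,
    },
)

# Two headline numbers from the result (field names per the output schema):
print(result["physicalCounts"]["physicalQubits"])
print(result["physicalCounts"]["runtime"])  # runtime in nanoseconds
```

Changing qubitParams and qecScheme is how different hardware assumptions are explored, which is exactly what the winning teams did in their projects below.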
Let’s meet the winning teams and learn about their projects in their own words!
Team Qu-Cats
Katie Harrison | Muhammad Waqar Amin | Nikhil Londhe | Sarah Dweik
The quantum approximate optimization algorithm (QAOA) is a quantum algorithm used to solve optimization problems. However, QAOA can only solve an optimization problem if it can be formulated as a quadratic unconstrained binary optimization (QUBO) problem. In this project, we chose to solve the Number Partitioning Problem (NPP) using QAOA. NPP involves splitting a given set of numbers into two partitions so that the difference between the sums of the two partitions is as small as possible. This problem has applications in various fields, including cryptography, task scheduling, and VLSI design. It is also recognized for its computational difficulty, often being described as the Easiest Hard Problem. In this project, we accomplished two primary objectives. First, we determined the optimal QPU configuration for running QAOA. Second, we analyzed how the resource estimates change as the input size is scaled up.
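To make the QUBO mapping concrete, here is a small classical sketch (an illustration for this post, not the team’s implementation). With a binary variable x_i marking which partition the number a_i goes to, the difference between the two partition sums is S - 2 * sum_i(a_i * x_i), where S is the total sum; minimizing its square is, up to the constant S^2, the same as minimizing x^T Q x for the QUBO matrix constructed below.

```python
# Illustrative sketch of the NPP -> QUBO mapping (not the team's code).
import itertools
import numpy as np

def npp_qubo(numbers):
    """QUBO matrix Q for number partitioning: x^T Q x + S^2 equals the
    squared difference between the two partition sums."""
    a = np.asarray(numbers, dtype=float)
    total = a.sum()
    q = 4.0 * np.outer(a, a)                    # off-diagonal: 4 * a_i * a_j
    np.fill_diagonal(q, 4.0 * a * (a - total))  # diagonal: 4 * a_i * (a_i - S)
    return q

def partition_difference(numbers, x):
    """|sum of one subset - sum of the other| for an assignment x in {0,1}^n."""
    a = np.asarray(numbers, dtype=float)
    return abs(a.sum() - 2.0 * (a * np.asarray(x, dtype=float)).sum())

numbers = [3, 1, 4, 2, 2]
q = npp_qubo(numbers)
total = sum(numbers)

# Brute-force check over all 2^n assignments: the QUBO cost (plus the
# constant S^2) reproduces the squared partition difference exactly.
for bits in itertools.product([0, 1], repeat=len(numbers)):
    x = np.array(bits, dtype=float)
    assert np.isclose(x @ q @ x + total**2, partition_difference(numbers, bits) ** 2)

best = min(itertools.product([0, 1], repeat=len(numbers)),
           key=lambda bits: partition_difference(numbers, bits))
print(best, partition_difference(numbers, best))  # a perfect split has difference 0
```

QAOA then searches for low-cost bit strings of exactly this cost function, using one qubit per element of the set.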
To determine the best setup for the quantum processing unit (QPU), we evaluated resources for eight different hardware setups, tracking variables like physical qubits, the fraction of qubits used by T-factories, and runtime, among others. The table below details results for the eight different configurations.
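Building on the earlier sketch, such a sweep can be expressed as a simple loop over the estimator’s predefined qubit parameters and QEC schemes. This is a hedged illustration rather than the team’s code; QaoaNpp() is a hypothetical entry point standing in for their QAOA program, and the team’s exact eight configurations may differ.

```python
# Hedged sketch: compare resource estimates across the eight standard hardware
# profiles -- the six predefined qubit parameter sets, with the two Majorana
# sets additionally paired with the floquet code. "QaoaNpp()" is a placeholder
# entry point for a Q# program already loaded via qsharp.eval or a Q# project.
import qsharp

configs = [
    ("qubit_gate_ns_e3", "surface_code"),
    ("qubit_gate_ns_e4", "surface_code"),
    ("qubit_gate_us_e3", "surface_code"),
    ("qubit_gate_us_e4", "surface_code"),
    ("qubit_maj_ns_e4", "surface_code"),
    ("qubit_maj_ns_e4", "floquet_code"),
    ("qubit_maj_ns_e6", "surface_code"),
    ("qubit_maj_ns_e6", "floquet_code"),
]

for qubit, qec in configs:
    result = qsharp.estimate(
        "QaoaNpp()",  # hypothetical entry expression; substitute your own
        params={"qubitParams": {"name": qubit}, "qecScheme": {"name": qec}},
    )
    counts = result["physicalCounts"]  # field names per the estimator's output schema
    print(qubit, qec, counts["physicalQubits"], counts["runtime"])
```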
In addition, we analyzed the resource estimates across a range of input sizes. The plot below shows a segment of this analysis, primarily illustrating how the number of physical qubits varies with increasing input size.
Besides that, we plotted other quantities, such as the number of algorithm qubits, the number of partitions (in NPP), and the number of T-factory qubits. All of these increase as the input size increases. This is expected, because the QUBO cost function requires one binary variable, and hence one algorithm qubit, for every element in the set. The number of partitions represents the scale of the brute-force approach for a given input size. Interestingly, for inputs of up to 12 elements the number of physical qubits exceeds the number of partitions, indicating that QAOA is at a severe disadvantage compared to the brute-force approach in this regime. However, as the number of elements grows beyond 12, the number of physical qubits grows far more slowly than the number of partitions.
Team Exponential
Niraj Venkat
Integer factorization is a well-studied problem in computer science and the core hardness assumption behind the widely used RSA cryptosystem. It belongs to a larger framework, the hidden subgroup problem, which also includes the discrete logarithm problem, graph isomorphism, and the shortest vector problem. The best classical algorithms known today, such as the number field sieve, factor integers in subexponential time. Shor’s algorithm is the famous result that kicked off the search for practical quantum advantage: it showed that a sufficiently large, fault-tolerant quantum computer can factor integers in polynomial time. Recently, Regev published an algorithm whose quantum circuits are asymptotically smaller than Shor’s, a polynomial improvement in circuit size. Regev’s result leverages a connection between factoring and the shortest vector problem on lattices; an improvement of this kind over Shor’s circuit had remained elusive for more than two decades.
This project provides resource estimates for different variants of Regev’s quantum circuit, by comparing state preparation routines and evaluating recent optimizations to quantum modular exponentiation. In scope for future work is the classical post-processing of the samples from the quantum circuit (more below).
The initial step of Regev’s quantum circuit prepares the control qubits in a Gaussian superposition state. For n qubits, this is achieved by discretizing the domain of the Gaussian (normal) probability distribution into 2^n equally spaced regions and encoding the square root of each region’s probability as the amplitude of the corresponding basis state. For example, here is a visualization of repeated sampling of a Gaussian state over n = 4 qubits, plotted using the Q# Histogram:
As we add more shots, the histogram gradually adopts the shape of a bell curve. Such a visual test can be useful during development, especially when running on actual quantum hardware where the quantum state is not available for introspection. This project explores three different algorithms for Gaussian state preparation:
- The Q# library operation PreparePureStateD
- Arbitrary state preparation by Möttönen et al., similar to the above, where the amplitudes for each basis state are specified
- Grover-Rudolph state preparation, which is designed specifically for probability distributions like the Gaussian and does not require amplitudes as input

In the resource estimation of the overall quantum circuit, we use the fastest method of the three listed here, PreparePureStateD, to initialize the Gaussian state.
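As a purely classical illustration of the discretization described above (not the project’s code), the target amplitudes for n qubits can be computed by slicing a truncated Gaussian into 2^n equal-width bins and taking the square root of each bin’s probability mass; a list like this is what an amplitude-based routine such as PreparePureStateD would be given.

```python
# Classical sketch of the Gaussian discretization described above: split a
# truncated normal distribution into 2^n equal-width bins, take each bin's
# probability mass, and use its square root as the amplitude of the
# corresponding n-qubit basis state.
import math

def gaussian_amplitudes(n, mu=0.0, sigma=1.0, cutoff=4.0):
    """Amplitudes of a discretized Gaussian over 2^n basis states."""
    num_bins = 2 ** n
    lo, hi = mu - cutoff * sigma, mu + cutoff * sigma
    edges = [lo + (hi - lo) * k / num_bins for k in range(num_bins + 1)]

    def cdf(x):  # normal CDF expressed via the error function
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    probs = [cdf(edges[k + 1]) - cdf(edges[k]) for k in range(num_bins)]
    norm = sum(probs)  # renormalize away the truncated tail mass
    return [math.sqrt(p / norm) for p in probs]

amps = gaussian_amplitudes(4)               # 16 amplitudes for n = 4 qubits
print(sum(a * a for a in amps))             # ~1.0: the state is normalized
print(amps.index(max(amps)), len(amps))     # the peak sits near the middle bin
```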
The next step of Regev’s quantum circuit is modular exponentiation with small prime bases. This project implements two different algorithms (a classical sketch of both follows the list):
- Binary exponentiation used in Regev’s original paper
- Fibonacci exponentiation with the Zeckendorf representation of integers, using a fast algorithm for Fibonacci number calculation
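To convey the structural difference between the two approaches, here is a classical sketch (an illustration for this post, not the project’s quantum implementation; the function names are ours). Binary exponentiation squares a running register at every bit of the exponent, whereas Fibonacci exponentiation walks the Zeckendorf representation of the exponent and only ever multiplies two previously computed powers, a property that makes it attractive for reversible quantum arithmetic.

```python
# Classical sketch (not the project's quantum circuits) contrasting the two
# exponentiation strategies for computing a^e mod m.

def binary_pow(a, e, m):
    """Square-and-multiply: the base is squared at every bit of the exponent."""
    result, a = 1, a % m
    while e > 0:
        if e & 1:
            result = (result * a) % m
        a = (a * a) % m   # squaring step
        e >>= 1
    return result

def zeckendorf(e):
    """Greedy Zeckendorf decomposition: e as a sum of non-consecutive
    Fibonacci numbers, e.g. zeckendorf(100) == [89, 8, 3]."""
    fibs = [1, 2]
    while fibs[-1] < e:
        fibs.append(fibs[-1] + fibs[-2])
    terms, remaining = [], e
    for f in reversed(fibs):
        if f <= remaining:
            terms.append(f)
            remaining -= f
    return terms

def fibonacci_pow(a, e, m):
    """Fibonacci exponentiation: a^F_k is built as a^F_(k-1) * a^F_(k-2),
    so only multiplications of previously computed powers are needed."""
    terms = set(zeckendorf(e))
    fib_prev, fib_curr = 1, 2
    pow_prev, pow_curr = a % m, (a * a) % m
    result = pow_prev if 1 in terms else 1
    if 2 in terms:
        result = (result * pow_curr) % m
    while fib_curr < e:
        fib_prev, fib_curr = fib_curr, fib_prev + fib_curr
        pow_prev, pow_curr = pow_curr, (pow_prev * pow_curr) % m
        if fib_curr in terms:
            result = (result * pow_curr) % m
    return result

# Self-check against Python's built-in modular exponentiation, using the same
# modulus N = 143 that appears in the resource estimates below.
for e in range(1, 200):
    assert binary_pow(7, e, 143) == fibonacci_pow(7, e, 143) == pow(7, e, 143)
print("both methods agree")
```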
Regev’s algorithm uses the quantum computer to sample a multidimensional lattice. In terms of complexity analysis, Gaussian states have properties that work well on such lattices. However, it is unclear whether a Gaussian state is actually required in practice. For this reason, our test matrix looks like this:
| Control register state preparation | Fibonacci modular exponentiation | Binary modular exponentiation |
| --- | --- | --- |
| Uniform superposition | Fibonacci exponentiation with uniform superposition | Binary exponentiation with uniform superposition |
| Gaussian superposition | Fibonacci exponentiation with Gaussian superposition | Binary exponentiation with Gaussian superposition |
Here are the resource estimation results for different variants of the factoring circuit for N = 143:
The overall winner is Fibonacci exponentiation with a uniform distribution over the control qubits. In this analysis, the size of the control register is fixed at 20 logical qubits for all four profiles being tested. Preparing a uniform superposition is just a layer of Hadamard gates, which is the same for all problem sizes N. This is clearly advantageous over Gaussian state preparation, where the radius of the Gaussian state required increases exponentially with N.
This project is focused on quantum resource estimation, and for these purposes the classical post-processing of the samples from the quantum circuit is not required. However, this is required for a complete implementation of Regev’s algorithm. Current work includes investigation of lattice reduction techniques, followed by filtering of corrupted samples and fast classical multiplication in order to compute a prime factor. Other state preparation algorithms in the literature – including ones specific to Gaussian states – may also prove beneficial by reducing the gate complexity and number of samples required from the quantum circuit.