The post Resource Estimation Challenge at QRISE 2024: Recap appeared first on Q# Blog.
The challenge we offered to the participants focused on resource estimation of quantum algorithms. Resource estimation helps us answer the question “How many physical qubits and how much time is necessary to execute a quantum algorithm under specific assumptions about the hardware platform used?” Getting these kinds of estimates serves multiple purposes.
The goal of the challenge was to implement a quantum algorithm of the participants’ choice and to obtain and analyze estimates of the resources required to run it on future fault-tolerant quantum computers using the Microsoft Azure Quantum Resource Estimator. This is exactly the kind of question quantum algorithms researchers work on!
Let’s meet the winning teams and learn about their projects in their own words!
Team members: Katie Harrison, Muhammad Waqar Amin, Nikhil Londhe, and Sarah Dweik
The quantum approximate optimization algorithm (QAOA) is a quantum algorithm used to solve optimization problems. However, QAOA can only solve optimization problems that can be formulated as quadratic unconstrained binary optimization (QUBO) problems. In this project, we chose to solve the Number Partitioning Problem (NPP) using QAOA. NPP involves splitting a given set of numbers into two distinct partitions such that the difference between the sums of the two partitions is minimized. This problem has applications in various fields, including cryptography, task scheduling, and VLSI design. It is also recognized for its computational difficulty, often described as the “Easiest Hard Problem.” In this project, we accomplished two primary objectives. First, we determined the optimal QPU configuration for running QAOA. Then, we analyzed how the resource estimates scale with the input size.
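The QUBO encoding the team describes can be sketched classically, and a brute-force minimizer over the QUBO cost doubles as a correctness check for small sets (the function names and structure here are illustrative, not the team's code):

```python
from itertools import product

def npp_qubo(a):
    # Cost(x) = (sum_i a_i * (2*x_i - 1))^2 with x_i in {0, 1};
    # expanding (and using x_i^2 = x_i) gives x^T Q x + offset.
    S = sum(a)
    n = len(a)
    Q = [[0] * n for _ in range(n)]
    for i in range(n):
        Q[i][i] = 4 * a[i] * a[i] - 4 * S * a[i]
        for j in range(i + 1, n):
            Q[i][j] = 8 * a[i] * a[j]
    return Q, S * S  # upper-triangular QUBO matrix and constant offset

def qubo_cost(Q, offset, x):
    n = len(x)
    return offset + sum(Q[i][j] * x[i] * x[j]
                        for i in range(n) for j in range(i, n))

def best_partition(a):
    # Brute force over all 2^n assignments; the minimum cost equals the
    # squared difference between the two partition sums.
    Q, offset = npp_qubo(a)
    best = min(product([0, 1], repeat=len(a)),
               key=lambda x: qubo_cost(Q, offset, x))
    return qubo_cost(Q, offset, best), best

# {4, 5, 6, 7, 8} splits perfectly: {4, 5, 6} vs {7, 8}, difference 0.
diff_sq, assignment = best_partition([4, 5, 6, 7, 8])
```

One binary variable per element of the set, as noted later in the analysis; QAOA would explore the same cost landscape in superposition.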
To determine the best setup for the quantum processing unit (QPU), we evaluated resources for eight different hardware setups, tracking variables like physical qubits, the fraction of qubits used by T factories, and runtime, among others. The table below details results for the eight configurations.
In addition, we conducted an analysis of resource estimates across a range of input variables. The plot below represents a segment of the analysis, primarily illustrating how the number of physical qubits varies with increasing input size.
Besides that, we plotted other variables, such as algorithm qubits, partitions (in NPP), and T factory qubits. All of these variables increase as the input size increases. This is expected, because the QUBO cost function requires one bit for every element in the set. We also plotted the number of partitions, which represents the scale of the problem for a particular input size. Interestingly, up to 12 elements the number of partitions is higher than the number of physical qubits. This indicates that QAOA is at a severe disadvantage compared to the brute-force approach. However, as the number of elements increases beyond 12, the growth in the number of physical qubits slows down.
Niraj Venkat 
Integer factorization is a well-studied problem in computer science that is the core hardness assumption for the widely used RSA cryptosystem. It is part of a larger framework called the hidden subgroup problem, which includes the discrete logarithm, graph isomorphism, and shortest vector problems. State-of-the-art classical algorithms, such as the number field sieve, can perform factorization in sub-exponential time. Shor’s algorithm is a famous result that kicked off the search for practical quantum advantage: it showed that a sufficiently large, fault-tolerant quantum computer can factor integers in polynomial time. Recently, Regev published an algorithm that provides a polynomial speedup over Shor’s, without the need for fault-tolerance. Regev’s result leverages an isomorphism between factoring and the shortest vector problem on lattices, which had remained elusive for more than two decades.
This project provides resource estimates for different variants of Regev’s quantum circuit, by comparing state preparation routines and evaluating recent optimizations to quantum modular exponentiation. In scope for future work is the classical postprocessing of the samples from the quantum circuit (more below).
The initial step of Regev’s quantum circuit prepares control qubits in a Gaussian superposition state. For n qubits, this is achieved by discretizing the domain of the Gaussian (normal) probability distribution into 2^{n} equally spaced regions and encoding those cumulative probabilities as amplitudes of the quantum state. For example, here is a visualization of successive sampling of a Gaussian state over n = 4 qubits, plotted using the Q# Histogram:
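Here is a classical sketch of that preparation step: discretize the Gaussian over the 2^n basis states and use square roots of the normalized probabilities as amplitudes. The centre and width chosen below are illustrative, not values fixed by Regev's construction:

```python
import math

def gaussian_amplitudes(n, sigma=None):
    # Discretize a Gaussian over the 2^n basis states of an n-qubit
    # register; amplitudes are square roots of the normalized
    # probabilities, so measurement histograms converge to a bell curve.
    dim = 2 ** n
    sigma = sigma if sigma is not None else dim / 6  # illustrative width
    mu = (dim - 1) / 2                               # centre of the register
    probs = [math.exp(-((k - mu) ** 2) / (2 * sigma ** 2)) for k in range(dim)]
    total = sum(probs)
    return [math.sqrt(p / total) for p in probs]

amps = gaussian_amplitudes(4)   # n = 4, as in the histogram described above
probs = [a * a for a in amps]   # the expected measurement distribution
```

Plotting `probs` gives the bell shape that the shot histogram approaches as the number of samples grows.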
As we add more shots, the histogram gradually adopts the shape of a bell curve. Such a visual test can be useful during development, especially when running on actual quantum hardware where the quantum state is not available for introspection. This project explores three different algorithms for Gaussian state preparation:
PreparePureStateD
In the resource estimation of the overall quantum circuit, we use the fastest method of the three listed here, namely PreparePureStateD, to initialize the Gaussian state.
The next step of Regev’s quantum circuit is modular exponentiation on small primes. This project implements two different algorithms:
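Classically, the two flavors, binary and Fibonacci exponentiation, look like this. The quantum versions act on superpositions of exponents, so this sketch (not the project's implementation) only shows the arithmetic pattern: binary exponentiation needs repeated squarings, while the Fibonacci scheme only ever multiplies the two previously computed powers:

```python
def binary_modexp(a, e, N):
    # Square-and-multiply: one squaring per bit of the exponent.
    result, base = 1, a % N
    while e:
        if e & 1:
            result = result * base % N
        base = base * base % N
        e >>= 1
    return result

def fibonacci_modexp(a, k, N):
    # Compute a^F(k) mod N with multiplications only:
    # a^F(k) = a^F(k-1) * a^F(k-2), with F(1) = F(2) = 1.
    x, y = a % N, a % N
    for _ in range(k - 2):
        x, y = y, x * y % N
    return y if k >= 2 else x

# F(10) = 55, so both routes agree on 3^55 mod 143:
same = fibonacci_modexp(3, 10, 143) == binary_modexp(3, 55, 143)
```

N = 143 is the same modulus used for the resource estimates reported below.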
Regev’s algorithm uses the quantum computer to sample a multidimensional lattice. In terms of complexity analysis, Gaussian states have properties that work well on such lattices. However, it is unclear whether a Gaussian state is actually required in practice. For this reason, our test matrix looks like this:
Fibonacci exponentiation with uniform superposition
Binary exponentiation with uniform superposition
Fibonacci exponentiation with Gaussian superposition
Binary exponentiation with Gaussian superposition
Here are the resource estimation results for different variants of the factoring circuit for N = 143:
The overall winner is Fibonacci exponentiation with a uniform distribution over the control qubits. In this analysis, the size of the control register is fixed at 20 logical qubits for all four profiles tested. Preparing a uniform superposition is just a layer of Hadamard gates, which is the same for all problem sizes N. This is clearly advantageous over Gaussian state preparation, where the required radius of the Gaussian state increases exponentially with N.
This project is focused on quantum resource estimation, and for these purposes the classical postprocessing of the samples from the quantum circuit is not required. However, this is required for a complete implementation of Regev’s algorithm. Current work includes investigation of lattice reduction techniques, followed by filtering of corrupted samples and fast classical multiplication in order to compute a prime factor. Other state preparation algorithms in the literature – including ones specific to Gaussian states – may also prove beneficial by reducing the gate complexity and number of samples required from the quantum circuit.
The post Integrated Hybrid Support in the Azure Quantum Development Kit appeared first on Q# Blog.
Last year, we released Azure Quantum’s Integrated Hybrid feature, enabling users to develop hybrid quantum programs using Q# and the QDK. Since then, we have modernized the QDK, but its initial release did not support this feature. After months of dedicated development, we are excited to announce that the QDK once again supports implementing hybrid quantum programs!
Not only have we added support for these advanced capabilities, but we have also made significant improvements to the development experience. Users now have:
Hybrid quantum computing refers to the process and architecture of a classical computer and a quantum computer working together to solve a problem. Integrated hybrid quantum computing is a specific kind of architecture that allows classical computations to be performed while qubits are coherent. This capability, in combination with mid-circuit measurement, enables features like branching based on measurement results and real-time integer computations. These features represent a step forward in the use of high-level programming constructs in quantum applications, opening the door to a new generation of hybrid algorithms such as adaptive phase estimation, repeat-until-success, and some quantum error correction schemes.
In its most basic form, integrated hybrid quantum computing enables you to perform different operations based on the results of a qubit measurement. For example, the following code snippet conditionally applies an X operation to one qubit if the result of measuring another qubit is One:
namespace MyQuantumHybridProgram {
    @EntryPoint()
    operation Main() : Result {
        use qs = Qubit[2];
        H(qs[0]);
        if MResetZ(qs[0]) == One {
            X(qs[1]);
        }
        return MResetZ(qs[1]);
    }
}
Conditionally applying quantum gates based on measurement results is a feature that can be used for error correction: you can imagine performing a syndrome measurement and, based on its outcome, applying the appropriate corrections.
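As a classical sketch of that idea, here is the decoding step of a three-qubit bit-flip repetition code: two parity checks, standing in for Z0Z1 and Z1Z2 syndrome measurements, pinpoint which qubit, if any, to flip (illustrative code, not a QDK sample):

```python
def correct_bit_flip(bits):
    # Classical stand-in for the quantum syndrome: the two parities
    # identify a single flipped qubit without revealing the logical value.
    q = list(bits)
    s1 = q[0] ^ q[1]  # parity of qubits 0 and 1 (Z0Z1 syndrome)
    s2 = q[1] ^ q[2]  # parity of qubits 1 and 2 (Z1Z2 syndrome)
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
    if flip is not None:
        q[flip] ^= 1  # the conditional X from the paragraph above
    return q
```

In the quantum version, the branch on `(s1, s2)` runs while the data qubits remain coherent, which is exactly what integrated hybrid support enables.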
You can also use other familiar Q# constructs such as loops, and even integer computations that are performed while qubits are coherent. For example, the following program keeps track of how many times a measurement resulted in One and returns a Bool representing whether the count is an even number. The program also takes advantage of another hybrid quantum computing feature, qubit reuse, which allows us to use just one qubit instead of the five that would otherwise be required. Note that all of this is handled automatically by the Q# compiler.
namespace MyQuantumHybridProgram {
    @EntryPoint()
    operation Main() : Bool {
        use q = Qubit();
        mutable count = 0;
        let limit = 5;
        for _ in 1..limit {
            // Here we take advantage of an integrated
            // hybrid capability, qubit reuse, so we
            // can repeat this logic many times without
            // having to use a different qubit each time.
            H(q);
            if MResetZ(q) == One {
                set count += 1;
            }
        }
        return count % 2 == 0;
    }
}
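Since H followed by a computational-basis measurement behaves like a fair coin, the program's output statistics can be modeled classically. With five fair flips, the count of One results is even with probability exactly 1/2, which a quick simulation confirms (the seed below is arbitrary):

```python
import random

def parity_program(shots, limit=5, rng=None):
    # Classical model of the Q# program above: each H-then-measure on the
    # reused qubit is a fair coin flip; return the fraction of shots in
    # which the number of One results is even.
    rng = rng or random.Random(2024)
    even = 0
    for _ in range(shots):
        count = sum(rng.randint(0, 1) for _ in range(limit))
        even += (count % 2 == 0)
    return even / shots

freq = parity_program(100_000)  # converges to 0.5
```

This kind of classical reference model is a handy sanity check when you later run the real program against a simulator or hardware.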
The ability to perform different computations, either classical or quantum, opens the door to the development of new innovative algorithms that are inherently hybrid.
You can run hybrid quantum programs both from Visual Studio Code and from Python. In both cases, when working with a Q# program, select QIR Adaptive RI as the Q# target profile. This enables the QDK to provide accurate design-time feedback. Diving into the details of the QIR Adaptive RI profile:
Currently, Quantinuum is the only provider in Azure Quantum that supports integrated hybrid quantum computing, so make sure you submit your programs to their targets.
Once you have set up the Q# target profile, the QDK provides design-time feedback about Q# patterns that are not supported by the chosen quantum target.
Let’s look at an example of the kind of feedback the QDK provides. Consider the following code snippet:
namespace MyHybridQuantumProgram {
    @EntryPoint()
    operation Main() : Int {
        use q = Qubit();
        H(q);
        let result = MResetZ(q);
        // We use the measurement result to determine
        // the value of variables of different types.
        // We refer to these variables and values as dynamic.
        // Dynamic Bool and Int values are supported by the
        // QIR Adaptive RI profile.
        let dynamicBool = result == One ? true | false;
        let dynamicInt = result == Zero ? 0 | 1;
        // Dynamic Double values are not supported by the
        // QIR Adaptive RI profile, so the following line
        // will result in a compilation error.
        let dynamicDouble = result == Zero ? 0. | 1.;
        // The QIR Adaptive RI profile supports returning
        // dynamic values of type Result, Bool, and Int.
        return dynamicInt;
    }
}
In this program, we use a qubit measurement to determine the value of Bool, Int, and Double variables. Since both dynamic Bool and Int values are supported by the QIR Adaptive RI profile, the compiler does not produce any errors in the lines of code where the dynamicBool and dynamicInt variables are bound. However, since dynamic Double values are not supported by this same profile, the compiler produces an error like the following in the line of code where the dynamicDouble variable is bound:
This is just one example of how the Q# compiler provides design-time feedback to guide you on what kinds of programs integrated hybrid targets can execute. The accuracy and usefulness of this feedback has improved significantly compared to the previous QDK, in which the compiler could not determine whether a program could execute on a quantum target before its submission. With the latest version of the QDK, programs execute more reliably when submitted to Azure Quantum targets.
Another improvement that we have made is that we heavily optimize classical computations that do not need to be executed during coherence time. For example, in the following code snippet the loop limit calculation is relatively complex. Even though integer computation support makes it possible to perform this calculation while qubits are coherent, the program does not strictly require it. Since computing resources on current quantum computers are limited, the Q# compiler precomputes anything that it can to reduce the number of computations that the quantum computer needs to perform, no matter the data type. In this program, the compiler computes the value of the limit variable, unrolls the loop and computes the value of angle for each iteration.
namespace MyHybridQuantumProgram {
    open Microsoft.Quantum.Convert;
    open Microsoft.Quantum.Math;

    @EntryPoint()
    operation Main() : Result {
        use q = Qubit();
        let seed = 42;
        let limit = ((seed + 10) % 5) * (seed ^ 2);
        for idx in 0 .. limit {
            let angle = IntAsDouble(idx) * PI();
            Rx(angle, q);
        }
        return MResetZ(q);
    }
}
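For reference, this is the arithmetic the compiler folds away; note that Q#'s `^` on integers is exponentiation, so `seed ^ 2` is seed squared (the Python below simply mirrors the computation):

```python
import math

seed = 42
# (42 + 10) % 5 = 2 and 42^2 = 1764, so the loop bound folds to a constant:
limit = ((seed + 10) % 5) * (seed ** 2)            # 3528
# The loop is unrolled and every rotation angle is precomputed:
angles = [i * math.pi for i in range(limit + 1)]
```

None of this arithmetic reaches the quantum computer; only the resulting sequence of Rx rotations does.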
If you want to experiment with the most advanced capabilities quantum devices currently offer, install the Azure Quantum Development Kit VS Code Extension or install the qsharp Python package, and start implementing your own quantum hybrid programs. You can get inspiration to develop your own hybrid quantum algorithms from our samples and experiments. We are excited to see what you can accomplish!
The post Evaluating cat qubits for fault-tolerant quantum computing using Azure Quantum Resource Estimator appeared first on Q# Blog.
This blog post highlights a recent collaboration between Microsoft and Alice & Bob, a French startup whose goal is to build a fault-tolerant quantum computer by leveraging a superconducting qubit called a cat qubit. In this collaboration, Alice & Bob uses the new extensibility mechanisms of Microsoft’s Resource Estimator to obtain resource estimates for their cat qubit architecture.
The Resource Estimator is a tool that can help evaluate the practical benefit of quantum algorithms. It calculates an estimate for the expected runtime and the number of physical qubits needed to run a given program under different settings of the target fault-tolerant quantum computer. The default settings of the Resource Estimator represent generic gate-based and Majorana-based qubits, unbiased planar quantum error correction codes (i.e., 2D layouts for logical qubits assuming the same error rates for bit flip and phase flip errors) that support lattice surgery, and T factories that use multiple rounds of distillation (please refer to this paper for more details on these assumptions). These settings cover many quantum computing architectures, but they do not give quantum architects complete flexibility to model various other important system architectures with different assumptions.
Microsoft is happy to announce that the Resource Estimator, which was made open source in January 2024, now has an extensibility API to model any quantum architecture and to modify all assumptions. To show how this extensibility API works, Microsoft and Alice & Bob demonstrate how it is used to model Alice & Bob’s cat qubit architecture, along with a biased repetition code and Toffoli factories. The open-source example performs the resource estimation for elliptic curve cryptography described in Alice & Bob’s Physical Review Letters paper from July 2023.
Cat qubits have special error correction requirements because they exhibit biased noise: they suffer several orders of magnitude fewer bit flips than phase flips. They use engineered two-photon dissipation to stabilize two coherent states of the same amplitude and opposite phase, which serve as the 0 and 1 of the qubit. The Alice & Bob roadmap takes advantage of this asymmetry to simplify the error correction strategy. To achieve this, however, the usual hierarchy of gates used in quantum computing has to be modified. The first step is to build a gate set that preserves this noise-biasing property. From this set, a universal set of fault-tolerant operations is then constructed (note that the bias-preserving gate set is typically not universal, but it is sufficient to implement a universal gate set at the logical level). This work is carried out in the article Repetition Cat Qubits for Fault-Tolerant Quantum Computation and summarized in the figure below.
Alice & Bob’s architecture highlights the importance of extensibility in the Resource Estimator and the ability to override its predefined settings. The typical error correction code used by the Resource Estimator is the surface code, but cat qubits require a repetition code. The Resource Estimator assumes a “Clifford+T” universal gate set, while the gate set presented above for cat qubits is “Clifford+Toffoli.”
The Resource Estimator, which is written in Rust, can be extended using a Rust API. Its main function calculates the physical resource estimates for a logical overhead with respect to an error correction protocol, a physical qubit, and a factory builder. The interaction of these components is illustrated in the architecture diagram above. Each of these components is an interface that can be implemented, which allows full flexibility. For instance, the Resource Estimator doesn’t have to know about the input program, or even the layout method; it only needs the logical overhead, which gives the number of logical qubits, the logical depth, and the number of needed magic states. Likewise, the implementations of the other interfaces provide the information needed for resource estimation. We explain some aspects of the implementation in the remainder of this section, but please refer to the example source code on GitHub for more details.
The error correction protocol in the Resource Estimator defines both the physical qubit and the code parameter that it uses. For most codes, the code parameter is the code distance, and finding a value for the code distance that ensures a desired logical error rate given a physical qubit is one of the main goals of the error correction protocol. The Alice & Bob architecture uses a repetition code with two parameters: distance and the average number of photons. The distance deals with the phase flip error and the number of photons must be high enough to avoid bit flip errors, so that the repetition code can focus on correcting only the phase flip errors.
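As a deliberately simplified illustration of that two-parameter search (not Alice & Bob's actual error model), one can suppress bit flips exponentially in the average photon number and phase flips with the repetition-code distance, picking the smallest values that meet a target logical error rate; every constant below is illustrative:

```python
import math

def pick_code_parameters(p_phase, p_th, gamma, target):
    # Toy model: residual bit-flip rate ~ exp(-gamma * nbar); logical
    # phase-flip rate of a distance-d repetition code ~ (p/p_th)^((d+1)/2).
    nbar = next(n for n in range(1, 100) if math.exp(-gamma * n) < target)
    d = next(d for d in range(3, 99, 2)
             if (p_phase / p_th) ** ((d + 1) / 2) < target)
    return d, nbar

# Example: 0.1% phase-flip rate, 10% threshold, target 2e-10 logical error.
d, nbar = pick_code_parameters(p_phase=1e-3, p_th=1e-1, gamma=2.0, target=2e-10)
```

The real protocol also accounts for the phase-flip rate growing with the photon number, which this sketch omits; it only shows why the two parameters are chosen jointly against a single target.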
A factory builder’s job is to build magic state factories that produce magic states with at most a given output error probability. The factories can be either precomputed or calculated on demand when a new request is made. They can also use the error correction protocol and select their own code parameters. For Alice & Bob’s architecture, the magic state produced is the CCX state, and a precomputed list of Toffoli factories is available (see also Table 3 in the paper).
We make two main assumptions about the input program: it uses mostly CX (CNOT) and CCX (Toffoli) gates, and these gates are not run in parallel but each have their own cycle time (i.e., number of needed error correction syndrome extraction cycles). With these assumptions, and the number of logical algorithm qubits before layout, we can easily calculate the layout overhead as a function of the number of logical qubits and the number of CX and CCX gates. The paper from Alice & Bob gives formulas for these three metrics for the elliptic curve cryptography algorithm, so the layout overhead can be generated as a function of the key size and some implementation details (such as the window size for windowed arithmetic). Alternatively, we can use the Azure Quantum Development Kit (QDK) to compute a logical overhead by evaluating a Q# program.
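Under these assumptions the cost model reduces to a few lines. The cycle counts, the qubit overhead per logical qubit, and the cycle time below are placeholder values for illustration, not the constants from Alice & Bob's paper:

```python
def physical_estimates(logical_qubits, n_cx, n_ccx, distance,
                       cycles_per_cx=1, cycles_per_ccx=3,
                       cycle_time_us=1.0, factory_qubits=0):
    # Gates run sequentially, each taking its own number of syndrome
    # extraction cycles; a distance-d repetition code is modeled here as
    # d data cat qubits plus d - 1 ancillas for the parity checks.
    cycles = distance * (n_cx * cycles_per_cx + n_ccx * cycles_per_ccx)
    runtime_us = cycles * cycle_time_us
    qubits = logical_qubits * (2 * distance - 1) + factory_qubits
    return qubits, runtime_us

qubits, runtime_us = physical_estimates(
    logical_qubits=100, n_cx=10_000, n_ccx=2_000, distance=11)
```

Because the model depends only on the CX/CCX counts and the logical qubit count, logical gate counts taken from the literature can be plugged straight in, as the conclusion below notes.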
The graph above compares the space-time tradeoff of resource estimates from the Resource Estimator with the estimates from the paper. The paper reported a quicker solution that needed more qubits, while the Resource Estimator produced estimates with fewer qubits and a longer, but feasible, runtime. Note that the Resource Estimator does not automatically explore application-specific parameters (such as window sizes for windowed arithmetic).
You can try out and execute the Alice & Bob resource estimation example that uses Microsoft’s Resource Estimator. As it is open source, you can easily change the application input. The cost model that relies on CX and CCX gates is compatible with many logical resource estimation research papers in the literature, and therefore results from those papers can be quickly converted into physical resource estimates. Further, you can examine various Q# programs that are available in the Q# GitHub repository. We hope that the resource estimator gives you useful insights and helps your research; and we would welcome your feedback.
The post Circuit Diagrams with Q# appeared first on Q# Blog.
I’m a software engineer on the Azure Quantum Development Kit team, and I’m very excited to share a new feature I’ve been working on: circuit visualization in Q#.
One of the neat things about Q# is that it gives you the ability to express quantum algorithms in a procedural language that’s reminiscent of classical programming languages such as C and Python. If you’re already a programmer, this way of thinking will be very intuitive to you, and you can get started with quantum computing right away (if you haven’t done so yet, check out our quantum katas).
However, this isn’t how many people learn about quantum computing today. If you flip through any quantum computing textbook, you’ll see that it’s conventional to think in terms of quantum circuits.
We wanted to bridge the gap between these two different modes of thinking.
If you open any Q# program in VS Code, you’ll notice a little “Circuit” CodeLens above the entry point declaration. When you click on that, your Q# program will be represented as a quantum circuit diagram.
Being able to go from Q# code to circuit diagrams means that you can use familiar constructs such as for loops and if statements in your program to manipulate the quantum state, while being able to view the logical circuit at any time to get a high-level view of your algorithm.
How does this work? The quantum circuit for a Q# program is generated by executing all the classical parts of the program while keeping track of when qubits are allocated and which quantum gates are applied. This data is then displayed as a quantum circuit.
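The tracing idea can be mimicked in a few lines of Python: execute the classical control flow, record each gate application, and lay the recorded gates out as text columns. This toy only illustrates the mechanism and is not the QDK's actual implementation:

```python
class CircuitTracer:
    def __init__(self, n_qubits):
        self.n = n_qubits
        self.columns = []          # one entry per recorded gate application

    def gate(self, name, *qubits):
        self.columns.append((name, qubits))

    def render(self):
        # One text column per gate; wires the gate does not touch get a
        # dash run of the same width so the columns stay aligned.
        lines = []
        for q in range(self.n):
            cells = [f"[{name}]" if q in qubits else "-" * (len(name) + 2)
                     for name, qubits in self.columns]
            lines.append(f"q{q}: " + "-".join(cells))
        return "\n".join(lines)

tracer = CircuitTracer(2)
for q in range(2):                 # ordinary classical control flow
    tracer.gate("H", q)
tracer.gate("CX", 0, 1)            # only gate calls end up in the diagram
diagram = tracer.render()
```

Loops and conditionals simply execute; the diagram captures only the gates they emitted, which is why deterministic programs yield a single, stable circuit.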
Not all quantum programs can be represented as straightforward quantum circuits. What if we have a dynamic (commonly known as “adaptive”) circuit? Say we have a while loop in our program that compares measurement results and takes an action that depends on the result. The exact set of gates in the program will not be deterministic anymore.
That’s when we need to run the program through the quantum simulator. This is called “trace” mode since we’re tracing the quantum operations as they are actually performed in the simulator. When the circuit visualizer detects that the program contains measurement comparisons, this mode is activated.
Depending on your luck, you may end up with two gates, or you may end up with many more!
Each time you generate the circuit, you may see a different outcome in the circuit diagram.
It would certainly be nice to visualize all the outcomes at once, and we’re working through some ideas on how to do that. Simple conditionals can be represented as gates controlled by classical wires. But given a language as expressive as Q#, you can write complex conditionals that are difficult to visualize on a single 2D circuit diagram. How would you represent an adaptive circuit such as the one above? We’d love to hear your ideas. You can leave a comment here or on this GitHub issue.
Working on this feature sparked a lot of lively debate within the team, especially during the design stage. We’re a team with diverse technical backgrounds. Some of us found it very intuitive to think in terms of circuits. Others preferred reading code and thought circuit diagrams were very limiting. Did we even need the feature at all?
I now realize it’s not eitheror: it’s very powerful to be able to do both. Even if you prefer one paradigm over the other, being able to inspect your code through different lenses really deepens your understanding of the problem you’re working on. You can run simulations and look at a histogram of the results. You can step through the code using the Q# debugger. And now you can view it as a circuit diagram. Each different view into the problem offers a different insight.
This is also why testing this feature was so fun for me. I’m far from an expert in quantum computing; some of our Q# samples are admittedly still confusing to me. As I ran the circuit visualizer on each sample, giving it a final look-over, I found the process unexpectedly satisfying. I felt like I was finally starting to understand what these algorithms were doing. I’m happy for this new addition to my learning toolkit!
If you’d like to try out Q# circuit diagrams for yourself, head over to The Azure Quantum Playground and give it a try now – no installation necessary. When you’re ready to work on your own Q# projects, install the Azure Quantum Development Kit VS Code Extension. If you prefer working in Python, head over to the documentation for instructions on how to get started in Jupyter Notebooks. Let us know what you think!
The post Exploring space-time tradeoffs with Azure Quantum Resource Estimator appeared first on Q# Blog.
We are delighted to present a new experience for exploring space-time tradeoffs, recently added to the Azure Quantum Resource Estimator. Available both in the Azure Quantum Development Kit (VS Code extension) and as a Python package, it adds a new dimension to estimates.
Resource estimation doesn’t just yield a single group of numbers (one per objective), but rather multiple points representing tradeoffs between objectives, such as qubit count and runtime. Our recent update of the Azure Quantum Resource Estimator adds methods for finding such tradeoffs for a given quantum algorithm and a given quantum computing stack. We also provide a visual experience for navigating the alternatives, with an interactive chart and supplementary reports and diagrams:
This chart illustrates tradeoffs between the qubit numbers and runtimes required for running the same algorithm across multiple projected quantum computers. See estimation-frontier-widgets.ipynb to learn how to generate this diagram.
More specifically, we have considered simulating the dynamics of a quantum magnet, the so-called Ising model, on a 10×10 square lattice. This is the simplest model of ferromagnetism in a quantum system, and the algorithm simulates its evolution over time. At this system size the problem cannot be simulated on classical computers in reasonable time, and solutions on quantum computers would be highly desirable.
The diagram above and the accompanying table show that this algorithm requires 230 logical qubits with low error rates. Such logical qubits don’t exist yet, and each will require hundreds of noisy physical qubits. In total, the number of physical qubits required for the simulation ranges from 33,000 to 261,340.
You can also notice on the chart that increasing the number of utilized physical qubits by 10–35 times reduces the runtime by 120–250 times. A thoughtful analysis of tradeoffs, for entire algorithms and for subroutines, can save a lot of runtime if extra qubit resources are available.
Tradeoffs between the number of physical qubits and the runtime in quantum computations are analogous to those between space and time utilization in classical computing. As we did above, for a given algorithm one can start by computing the minimal number of physical qubits required for its execution on a given quantum stack, and then deduce the corresponding runtime. If more physical qubits are available, one can reduce the runtime by parallelizing execution of the algorithm or its subroutines.
One can build multiple estimates by allowing more and more physical qubits and improving the runtime. We can restrict attention to efficient estimates: in each pair of such estimates, one is better with respect to runtime and the other with respect to the number of physical qubits. The set of such estimates forms the so-called Pareto frontier, which appears as a monotonically decreasing curve on the space-time diagram.
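Extracting such a frontier from a set of (physical qubits, runtime) estimates is straightforward; the sample points below are invented for illustration:

```python
def pareto_frontier(points):
    # Keep the estimates not dominated in both qubit count and runtime;
    # sorted by qubits, the surviving runtimes decrease monotonically.
    frontier = []
    for qubits, runtime in sorted(points):
        if not frontier or runtime < frontier[-1][1]:
            frontier.append((qubits, runtime))
    return frontier

estimates = [(33_000, 50.0), (40_000, 20.0), (40_500, 30.0),
             (90_000, 5.0), (261_340, 0.4), (200_000, 6.0)]
frontier = pareto_frontier(estimates)
```

Points such as (40_500, 30.0) are dominated, needing both more qubits and more time than a neighbor, and so never appear on the space-time diagram.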
Just as in classical programs, there are many opportunities for space-time tradeoffs in the choice of quantum algorithms and their implementation. Here we want to discuss another, quantum-specific opportunity. Rotation gates that rotate logical qubits by arbitrary angles require so-called magic states, which are generated in a process known as magic state distillation that takes place in sets of qubits called magic state factories. Many quantum stacks use the T gate as the only magic gate; the corresponding states and factories are then T states and T factories, and we use those names in the Resource Estimator.
T state generation subroutines are executed in parallel with the main algorithm. Let us start with a single T factory. For some algorithms, it can produce enough T states to keep up with the algorithm’s consumption. For other algorithms, requiring more T states, execution will be slowed down while waiting for the next T state to be produced. Note that idling is not free in the quantum world, because errors accumulate in quantum states while waiting. Longer runtimes might thus require a higher error correction code distance, and with it more physical qubits and longer runtimes than might naively be estimated.
If an algorithm waits for new T states and more qubits are available, we can add T factories to produce more T states. This saves runtime at the cost of more physical qubits. With enough physical qubits available, we can increase the number of T factories until they produce enough T states for the algorithm to consume without idling, giving the shortest runtime of the algorithm. For example, the algorithm considered above could efficiently use up to 172–251 T factories, depending on the computing stack, spending from 92.29% to 98.40% of its resources on T state distillation.
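The arithmetic behind that tradeoff can be sketched in a few lines; every number below is illustrative, and the sketch deliberately ignores the idling penalty described above:

```python
def runtime_with_factories(n_t_states, t_factory_time, algo_time, n_factories):
    # The algorithm stalls whenever no T state is ready: total runtime is
    # the larger of the algorithm's own time and the distillation time
    # shared across the factories.
    return max(algo_time, n_t_states * t_factory_time / n_factories)

def qubits_with_factories(algo_qubits, factory_qubits, n_factories):
    return algo_qubits + factory_qubits * n_factories

# One factory is T-state-bound; with 500 factories the algorithm itself
# becomes the bottleneck and adding more factories no longer helps.
r_one = runtime_with_factories(1_000_000, 0.5, 2_000.0, 1)     # T-state-bound
r_many = runtime_with_factories(1_000_000, 0.5, 2_000.0, 500)  # algorithm-bound
```

Sweeping `n_factories` between those two extremes traces out exactly the kind of space-time frontier the estimator explores.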
As shown in estimation-frontier-widgets.ipynb, to estimate the resources required to run a Q# program, one runs
result = qsharp.estimate(entry_expression, params)
where entry_expression refers to the entry point of the program, and params can cover multiple quantum stack configurations as well as estimation parameters.
When "estimateType": "frontier" is set, the estimator searches for the whole frontier of estimates; otherwise, it looks for the shortest-runtime solution only.
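For example, a params dictionary enabling the frontier search might look like this (a sketch; the qubit and QEC scheme names shown are among the estimator's predefined options, and estimateType is the switch described above):

```python
# Target parameters requesting a full frontier of estimates rather
# than a single shortest-runtime solution.
params = {
    "qubitParams": {"name": "qubit_gate_ns_e3"},  # predefined qubit model
    "qecScheme": {"name": "surface_code"},        # error correction scheme
    "estimateType": "frontier",                   # search the whole frontier
}
```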
Executing the
EstimatesOverview(result)
command visualizes all the estimates found in result (both frontier and individual) with the space-time diagram and the summary table.
Selecting a row in the summary table or a point on the space-time diagram generates the space diagram and the detailed report:
"EstimatesOverview" supports optional parameters for custom color schemes on the space-time diagram and custom series names for the summary table.
More tips and tricks for "EstimatesOverview" and for supplementary visualization elements are available in estimation-frontier-widgets.ipynb.
Estimating resources for quantum algorithm executions goes beyond providing a single pair of numbers (the runtime and the number of physical qubits). It requires constructing and analyzing the entire frontier of trade-offs between those objectives. The Azure Quantum Resource Estimator allows you to build and explore those trade-off frontiers and more accurately evaluate your requirements. With this new data, you can determine whether you need to improve your algorithm, develop new error correction codes, or explore alternate qubit technologies.
The Azure Quantum team is committed to continuous improvements in the Resource Estimator. This tool supports both our internal teams and external researchers in the pursuit of designing quantum computers.
Our primary focus is on enhancing the precision of estimates and offering expanded estimation capabilities.
We eagerly welcome your feedback on the specific custom options you require for estimating your quantum computer resources. Your insights will play a vital role in refining our tool, making it even more effective for the entire quantum community.
The post Exploring space-time trade-offs with Azure Quantum Resource Estimator appeared first on Q# Blog.
The post Design Fault Tolerant Quantum Computing applications with the open-source Resource Estimator appeared first on Q# Blog.
Quantum computing has the potential for widespread societal and scientific impact, and many applications have been proposed for quantum computers. The quantum community has reached a consensus that NISQ machines do not offer practical quantum advantage and that it is time to graduate to the next of the three implementation levels.
Unlike computing with transistors, basic operations with qubits are much more complicated and an order of magnitude slower. We now understand that practical quantum advantage will be achieved for small-data problems that offer superpolynomial quantum speedup (see T. Hoefler et al., CACM 66, 82–87). This includes, specifically, the simulation of quantum systems in quantum physics, chemistry, and materials science.
But at a basic level, there are still many remaining open questions: What are the most promising and useful quantum algorithms on which to build useful quantum applications? Which quantum computing architectures and qubit technologies can reach the necessary scale to run such quantum-accelerated applications? Which qubit technologies are well suited to practical quantum supercomputers? Which quantum computing technologies are unlikely to achieve the necessary scale?
That’s why we need the Resource Estimator to help us answer these questions and guide today’s research and development toward logical qubit applications.
Achieving practical quantum advantage will require improvements and domain expertise at every level of the quantum computing stack. A unified opensource tool to benchmark solutions and collaborate across disciplines will speed up our path toward a quantum supercomputer: this is the premise of Azure Quantum Resource Estimator.
Whether you are developing applications, researching algorithms, designing language compilers and optimizers, creating new error correction codes, or working on R&D for faster, smaller and more reliable qubits, the Resource Estimator helps you assess how your theoretical or empirical enhancements can improve the whole stack.
As an individual researcher, you can leverage prebuilt options to focus on your area. If you are part of a team, you can work collectively at every level of the stack and see the results of your combined efforts.
The Resource Estimator is an estimation platform that lets you start with minimal inputs, abstracting the many specificities of quantum systems. If you require more control, you can adjust and explore a vast number of system characteristics.
The Resource Estimator can quickly explore thousands of possible solutions. This accelerates the development lifecycle and lets you easily review tradeoffs between computation time and number of physical qubits.
The table below summarizes some of the ways you can adapt the Resource Estimator to your needs, allowing you to specify both the description of the quantum system and to control the exploration of estimates. Explore all available parameters.
Describe your system  Explore and control estimates 
*Currently requires an Azure Subscription
If you are ready to get started, you can choose from:
Read more from the documentation.
To join the discussion or contribute to the development of the Resource Estimator, visit https://aka.ms/AQ/RE/OpenSource.
2024-01-29 update: This feature is now available. Learn more from the Pareto frontier documentation.
Understanding the trade-off between runtime and system scale is one of the more important aspects of resource estimation. To help you better understand and visualize the trade-offs, the Resource Estimator will soon provide fully automated exploration and graphics, such as the one below:
Make sure to subscribe to the Q# blog to be notified of this feature’s availability.
The post Announcing v1.0 of the Azure Quantum Development Kit appeared first on Q# Blog.
As outlined in an earlier blog post, this is a significant rewrite of the prior QDK with an emphasis on speed, simplicity, and a delightful experience. Review that post for the technical details on how we rebuilt it, but at a product level the rewrite has enabled us to make some incredible improvements that exceeded the expectations we set out with. Some highlights:
And much more! This post will include lots of video clips to try and highlight some of these experiences (all videos were recorded in real time).
For the fastest getting-started experience, just go to https://vscode.dev/quantum/playground/. The QDK extension for VS Code works fully in VS Code for the Web, and this URL loads an instance of VS Code in the browser with the QDK extension preinstalled, along with a virtual file system preloaded with some common quantum algorithms. You can experiment here, then simply close the browser tab when done, without installing anything or accessing any files on your local machine.
If using VS Code on your local machine (or using https://vscode.dev directly), then installing the extension is a snap. Simply go to the VS Code Extension Marketplace, search for “QDK”, and install the “Azure Quantum Development Kit” extension published by “Microsoft DevLabs” (direct link). The extension is lightweight with no dependencies and will install in seconds, as shown below.
Once the extension is running, you can open a Q# file (with a .qs extension) and start coding. The below clip demonstrates how to create a new Q# file, use one of the sample 'snippets' to quickly insert a well-known algorithm, and then use the built-in simulator to run the code and see the output (including quantum state dumps and debug messages).
(Note: If you are unfamiliar with Q#, or quantum development in general, the Quantum Katas are a great way to learn in an interactive, AI-assisted experience.)
We believe the true power of quantum computing will be realized once we reach "scalable quantum computing", and the Q# language was designed for this. It includes higher-level abstractions to more naturally express quantum operations, and it is a typed language to help develop, refactor, and collaborate on more complex programs. (See the "Why do we need Q#" blog post for more background.)
For this release we've invested heavily in the editor features developers expect from a modern and productive language. This includes:
The Q# editor provides completion lists, auto-open of namespaces, signature help, hover information, go-to definition, rename identifier, syntax and type-checking errors, and more! All behave as developers familiar with other strongly typed languages, such as Rust, C#, and TypeScript, have come to expect.
We’ve designed the experience to be as smooth as possible and to work as fast as you can type. Many of these features are available not only while editing Q# files directly, but also when writing Q# code in Jupyter Notebook cells, as shown in the clip below.
A quantum simulator is critical when developing quantum programs, and the QDK includes a sparse simulator that enables the output of diagnostic messages and quantum state as it runs in both the VS Code extension and the Python package.
The VS Code integration takes this up a notch, and the QDK brings a powerful debugging experience to Q# development. You can set breakpoints, step in and out of operations, and view both the quantum and classical state as you step through the code. It also includes some quantum-specific goodness, such as stepping backwards through loops and operations when running the generated adjoint of an operation. We're very excited about the productivity this can unlock, and about some of the ideas for where we could take it even further in future releases.
Today's quantum hardware is still quite limited in terms of practical application, and we are still in what is termed the "Noisy Intermediate-Scale Quantum" era. We consider this Level 1 in a roadmap to a quantum supercomputer. The industry is currently making great strides towards Level 2, when it will become possible to start using "logical qubits" on real hardware. Achieving practical quantum advantage for useful problems will require logical qubits.
As with early classical computers, there will be considerable resource constraints for a number of years. (My first computer had 16KB of RAM and a cassette tape for storage!) Developing code that can squeeze the most out of the hardware will be critical to building useful applications and to advancing the field generally. There are numerous factors, such as qubit types, error correction schemes, layout and connectivity, etc., that determine how a program using logical qubits maps to physical resource requirements.
Over the past year we've built numerous capabilities into our Azure Quantum service to assist with Resource Estimation (see the docs for details). With this release of the QDK, we're bringing many of those capabilities directly into the client, enabling a rapid getting-started experience and a very fast inner loop, so that quantum developers can experiment and view resource requirements for their code as quickly as possible. This is an area we will continue to invest in, adding capabilities for developers and researchers throughout the quantum stack to make rapid progress and develop new insights.
In the below clip showing VS Code in the browser, the “Calculate Resource Estimates” command is run to view the estimates for various qubit types and other parameters. Once complete, this brings up a comparison table, and as rows are selected a visualization chart and detailed table of results is shown for the selected hardware configuration.
If you’d like to try this exact code in the Resource Estimator, you can visit the code sharing link used in the video. (Note this code is designed for resource estimation and is unlikely to finish if you try to actually run it in the simulator).
The QDK extension in VS Code enables you to connect to a Quantum Workspace in your Azure Subscription. You can then directly submit your Q# program from the editor to one of our hardware partners. You can see the status of the jobs and download the results when completed. This provides for a simple and streamlined experience, reducing the need to switch to CLI tools or Python code to work with the service. (Though using the service via those methods is still fully supported).
Current quantum hardware is limited compared to simulator capabilities, and thus the compiler must be set to the ‘base’ profile in the QDK for programs to be runnable on a real quantum machine. If the compiler is set to ‘base’ profile and a program tries to use unavailable capabilities, then the editor will immediately show an error, avoiding the need to submit potentially invalid code and then wait to see if an error occurs from the service.
Note: VS Code had already signed in and authenticated with the subscription account in this recording. On first run you may need to authenticate with the Microsoft account for the subscription and consent to access.
There are more editor features than can be covered here, including built-in histograms, project support, viewing the QIR for a compiled program, etc. See the documentation for more details.
Much work in the quantum space happens via Python in Jupyter Notebooks. Beyond the rich tooling for working with Q# directly, we’ve also revamped and refined our Python packages and Jupyter Notebooks support.
For general Q# simulation and compilation, all you need is "pip install qsharp". This package is only a couple of MB with no dependencies, and is compiled to binary wheels for Windows, Mac, and Linux on x64 and ARM64; installation should be pain-free and near instant in most environments. If you will be using Jupyter Notebooks, then you may also want to install the "qsharp-widgets" package for some nice visualizations for resource estimation and histograms.
If you will be using JupyterLab in the browser, then install the 'qsharp-jupyterlab' package to get Q# cell syntax highlighting. However, we recommend using the VS Code support for Jupyter Notebooks, as this provides some of the rich language service features outlined above when working with Q#.
You can use the VS Code command “Create an Azure Quantum Notebook” to generate a sample Jupyter Notebook. If you have connected to an Azure Quantum Workspace already as outlined above, then this Notebook will be prepopulated with the correct Azure Quantum Workspace connection settings.
If you were using the prior QDK, which we now refer to as the 'Classic QDK' (with this release being the 'Modern QDK'), this will be a substantial change. While we have endeavored to keep the Q# code compatible where we could, the new architecture removes a lot of the prior project infrastructure, such as .csproj-based projects, NuGet package distribution, C# integration, etc. Existing projects and samples will need to be ported from the 'Classic QDK' to the 'Modern QDK'. The 'Classic QDK' will still be available to run existing code, but the 'Modern QDK' is the basis for future releases, and we recommend moving to it when you can.
While it really has been fun and rewarding getting to 1.0, it is the beginning of a journey. We have many new features and improvements we are keen to start tackling, including improvements to the Q# language, more powerful resource estimation capabilities, package management for code sharing, advanced compiler capabilities (such as better hardware targeting), richer visualizations, better documentation & samples, and much more.
We’d love to have your input in these decisions, as you are who we are building these tools for, so please do get involved and give us your feature requests and feedback on our issue tracker at https://github.com/microsoft/qsharp/issues . (And if you do encounter any bugs with the QDK, this is the place to log those too!).
The team is very excited to reach this milestone, and hope you have as much fun using it as we did building it. Please do give it a try, give us your feedback, and tell us what you’d like to see next!
The post Interning at Microsoft Quantum – 2024 appeared first on Q# Blog.
We are excited to announce that applications for Microsoft Quantum's research internships for 2024 are open!
Apply for the Microsoft Quantum research internship
We encourage early applications!
Research internships target graduate students currently enrolled in a Master's or PhD program (note that you have to be enrolled as a student both at the time of application and at the time of the actual internship). These internships focus on the exploration of new research directions under the guidance of full-time researchers on our team. We are seeking candidates specializing in areas such as quantum algorithms, quantum chemistry, quantum error correction, quantum benchmarking, physics device modeling and characterization, and machine learning.
Here are several highlights from this year’s research internship projects:
You can find additional examples of research internship projects from earlier years and the papers written about them in the 2022 internships announcement.
Internships will be hosted at our offices in Redmond, WA, USA. International students are welcome to apply! (All interns must be able to obtain US work authorization.)
Our internships are a great opportunity to get familiar with the research done in the quantum industry and contribute to the work done by the Microsoft Quantum team. They also offer a lot of fun experiences as part of the greater Microsoft Internship program, from yearly puzzle events such as the Microsoft Puzzleday and the Microsoft Intern Game to the social events where you can meet your fellow interns and researchers from all over the company and learn about the variety of career paths available in different disciplines!
The post Defining logical qubits: Criteria for Resilient Quantum Computation appeared first on Q# Blog.
The next step toward practical quantum advantage, and Level 3 Scale, is to demonstrate resilient quantum computation on a logical qubit. Resilience in this context means the ability to show that quantum error correction helps, rather than hinders, nontrivial quantum computation. However, an important element of this nontriviality is the interaction between logical qubits and the entanglement it generates, which means resilience of just one logical qubit will not be enough. Therefore, demonstrating two logical qubits performing an error-corrected computation that outperforms the same computation on physical qubits will mark the first demonstration of a resilient quantum computation in our field's history.
Before our industry can declare victory on reaching Level 2 Resilient Quantum Computing, by performing such a demonstration on a given quantum computing hardware, it’s important to agree on what this entails, and the path from there to Level 3 Scale.
The most meaningful definition of a logical qubit hinges on what one can do with that qubit – demonstrating a qubit that can only remain idle, that is, be preserved in memory, is not as meaningful as demonstrating a nontrivial operation. Therefore, we define a logical qubit such that it initially allows some nontrivial, encoded computation to be performed on it.
A significant challenge in formally defining a logical qubit is accounting for distinct hardware; for example, the definition should not favor one hardware over another. To address this, we propose a set of criteria that marks the entrance into the resilient level of quantum computation. In other words, these are the criteria for calling something a “logical qubit”.
Entrance criteria to Level 2
Graduating to Level 2 resilient quantum computing is achieved when fewer errors are observed at the output of a logical, error-corrected quantum circuit than on the analogous physical circuit without error correction.[1] We also require that a resilient-level demonstration include some uniquely "quantum" feature. Otherwise, the demonstration reduces to simply a novel demonstration of probabilistic bits.
Arguably the most natural “quantum” feature to demonstrate in this regard is entanglement. A demonstration of the resilient level of quantum computation should then satisfy the following criteria:
Upon satisfaction of these criteria, the term “logical qubit” can then be used to refer to the encoded qubits involved.
The distinction between the Resilient and Scale levels is worth emphasizing: a proof-of-principle demonstration of resiliency must be convincing, but it does not require a fully scaled machine. For this reason, a resilient-level demonstration may use certain forms of post-selection. Post-selection here means the ability to accept only those runs that satisfy specific criteria. Importantly, the chosen post-selection method must not replace error correction altogether, as error correction is central to the type of resiliency that Level 2 aims to demonstrate.
Measuring progress across Level 2
Once entrance to the Resilient Level is achieved, as an industry we need to be able to measure continued progress toward Level 3. Not every type of quantum computing hardware will achieve Level 3 Scale; the requirements to reach practical quantum advantage at Level 3 include achieving upwards of 1,000 logical qubits operating at a mega-rQOPS with logical error rates better than 10⁻¹². And so it is critical to be able to understand advancements within Level 2 toward these requirements.
Inspired in part by DiVincenzo’s criteria, we propose to measure progress along four axes: universality, scalability, fidelity, composability. For each axis we offer the following ideas on how to measure it, with hopes the community will build on them:
Criteria to advance from Level 2 to Level 3 Scale
The exit of the resilient level of logical computation will be marked by large-depth, high-fidelity computations involving upwards of hundreds of logical qubits. For example, a logical, fault-tolerant computation on ~100 logical qubits or more with a universal set of composable logical operations and an error rate of ~10⁻⁸ or better will be necessary. At Level 3, performance of a quantum supercomputer can then be measured by reliable quantum operations per second (rQOPS). Ultimately, a quantum supercomputer will be achieved once the machine is able to demonstrate 1,000 logical qubits operating at a mega-rQOPS with a logical error rate of 10⁻¹² or better.
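For concreteness, the rQOPS figure of merit is computed as the number of logical qubits times the logical clock speed, so the targets above reduce to simple arithmetic:

```python
def rqops(logical_qubits, logical_clock_hz):
    # Reliable quantum operations per second:
    # logical qubit count x logical clock frequency.
    return logical_qubits * logical_clock_hz

# A quantum supercomputer target: 1,000 logical qubits at a 1 kHz
# logical clock gives one mega-rQOPS.
print(rqops(1_000, 1_000))  # 1000000
```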
It’s no doubt an exciting time to be in quantum computing. Our industry is at the brink of reaching the next implementation level, Level 2, which puts our industry on path to ultimately achieving practical quantum advantage. Together as a community we have an opportunity to help measure progress across Level 2, and to introduce benchmarks for the industry. If you have ideas or feedback on criteria to enter Level 2, or how to measure progress, we’d love to hear from you.
[1] Our criteria build on and complement the criteria of both DiVincenzo (DiVincenzo, David P. (2000-04-13). "The Physical Implementation of Quantum Computation". Fortschritte der Physik. 48 (9–11): 771–783) and Gottesman (Gottesman, Daniel. (2016-10). "Quantum fault tolerance in small experiments". https://arxiv.org/abs/1610.03507), who have previously outlined important criteria for achieving quantum computing and its fault tolerance.
The post Defining logical qubits: Criteria for Resilient Quantum Computation appeared first on Q# Blog.
The post Calculating resource estimates for cryptanalysis appeared first on Q# Blog.
This blog offers an inside look into the computation of these estimates. Our resource estimator supports various input formats for quantum programs, including Q# and Qiskit, which are then translated into QIR, the Quantum Intermediate Representation. In addition to customizable qubit parameters, we also utilize predefined models in our experience. To perform physical resource estimation from logical resource counts extracted from papers (counts which do not take the overhead of quantum error correction into account), we utilize a specialized resource estimation operation in Q#. Furthermore, we have developed an algorithm in Rust and translated it into QIR by leveraging the LLVM framework, which also powers QIR. The following three sections delve into the specific details for each encryption algorithm addressed in our interactive experience.
In the experience, we compare the following three cryptographic algorithms at different key strengths (for elliptic curve cryptography, these correspond to concrete prime-field Weierstrass curves, which you can look up via the link):
| Algorithm      | Standard | Enhanced | Highest |
|----------------|----------|----------|---------|
| Elliptic curve | P-256    | P-384    | P-521   |
| RSA            | 2048     | 3072     | 4096    |
| AES            | 128      | 192      | 256     |
In the estimation, we assume that we lower the quantum algorithm to a sequence of physical quantum gates. For these, we assume the following two choices of qubits and error rates. The values are based on some predefined qubit parameters available in the resource estimator. The Majorana and gate-based predefined parameters in the resource estimator correspond to the topological and superconducting qubit types in the experience, respectively.
| Qubit type and error rate | Majorana (reasonable) | Majorana (optimistic) | Gate-based (reasonable) | Gate-based (optimistic) |
|---------------------------|-----------------------|-----------------------|-------------------------|-------------------------|
| Measurement time          | 100 ns                | 100 ns                | 100 ns                  | 100 ns                  |
| Gate time                 | 100 ns                | 100 ns                | 50 ns                   | 50 ns                   |
| Measurement error rate    | 0.0001                | 0.000001              | 0.001                   | 0.0001                  |
| Gate error rate           | 0.05                  | 0.01                  | 0.001                   | 0.0001                  |
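These four columns correspond to predefined qubit parameter names in the resource estimator, the same ones referenced by the Python code later in this post; the mapping below simply restates that correspondence:

```python
# Experience label -> predefined qubit parameter name in the
# Azure Quantum Resource Estimator (as used in the code below).
QUBIT_PARAMS = {
    "Majorana (reasonable)": "qubit_maj_ns_e4",
    "Majorana (optimistic)": "qubit_maj_ns_e6",
    "Gate-based (reasonable)": "qubit_gate_ns_e3",
    "Gate-based (optimistic)": "qubit_gate_ns_e4",
}
```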
Elliptic curve cryptography (ECC) is a public-key cryptography approach based on the algebraic structure of elliptic curves. The approach requires smaller key sizes compared to approaches such as RSA, while providing equal security against classical cryptanalysis methods. The paper Improved quantum circuits for elliptic curve discrete logarithms (arXiv:2001.09580) describes a quantum algorithm to solve the elliptic curve discrete logarithm problem (ECDLP) based on Shor's algorithm. We make use of the Q# operation AccountForEstimates
(also find details on how to use the operation), which allows us to derive physical resource estimates from previously computed logical ones. This operation is very helpful when logical estimates have already been computed, as, for example, in this paper, where they are listed as part of Table 1.
From that table we extract the relevant metrics: the number of T gates, the number of measurement operations, and the number of qubits. The other metrics are not relevant for the computation, since the physical resource estimation relies on Parallel Synthesis Sequential Pauli Computation (PSSPC, Appendix D in arXiv:2211.07629), which commutes all Clifford operations and replaces them with multi-qubit Pauli measurements. The paper discusses various optimization flags in the implementation to minimize the logical qubit count, T count, or logical depth. We found that the physical resource estimates are best, both for physical qubits and runtime, when using the option to minimize qubit count. The following Q# program includes the estimates for the considered key sizes 256, 384, and 521.
open Microsoft.Quantum.ResourceEstimation;

operation ECCEstimates(keysize : Int) : Unit {
    if keysize == 256 {
        use qubits = Qubit[2124];
        AccountForEstimates([
            TCount(7387343750),          // 1.72 * 2.0^32
            MeasurementCount(118111601)  // 1.76 * 2.0^26
        ], PSSPCLayout(), qubits);
    } else if keysize == 384 {
        use qubits = Qubit[3151];
        AccountForEstimates([
            TCount(25941602468),         // 1.51 * 2.0^34
            MeasurementCount(660351222)  // 1.23 * 2.0^29
        ], PSSPCLayout(), qubits);
    } else if keysize == 521 {
        use qubits = Qubit[4258];
        AccountForEstimates([
            TCount(62534723830),          // 1.82 * 2.0^35
            MeasurementCount(1707249501)  // 1.59 * 2.0^30
        ], PSSPCLayout(), qubits);
    } else {
        fail $"keysize {keysize} is not supported";
    }
}
We can estimate this Q# program by submitting it to an Azure Quantum workspace using the azure-quantum Python package. To do so, we set up a connection to an Azure Quantum workspace (learn how to create a workspace). You can find the values for resource_id and location on the Overview page of the Quantum workspace. (The complete code example is available on GitHub.)
workspace = Workspace(
    resource_id="",
    location=""
)
estimator = MicrosoftEstimator(workspace)
We then define the input parameters for the job. In these we specify the key size, here 256. We use batching to submit multiple target parameter configurations at once. Here we specify the four configurations that correspond to the reasonable and optimistic settings for both gate-based and Majorana qubits. For all configurations, we set the error budget to 0.333, i.e., we compute physical resource estimates for a success rate of about 67%.
params = estimator.make_params(num_items=4)
params.arguments["keysize"] = 256

# Error budget
params.error_budget = 0.333

# Gate-based (reasonable)
params.items[0].qubit_params.name = QubitParams.GATE_NS_E3
# Gate-based (optimistic)
params.items[1].qubit_params.name = QubitParams.GATE_NS_E4
# Majorana (reasonable)
params.items[2].qubit_params.name = QubitParams.MAJ_NS_E4
params.items[2].qec_scheme.name = QECScheme.FLOQUET_CODE
# Majorana (optimistic)
params.items[3].qubit_params.name = QubitParams.MAJ_NS_E6
params.items[3].qec_scheme.name = QECScheme.FLOQUET_CODE
Finally, we create a job by submitting the Q# operation together with the input parameters, and retrieve the results after it has completed. We then use the result object to create a summary table using the summary_data_frame function. The table contains various entries, but in this example we only print the numbers of physical qubits and the physical runtimes, the same values that are plotted in the experience on the Azure Quantum website.
job = estimator.submit(ECCEstimates, input_params=params)
results = job.get_results()

table = results.summary_data_frame(labels=[
    "Gate-based (reasonable)",
    "Gate-based (optimistic)",
    "Majorana (reasonable)",
    "Majorana (optimistic)"
])

print()
print(table[["Physical qubits", "Physical runtime"]])
The output is as follows:
                        Physical qubits Physical runtime
Gate-based (reasonable)           5.87M         21 hours
Gate-based (optimistic)           1.54M         11 hours
Majorana (reasonable)             3.69M          8 hours
Majorana (optimistic)             1.10M          4 hours
The estimates in the table are formatted for better readability. You can also retrieve the non-formatted values; e.g., the number of physical qubits and the physical runtime for the first configuration (gate-based, reasonable) are accessed with results[0]["physicalCounts"]["physicalQubits"] and results[0]["physicalCounts"]["runtime"], respectively.
RSA is one of the oldest, yet still widely used, public-key cryptography approaches. The paper How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits (arXiv:1905.09749) describes an implementation to factor RSA integers based on state-of-the-art quantum operations for phase estimation and quantum arithmetic. The code is mostly similar to the code we used for the ECC estimates described above. However, we implemented the algorithm in Rust and compiled it to LLVM. Therefore, we submit the QIR, which is the LLVM output, directly to the Azure Quantum Resource Estimator. (The complete code example is available on GitHub.)
import urllib.request
bitcode = urllib.request.urlopen("https://aka.ms/RE/someniceuri").read()
The entry point in this implementation takes four input arguments: the actual product (in this sample, the 2048-bit RSA integer from the RSA challenge), a generator, and two parameters to control windowed arithmetic in the implementation. We take their values from the paper, in which 5 is suggested as a good value for both of them. Then, we configure the qubit parameters and QEC scheme in the input parameters as above, and submit them together with the bitcode to the resource estimator.
params = estimator.make_params(num_items=4)
params.arguments["product"] = "25195908475657893494027183240048398571429282126204032027777137836043662020707595556264018525880784406918290641249515082189298559149176184502808489120072844992687392807287776735971418347270261896375014971824691165077613379859095700097330459748808428401797429100642458691817195118746121515172654632282216869987549182422433637259085141865462043576798423387184774447920739934236584823824281198163815010674810451660377306056201619676256133844143603833904414952634432190114657544454178424020924616515723350778707749817125772467962926386356373289912154831438167899885040445364023527381951378636564391212010397122822120720357"
params.arguments["generator"] = 7
params.arguments["exp_window_len"] = 5
params.arguments["mul_window_len"] = 5
# specify error budget, qubit parameter and QEC scheme assumptions
params.error_budget = 0.333
# ...
job = estimator.submit(bitcode, input_params=params)
results = job.get_results()
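As background for the exp_window_len and mul_window_len parameters above, here is a classical sketch of the windowing idea (an illustration only, not the quantum circuit from the paper): the exponent is consumed a fixed number of bits at a time, using a precomputed table so that only one multiplication is needed per window. The quantum version replaces the table access with a lookup circuit, which is where the cost trade-off controlled by the window length comes from.

```python
# Classical sketch of windowed modular exponentiation. With window width w,
# we precompute g^0 .. g^(2^w - 1) mod n, then scan the exponent from its
# most significant window down, doing one table lookup and one multiply per
# window instead of one multiply per bit.
def windowed_pow(g: int, e: int, n: int, w: int = 5) -> int:
    table = [pow(g, i, n) for i in range(2**w)]  # precomputed powers of g
    result = 1
    top = max(e.bit_length() - 1, 0) // w * w    # shift of the topmost window
    for shift in range(top, -1, -w):
        window = (e >> shift) & (2**w - 1)       # next w bits of the exponent
        result = pow(result, 2**w, n)            # shift accumulated exponent by w bits
        result = (result * table[window]) % n    # one multiplication per window
    return result

print(windowed_pow(7, 65537, 999983) == pow(7, 65537, 999983))  # True
```

A larger window means fewer multiplications but an exponentially larger lookup table, which is why the paper tunes both window lengths rather than simply maximizing them.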
The code for evaluating the data is the same and returns the following table:
                         Physical qubits Physical runtime
Gate-based (reasonable)           25.17M            1 day
Gate-based (optimistic)            5.83M         12 hours
Majorana (reasonable)             13.40M          9 hours
Majorana (optimistic)              4.18M          5 hours
We can use the same program to compute resource estimates for other RSA integers, including the RSA challenge numbers RSA-3072 and RSA-4096, whose estimates are part of the cryptography experience on the Azure Quantum website.
The Advanced Encryption Standard (AES) is a symmetric-key algorithm and an encryption standard for the US federal government. To obtain the physical resource estimates for breaking AES, we started from the logical estimates in Implementing Grover oracles for quantum key search on AES and LowMC (arXiv:1910.01700, Table 8), with updates to the qubit counts suggested in Quantum Analysis of AES (Cryptology ePrint Archive, Paper 2022/683, Table 7). In principle, we could follow the approach using the AccountForEstimates operation as we did for ECC. However, this operation and the logical counts in the Azure Quantum Resource Estimator are represented using 64-bit integers for performance reasons, while the AES estimates require 256-bit integers. As a result, we used an internal non-production version of the resource estimator that can handle this precision. Further details can be made available to researchers who run into similar precision issues in their resource estimation projects.
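To illustrate why the precision matters, here is a back-of-the-envelope sketch (the per-iteration cost is an assumed round figure for illustration, not a number from the estimator or the cited papers): a Grover key search over a 256-bit key space performs on the order of 2^128 iterations, so the resulting logical gate counts overflow an unsigned 64-bit integer by a wide margin.

```python
# Sketch of why 64-bit counters are insufficient for AES-256 estimates.
# Grover search over a 256-bit key space needs ~2**128 iterations; the
# per-iteration T-gate cost below is a hypothetical round figure.
UINT64_MAX = 2**64 - 1

grover_iterations = 2**128         # ~sqrt(2**256) iterations for a 256-bit key
t_gates_per_iteration = 1_000_000  # assumed round figure, for illustration

logical_t_count = grover_iterations * t_gates_per_iteration
print(logical_t_count > UINT64_MAX)    # True: far exceeds the 64-bit range
print(logical_t_count.bit_length())    # 148: fits comfortably in 256 bits
```

Python's arbitrary-precision integers sidestep the issue here, but a counter stored in a fixed 64-bit field, as in the production estimator, cannot represent such counts, which is what motivated the 256-bit internal build.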
The Azure Quantum Resource Estimator can be applied to any quantum algorithm, not only cryptanalysis. Learn how to get started in Azure Quantum today with the Azure Quantum documentation. There you will find how to explore all the rich capabilities in various notebooks, with applications in quantum chemistry, quantum simulation, and arithmetic. You can learn how to submit your own quantum programs written in Q#, Qiskit, or provided directly as QIR, as well as how to set up advanced resource estimation experiments and apply customizations such as space/time tradeoffs.
The post Calculating resource estimates for cryptanalysis appeared first on Q# Blog.