The post Exploring spacetime tradeoffs with Azure Quantum Resource Estimator appeared first on Q# Blog.
We are delighted to present a new experience for exploring spacetime tradeoffs, recently added to the Azure Quantum Resource Estimator. Available both in the Azure Quantum Development Kit (VS Code extension) and as a Python package, it adds a new dimension to estimates.
Resource estimation doesn’t just yield a single group of numbers (one per objective), but rather multiple points representing tradeoffs between objectives, such as qubit number and runtime. Our recent update of the Azure Quantum Resource Estimator adds methods for finding such tradeoffs for a given quantum algorithm and a given quantum computing stack. We also provide a visual experience to navigate alternatives with an interactive chart and supplementary reports and diagrams:
This chart illustrates tradeoffs between the qubit numbers and runtimes required for running the same algorithm across multiple projected quantum computers. See estimation-frontier-widgets.ipynb to learn how to generate this diagram.
More specifically, we considered simulating the dynamics of a quantum magnet: the so-called Ising model on a 10×10 square lattice. This is the simplest model of ferromagnetism in a quantum system, and the algorithm simulates its evolution over time. At this system size the problem cannot be simulated on classical computers in reasonable time, so a quantum solution would be highly desirable.
The diagram above and the table show that this algorithm requires 230 logical qubits with low error rates. Such logical qubits don't exist yet, and each will require hundreds of noisy physical qubits. So, the total number of high-error-rate physical qubits required for the simulation ranges from 33,000 to 261,340.
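As a quick sanity check on those figures, dividing the quoted physical-qubit totals by the 230 logical qubits gives the implied physical-per-logical overhead (a rough whole-machine average that also absorbs T-factory qubits, so treat it as illustrative only):

```python
# Numbers quoted above: 230 logical qubits, and total physical-qubit
# counts ranging from 33,000 to 261,340 across the projected machines.
logical_qubits = 230
physical_low, physical_high = 33_000, 261_340

# Implied physical qubits per logical qubit at each end of the range.
ratio_low = physical_low / logical_qubits
ratio_high = physical_high / logical_qubits

print(f"{ratio_low:.0f} to {ratio_high:.0f} physical qubits per logical qubit")
```

The result, roughly 140 to 1,100 physical qubits per logical qubit, matches the "hundreds of noisy physical qubits" claim above.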
You can also see on the chart that increasing the number of utilized physical qubits by 10–35 times reduces the runtime by 120–250 times. A thoughtful analysis of tradeoffs, both for entire algorithms and for subroutines, can save a lot of runtime if extra qubit resources are available.
Tradeoffs between the number of physical qubits and the runtime in quantum computations are analogous to those between space and time utilization in classical computing. As we did above, for a given algorithm one can start by computing the minimal number of physical qubits required for its execution on a given quantum stack, and then deduce the corresponding runtime. If more physical qubits are available, one can reduce the runtime by parallelizing execution of the algorithm or its subroutines.
One can build multiple estimates by allowing more and more physical qubits and improving the runtime. We can restrict attention to efficient estimates: for any pair of them, one is better with respect to runtime and the other with respect to the number of physical qubits. The set of such estimates forms the so-called Pareto frontier, which appears as a monotonically decreasing curve on the spacetime diagram.
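Extracting such a frontier from a set of (physical qubits, runtime) estimates is a small computation in its own right; here is a minimal Python sketch (the estimate values are made up for illustration):

```python
def pareto_frontier(estimates):
    """Keep only estimates not dominated on both objectives.

    Each estimate is a (physical_qubits, runtime) pair; an estimate is
    dominated if another one is at least as good on both objectives and
    strictly better on one.
    """
    frontier = []
    for qubits, runtime in sorted(estimates):  # ascending qubit count
        # With qubit counts sorted ascending, a point survives only if
        # its runtime beats every point using fewer (or equal) qubits.
        if not frontier or runtime < frontier[-1][1]:
            frontier.append((qubits, runtime))
    return frontier

# Hypothetical (physical qubits, runtime in seconds) estimates:
points = [(33_000, 500.0), (90_000, 40.0), (120_000, 45.0), (261_340, 4.0)]
print(pareto_frontier(points))
```

The (120,000, 45.0) point is dominated by (90,000, 40.0), so it is dropped; the surviving points form a monotonically decreasing curve.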
Just as in classical programs, there are many opportunities for spacetime tradeoffs in the choice of quantum algorithms and their implementation. Here we want to discuss another, quantum-specific opportunity. Rotation gates that rotate logical qubits by arbitrary angles require so-called magic states, which are generated by a process known as magic state distillation in sets of qubits called "magic state factories". Many quantum stacks use the T gate as the only magic gate; the corresponding states and factories then become T-states and T-factories, and we use those names in the Resource Estimator.
T-state generation subroutines are executed in parallel with the main algorithm. Let us start with a single T-factory. For some algorithms, it could produce enough T-states to match the algorithm's consumption. For other algorithms, requiring more T-states, execution will be slowed down while waiting for the next T-state to be produced. Note that idling is not free in the quantum world, because errors accumulate in quantum states while waiting. Longer runtimes might thus require a higher error-correction code distance, and with it more physical qubits and longer runtimes than might naively be estimated.
If an algorithm waits for new T-states and more qubits are available, we can add T-factories to produce more T-states. This saves runtime at the cost of more physical qubits. Given enough physical qubits, we can increase the number of T-factories until they produce enough T-states for the algorithm's consumption without idling. This gives the shortest runtime of the algorithm. For example, the algorithm considered above could efficiently use up to 172–251 T-factories depending on the computing stack, spending from 92.29% to 98.40% of its resources on T-state distillation.
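The reasoning behind the T-factory count can be sketched with a little arithmetic: a factory that takes a fixed time per distillation round can run only so many rounds during the algorithm's execution, so enough factories must run in parallel to cover the total T-state demand. A hypothetical Python sketch (none of the durations or counts here are taken from the estimates above):

```python
import math

def factories_needed(t_count, runtime_ns, factory_runtime_ns, t_states_per_run):
    """Smallest number of parallel T-factories whose combined output over
    the algorithm's runtime covers its total T-state consumption."""
    # Sequential distillation rounds one factory fits into the runtime.
    runs_per_factory = runtime_ns // factory_runtime_ns
    return math.ceil(t_count / (runs_per_factory * t_states_per_run))

# Hypothetical numbers: 1e9 T-states consumed over a 10 s (1e10 ns) run,
# with a factory that takes 50 us per round and yields 1 T-state per round.
print(factories_needed(t_count=1_000_000_000,
                       runtime_ns=10_000_000_000,
                       factory_runtime_ns=50_000,
                       t_states_per_run=1))  # → 5000
```

Fewer factories than this would leave the algorithm idling between T-state deliveries; more would waste qubits without shortening the runtime.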
As shown in estimation-frontier-widgets.ipynb, to estimate the resources required to run a Q# program, one runs
result = qsharp.estimate(entry_expression, params)
where "entry_expression" refers to the entry point of the program and "params" can cover multiple quantum stack configurations as well as estimation parameters.
When “estimateType”: “frontier” is set, the estimator searches for the whole frontier of estimates; otherwise, it looks for the shortest-runtime solution only.
Executing the
EstimatesOverview(result)
command visualizes all the estimates found in result (both frontier and individual) with the spacetime diagram and the summary table.
Selecting a row in the summary table or a point on the spacetime diagram generates the space diagram and a detailed report:
“EstimatesOverview” supports optional parameters for custom color schemes on the spacetime diagram and custom series names for the summary table.
More tips and tricks for “EstimatesOverview” and the supplementary visualization elements are available in estimation-frontier-widgets.ipynb.
Estimating resources for quantum algorithm executions goes beyond providing a single pair of numbers — the runtime and the number of physical qubits. It requires constructing and analyzing the entire frontier of tradeoffs between those objectives. The Azure Quantum Resource Estimator allows you to build and explore those tradeoff frontiers and more accurately evaluate your requirements. With this new data, you can determine if you need to improve your algorithm, develop new error correction codes, or explore alternate qubit technologies.
The Azure Quantum team is committed to continuous improvements in the Resource Estimator. This tool supports both our internal teams and external researchers in the pursuit of designing quantum computers.
Our primary focus is on enhancing the precision of estimates and offering expanded estimation capabilities.
We eagerly welcome your feedback on the specific custom options you require for estimating your quantum computer resources. Your insights will play a vital role in refining our tool, making it even more effective for the entire quantum community.
The post Design Fault Tolerant Quantum Computing applications with the open-source Resource Estimator appeared first on Q# Blog.
Quantum computing has the potential for widespread societal and scientific impact, and many applications have been proposed for quantum computers. The quantum community has reached a consensus that NISQ machines do not offer practical quantum advantage and that it is time to graduate to the next of the three implementation levels.
Unlike computing with transistors, basic operations with qubits are much more complicated and an order of magnitude slower. We now understand that practical quantum advantage will be achieved for small-data problems that offer superpolynomial quantum speedup (see T. Hoefler et al., CACM 66, 82–87). This includes, specifically, the simulation of quantum systems in quantum physics, chemistry, and materials science.
But at a basic level, there are still many remaining open questions: What are the most promising and useful quantum algorithms on which to build useful quantum applications? Which quantum computing architectures and qubit technologies can reach the necessary scale to run such quantum accelerated applications? Which qubit technologies are well suited to practical quantum supercomputers? Which quantum computing technologies are unlikely to achieve the necessary scale?
That’s why we need the Resource Estimator to help us answer these questions and guide today’s research and development toward logical qubit applications.
Achieving practical quantum advantage will require improvements and domain expertise at every level of the quantum computing stack. A unified opensource tool to benchmark solutions and collaborate across disciplines will speed up our path toward a quantum supercomputer: this is the premise of Azure Quantum Resource Estimator.
Whether you are developing applications, researching algorithms, designing language compilers and optimizers, creating new error correction codes, or working on R&D for faster, smaller and more reliable qubits, the Resource Estimator helps you assess how your theoretical or empirical enhancements can improve the whole stack.
As an individual researcher, you can leverage prebuilt options to focus on your area. If you are part of a team, you can work collectively at every level of the stack and see the results of your combined efforts.
The Resource Estimator is an estimation platform that lets you start with minimal inputs, abstracting the many specificities of quantum systems. If you require more control, you can adjust and explore a vast number of system characteristics.
The Resource Estimator can quickly explore thousands of possible solutions. This accelerates the development lifecycle and lets you easily review tradeoffs between computation time and number of physical qubits.
The table below summarizes some of the ways you can adapt the Resource Estimator to your needs, allowing you to specify both the description of the quantum system and to control the exploration of estimates. Explore all available parameters.
Describe your system | Explore and control estimates
*Currently requires an Azure Subscription
If you are ready to get started, you can choose from:
Read more from the documentation.
To join the discussion or contribute to the development of the Resource Estimator, visit https://aka.ms/AQ/RE/OpenSource.
2024-01-29 update: This feature is now available. Learn more from the Pareto frontier documentation.
Understanding the tradeoff between runtime and system scale is one of the more important aspects of resource estimation. To help you better understand and visualize the tradeoffs, the Resource Estimator will soon provide fully automated exploration and graphics, such as the one below:
Make sure to subscribe to the Q# blog to be notified of this feature’s availability.
The post Announcing v1.0 of the Azure Quantum Development Kit appeared first on Q# Blog.
As outlined in an earlier blog post, this is a significant rewrite of the prior QDK, with an emphasis on speed, simplicity, and a delightful experience. Review that post for the technical details on how we rebuilt it, but at a product level the rewrite has enabled us to make some incredible improvements that exceeded the expectations we set out with, some highlights being:
And much more! This post will include lots of video clips to try and highlight some of these experiences (all videos were recorded in real time).
For the fastest getting-started experience, just go to https://vscode.dev/quantum/playground/. The QDK extension for VS Code works fully in VS Code for the Web, and this URL loads an instance of VS Code in the browser with the QDK extension preinstalled, along with a virtual file system preloaded with some common quantum algorithms. You can experiment here, then simply close the browser tab when done, without installing anything or accessing any files on your local machine.
If using VS Code on your local machine (or using https://vscode.dev directly), then installing the extension is a snap. Simply go to the VS Code Extension Marketplace, search for “QDK”, and install the “Azure Quantum Development Kit” extension published by “Microsoft DevLabs” (direct link). The extension is lightweight with no dependencies and will install in seconds, as shown below.
Once the extension is running, you can open a Q# file (with a .qs extension) and start coding. The below clip demonstrates how to create a new Q# file, use one of the sample ‘snippets’ to quickly insert a well-known algorithm, and then use the built-in simulator to run the code and see the output (including quantum state dumps and debug messages).
(Note: If unfamiliar with Q#, or quantum development in general, then the Quantum Katas are a great way to learn in an interactive, AI-assisted experience).
We believe the true power of quantum computing will be realized once we reach “scalable quantum computing”, and the Q# language was designed for this. It includes both higher level abstractions to more naturally express quantum operations, as well as being a typed language to help develop, refactor, and collaborate on more complex programs. (See the “Why do we need Q#” blog post for more background).
For this release we’ve invested heavily on the editor features developers expect from a modern and productive language. This includes:
The Q# editor provides completion lists, auto-open of namespaces, signature help, hover information, go-to-definition, rename identifier, syntax and type-checking errors, and more! All behave as developers familiar with other strongly typed languages such as Rust, C#, TypeScript, etc. have come to expect.
We’ve designed the experience to be as smooth as possible and to work as fast as you can type. Many of these features are available not only while editing Q# files directly, but also when writing Q# code in Jupyter Notebook cells, as shown in the clip below.
A quantum simulator is critical when developing quantum programs, and the QDK includes a sparse simulator that enables the output of diagnostic messages and quantum state as it runs in both the VS Code extension and the Python package.
The VS Code integration takes this up a notch, and the QDK brings a powerful debugging experience to Q# development. You can set breakpoints, step in and out of operations, and view both the quantum and classical state as you step through the code. It also includes some quantum-specific goodness, such as stepping through loops & operations backwards when running the generated adjoint of an operation. We’re very excited about the productivity this can unlock, and about some of the ideas for where we could take it even further in future releases.
Today’s quantum hardware is still quite limited in terms of practical application, and we are still in what is termed the “Noisy Intermediate Scale Quantum” era. We consider this Level 1 in a roadmap to a quantum supercomputer. The industry is making great strides towards Level 2 currently, when it will become possible to start using “logical qubits” on real hardware. Achieving practical quantum advantage for useful problems will require logical qubits.
As with early classical computers, there will be considerable resource constraints for a number of years. (My first computer had 16KB of RAM and a cassette tape for storage!). Developing code that can squeeze the most out of the hardware will be critical to building useful applications and to advancing the field generally. There are numerous factors such as qubit types, error correction schemas, layout & connectivity, etc. that determine how a program using logical qubits maps to physical resource requirements.
Over the past year we’ve built numerous capabilities into our Azure Quantum service to assist with Resource Estimation (see the docs for details). With this release of the QDK, we’re bringing many of those capabilities directly into the client, enabling a rapid getting started experience and a very fast innerloop to enable quantum developers to experiment and view resource requirements for their code as quickly as possible. This is an area we will continue to invest in to add capabilities for developers & researchers throughout the quantum stack to make rapid progress and develop new insights.
In the below clip showing VS Code in the browser, the “Calculate Resource Estimates” command is run to view the estimates for various qubit types and other parameters. Once complete, this brings up a comparison table, and as rows are selected a visualization chart and detailed table of results is shown for the selected hardware configuration.
If you’d like to try this exact code in the Resource Estimator, you can visit the code sharing link used in the video. (Note this code is designed for resource estimation and is unlikely to finish if you try to actually run it in the simulator).
The QDK extension in VS Code enables you to connect to a Quantum Workspace in your Azure Subscription. You can then directly submit your Q# program from the editor to one of our hardware partners. You can see the status of the jobs and download the results when completed. This provides for a simple and streamlined experience, reducing the need to switch to CLI tools or Python code to work with the service. (Though using the service via those methods is still fully supported).
Current quantum hardware is limited compared to simulator capabilities, and thus the compiler must be set to the ‘base’ profile in the QDK for programs to be runnable on a real quantum machine. If the compiler is set to ‘base’ profile and a program tries to use unavailable capabilities, then the editor will immediately show an error, avoiding the need to submit potentially invalid code and then wait to see if an error occurs from the service.
Note: VS Code had already signedin and authenticated with the subscription account in this recording. On first run you may need to authenticate with the Microsoft Account for the subscription and consent to access.
There are more editor features than can be covered here, including builtin histograms, project support, viewing the QIR for a compiled program, etc. See the documentation for more details.
Much work in the quantum space happens via Python in Jupyter Notebooks. Beyond the rich tooling for working with Q# directly, we’ve also revamped and refined our Python packages and Jupyter Notebooks support.
For general Q# simulation and compilation all you need is “pip install qsharp”. This package is only a couple of MB with no dependencies, and is compiled to binary wheels for Windows, Mac, and Linux on x64 and ARM64 – so installation should be pain-free and near instant in most environments. If you will be using Jupyter Notebooks, then you may also want to install the “qsharp-widgets” package for some nice visualizations for resource estimation and histograms.
If you will be using JupyterLab in the browser, then install the ‘qsharp-jupyterlab’ package to get Q# cell syntax highlighting. However, we recommend using the VS Code support for Jupyter Notebooks, as this provides some of the rich language service features outlined above when working with Q#.
You can use the VS Code command “Create an Azure Quantum Notebook” to generate a sample Jupyter Notebook. If you have connected to an Azure Quantum Workspace already as outlined above, then this Notebook will be prepopulated with the correct Azure Quantum Workspace connection settings.
If you were using the prior QDK, which we now refer to as the ‘Classic QDK’ (with this release being the ‘Modern QDK’), then this will be a substantial change. While we have endeavored to make the Q# code compatible where we could, the new architecture removes a lot of the prior project infrastructure, such as .csproj-based projects, NuGet package distribution, C# integration, etc. Existing projects and samples will need to be ported to move from the ‘Classic QDK’ to the ‘Modern QDK’. The ‘Classic QDK’ will still be available to run existing code, but the ‘Modern QDK’ is the basis for future releases and we recommend moving to it when you can.
While it really has been fun and rewarding getting to 1.0, it is the beginning of a journey. We have many new features and improvements we are keen to start tackling, including improvements to the Q# language, more powerful resource estimation capabilities, package management for code sharing, advanced compiler capabilities (such as better hardware targeting), richer visualizations, better documentation & samples, and much more.
We’d love to have your input in these decisions, as you are who we are building these tools for, so please do get involved and give us your feature requests and feedback on our issue tracker at https://github.com/microsoft/qsharp/issues . (And if you do encounter any bugs with the QDK, this is the place to log those too!).
The team is very excited to reach this milestone, and hope you have as much fun using it as we did building it. Please do give it a try, give us your feedback, and tell us what you’d like to see next!
The post Interning at Microsoft Quantum – 2024 appeared first on Q# Blog.
We are excited to announce that applications for Microsoft Quantum’s research internships 2024 are open!
Apply for the Microsoft Quantum research internship
We encourage early applications!
Research internships target graduate students currently enrolled in a Master’s or a PhD program (note that you have to be enrolled as a student both at the time of application and at the time of the actual internship). These internships focus on the exploration of new research directions under the guidance of full-time researchers on our team. We are seeking candidates specializing in areas such as quantum algorithms, quantum chemistry, quantum error correction, quantum benchmarking, physics device modeling and characterization, and machine learning.
Here are several highlights from this year’s research internship projects:
You can find additional examples of research internship projects from earlier years and the papers written about them in the 2022 internships announcement.
Internships will be hosted at our offices in Redmond, WA, USA. International students are welcome to apply! (All interns must be able to obtain US work authorization.)
Our internships are a great opportunity to get familiar with the research done in the quantum industry and contribute to the work done by the Microsoft Quantum team. They also offer a lot of fun experiences as part of the greater Microsoft Internship program, from yearly puzzle events such as the Microsoft Puzzleday and the Microsoft Intern Game to the social events where you can meet your fellow interns and researchers from all over the company and learn about the variety of career paths available in different disciplines!
The post Defining logical qubits: Criteria for Resilient Quantum Computation appeared first on Q# Blog.
The next step toward practical quantum advantage, and Level 3 Scale, is to demonstrate resilient quantum computation on a logical qubit. Resilience in this context means the ability to show that quantum error correction helps—rather than hinders—nontrivial quantum computation. However, an important element of this nontriviality is the interaction between logical qubits and the entanglement it generates, which means resilience of just one logical qubit will not be enough. Therefore, demonstrating two logical qubits performing an error-corrected computation that outperforms the same computation on physical qubits will mark the first demonstration of a resilient quantum computation in our field’s history.
Before our industry can declare victory on reaching Level 2 Resilient Quantum Computing by performing such a demonstration on given quantum computing hardware, it’s important to agree on what this entails and on the path from there to Level 3 Scale.
The most meaningful definition of a logical qubit hinges on what one can do with that qubit – demonstrating a qubit that can only remain idle, that is, be preserved in memory, is not as meaningful as demonstrating a nontrivial operation. Therefore, we define a logical qubit such that it initially allows some nontrivial, encoded computation to be performed on it.
A significant challenge in formally defining a logical qubit is accounting for distinct hardware; for example, the definition should not favor one hardware over another. To address this, we propose a set of criteria that marks the entrance into the resilient level of quantum computation. In other words, these are the criteria for calling something a “logical qubit”.
Entrance criteria to Level 2
Graduating to Level 2 resilient quantum computing is achieved when fewer errors are observed on the output of a logical, error-corrected quantum circuit than on the analogous physical circuit without error correction.[1] We also require that a resilient-level demonstration include some uniquely “quantum” feature. Otherwise, the demonstration reduces to simply a novel demonstration of probabilistic bits.
Arguably the most natural “quantum” feature to demonstrate in this regard is entanglement. A demonstration of the resilient level of quantum computation should then satisfy the following criteria:
Upon satisfaction of these criteria, the term “logical qubit” can then be used to refer to the encoded qubits involved.
The distinction between the Resilient and Scale levels is worth emphasizing — a proof-of-principle demonstration of resiliency must be convincing, but it does not require a fully scaled machine. For this reason, a resilient-level demonstration may use certain forms of postselection. Postselection here means the ability to accept only those runs that satisfy specific criteria. Importantly, the chosen postselection method must not replace error correction altogether, as error correction is central to the type of resiliency that Level 2 aims to demonstrate.
Measuring progress across Level 2
Once entrance to the Resilient Level is achieved, as an industry we need to be able to measure continued progress toward Level 3. Not every type of quantum computing hardware will achieve Level 3 Scale; the requirements to reach practical quantum advantage at Level 3 include achieving upwards of 1000 logical qubits operating at a mega-rQOPS with logical error rates better than 10^{-12}. And so it is critical to be able to understand advancements within Level 2 toward these requirements.
Inspired in part by DiVincenzo’s criteria, we propose to measure progress along four axes: universality, scalability, fidelity, and composability. For each axis we offer the following ideas on how to measure it, with hopes the community will build on them:
Criteria to advance from Level 2 to Level 3 Scale
The exit of the resilient level of logical computation will be marked by large-depth, high-fidelity computations involving upwards of hundreds of logical qubits. For example, a logical, fault-tolerant computation on ~100 logical qubits or more with a universal set of composable logical operations with an error rate of ~10^{-8} or better will be necessary. At Level 3, performance of a quantum supercomputer can then be measured by reliable quantum operations per second (rQOPS). Ultimately, a quantum supercomputer will be achieved once the machine is able to demonstrate 1000 logical qubits operating at a mega-rQOPS with a logical error rate of 10^{-12} or better.
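The rQOPS metric itself is just the product of the logical-qubit count and the logical clock rate, so the quoted target pins down the implied clock speed; a minimal sketch (treating rQOPS = logical qubits × logical clock rate, the definition used in Microsoft's roadmap):

```python
def rqops(logical_qubits, logical_clock_hz):
    """Reliable quantum operations per second:
    logical-qubit count times logical clock rate."""
    return logical_qubits * logical_clock_hz

# The quantum-supercomputer target quoted above: 1,000 logical qubits
# at a mega-rQOPS implies a 1 kHz logical clock rate.
target = 1_000_000
implied_clock_hz = target / 1_000
print(implied_clock_hz, "logical cycles per second per qubit")
```

Note how the metric couples scale and speed: halving the logical cycle time buys as much rQOPS as doubling the logical-qubit count.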
It’s no doubt an exciting time to be in quantum computing. Our industry is at the brink of reaching the next implementation level, Level 2, which puts our industry on path to ultimately achieving practical quantum advantage. Together as a community we have an opportunity to help measure progress across Level 2, and to introduce benchmarks for the industry. If you have ideas or feedback on criteria to enter Level 2, or how to measure progress, we’d love to hear from you.
[1] Our criteria build on and complement criteria of both DiVincenzo (DiVincenzo, David P. (2000). “The Physical Implementation of Quantum Computation”. Fortschritte der Physik 48 (9–11): 771–783) and Gottesman (Gottesman, Daniel (2016). “Quantum fault tolerance in small experiments”. https://arxiv.org/abs/1610.03507), who have previously outlined important criteria for achieving quantum computing and its fault tolerance.
The post Calculating resource estimates for cryptanalysis appeared first on Q# Blog.
This blog offers an inside look into the computation of these estimates. Our resource estimator supports various input formats for quantum programs, including Q# and Qiskit, which are then translated into QIR, the Quantum Intermediate Representation. In addition to customizable qubit parameters, the experience also uses predefined models. To estimate physical hardware resources from logical resource counts extracted from papers (counts that do not account for the overhead of quantum error correction), we use a specialized resource estimation operation in Q#. Furthermore, we developed one algorithm in Rust and translated it into QIR by leveraging the LLVM framework, which also underpins QIR. The following three sections delve into the specific details for each encryption algorithm addressed in our interactive experience.
In the experience we compare the following three cryptographic algorithms at different key strengths (for elliptic curve cryptography, these correspond to concrete prime-field Weierstrass curves, which you can look up via the link):
Algorithm      | Standard | Enhanced | Highest
---------------|----------|----------|--------
Elliptic curve | P-256    | P-384    | P-521
RSA            | 2048     | 3072     | 4096
AES            | 128      | 192      | 256
In the estimation, we assume that the quantum algorithm is lowered to a sequence of physical quantum gates. For these we assume the following choices of qubit types and error rates, based on predefined qubit parameters available in the resource estimator. The Majorana and gate-based predefined parameters in the resource estimator correspond to the topological and superconducting qubit types in the experience, respectively.
Qubit type and error rate | Majorana (reasonable) | Majorana (optimistic) | Gate-based (reasonable) | Gate-based (optimistic)
--------------------------|-----------------------|-----------------------|-------------------------|------------------------
Measurement time          | 100 ns                | 100 ns                | 100 ns                  | 100 ns
Gate time                 | 100 ns                | 100 ns                | 50 ns                   | 50 ns
Measurement error rate    | 0.0001                | 0.000001              | 0.001                   | 0.0001
Gate error rate           | 0.05                  | 0.01                  | 0.001                   | 0.0001
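To see how such physical error rates translate into error-correction overhead, one can apply the exponential-suppression model used in the resource estimator's surface-code assumptions, p_logical ≈ A · (p_physical / p_threshold)^((d+1)/2). The pre-factor A = 0.03 and threshold p_threshold = 0.01 below match the values used in arXiv:2211.07629, but treat this sketch as illustrative rather than a reproduction of the estimator:

```python
def min_code_distance(p_phys, p_logical_target, a=0.03, p_threshold=0.01):
    """Smallest odd surface-code distance d such that
    a * (p_phys / p_threshold) ** ((d + 1) / 2) <= p_logical_target.
    Assumes p_phys is below threshold, so the suppression converges."""
    ratio = p_phys / p_threshold
    d = 3
    while a * ratio ** ((d + 1) / 2) > p_logical_target:
        d += 2  # surface-code distances are odd
    return d

# Gate-based (reasonable) physical error rate 1e-3, targeting a 1e-12
# logical error rate:
print(min_code_distance(1e-3, 1e-12))  # → 21
```

Each step down in physical error rate (e.g., from 1e-3 to 1e-4) roughly halves the required distance, which is why the optimistic qubit parameters yield far smaller machines.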
Elliptic curve cryptography (ECC) is a public-key cryptography approach based on the algebraic structure of elliptic curves. The approach requires smaller key sizes compared to approaches such as RSA, while providing equal security against classical cryptanalysis methods. The paper Improved quantum circuits for elliptic curve discrete logarithms (arXiv:2001.09580) describes a quantum algorithm to solve the elliptic curve discrete logarithm problem (ECDLP) based on Shor’s algorithm. We make use of the Q# operation AccountForEstimates (also find details on how to use the operation), which allows us to derive physical resource estimates from previously computed logical ones. This operation is very helpful when logical estimates have already been computed, as for example in this paper, where they are listed as part of Table 1.
From that table we extract the relevant metrics: the number of T gates, the number of measurement operations, and the number of qubits. The other metrics are not relevant for the computation, since the physical resource estimation relies on Parallel Synthesis Sequential Pauli Computation (PSSPC, Appendix D in arXiv:2211.07629), which commutes all Clifford operations and replaces them by multi-qubit Pauli measurements. The paper discusses various optimization flags in the implementation to minimize the logical qubit count, the T count, or the logical depth. We found that the physical resource estimates are best, both for physical qubits and runtime, when using the option to minimize qubit count. The following Q# program includes the estimates for the considered key sizes 256, 384, and 521.
open Microsoft.Quantum.ResourceEstimation;

operation ECCEstimates(keysize : Int) : Unit {
    if keysize == 256 {
        use qubits = Qubit[2124];
        AccountForEstimates([
            TCount(7387343750),          // 1.72 * 2.0^32
            MeasurementCount(118111601)  // 1.76 * 2.0^26
        ], PSSPCLayout(), qubits);
    } else if keysize == 384 {
        use qubits = Qubit[3151];
        AccountForEstimates([
            TCount(25941602468),         // 1.51 * 2.0^34
            MeasurementCount(660351222)  // 1.23 * 2.0^29
        ], PSSPCLayout(), qubits);
    } else if keysize == 521 {
        use qubits = Qubit[4258];
        AccountForEstimates([
            TCount(62534723830),          // 1.82 * 2.0^35
            MeasurementCount(1707249501)  // 1.59 * 2.0^30
        ], PSSPCLayout(), qubits);
    } else {
        fail $"keysize {keysize} is not supported";
    }
}
We can estimate this Q# program by submitting it to an Azure Quantum workspace using the azure-quantum Python package. To do so, we set up a connection to an Azure Quantum workspace (learn how to create a workspace). You can find the values for resource_id and location on the Overview page of the quantum workspace. (The complete code example is available on GitHub.)
workspace = Workspace(
    resource_id="",
    location=""
)
estimator = MicrosoftEstimator(workspace)
We then define the input parameters for the job, specifying the key size, here 256. We use batching to submit multiple target parameter configurations at once; here we specify the four configurations that correspond to the reasonable and optimistic settings for both gate-based and Majorana qubits. For all configurations, we set the error budget to 0.333, i.e., we compute physical resource estimates for a success probability of about 67%.
params = estimator.make_params(num_items=4)
params.arguments["keysize"] = 256
# Error budget
params.error_budget = 0.333
# Gate-based (reasonable)
params.items[0].qubit_params.name = QubitParams.GATE_NS_E3
# Gate-based (optimistic)
params.items[1].qubit_params.name = QubitParams.GATE_NS_E4
# Majorana (reasonable)
params.items[2].qubit_params.name = QubitParams.MAJ_NS_E4
params.items[2].qec_scheme.name = QECScheme.FLOQUET_CODE
# Majorana (optimistic)
params.items[3].qubit_params.name = QubitParams.MAJ_NS_E6
params.items[3].qec_scheme.name = QECScheme.FLOQUET_CODE
Finally, we create a job by submitting the Q# operation together with the input parameters, and retrieve the results after the job has completed. We then use the result object to create a summary table using the summary_data_frame function. The table contains various entries, but in this example we only print the numbers of physical qubits and the physical runtimes, the same quantities that are plotted in the experience on the Azure Quantum website.
job = estimator.submit(ECCEstimates, input_params=params)
results = job.get_results()
table = results.summary_data_frame(labels=[
    "Gate-based (reasonable)",
    "Gate-based (optimistic)",
    "Majorana (reasonable)",
    "Majorana (optimistic)"
])
print()
print(table[["Physical qubits", "Physical runtime"]])
The output is as follows:
                         Physical qubits Physical runtime
Gate-based (reasonable)            5.87M         21 hours
Gate-based (optimistic)            1.54M         11 hours
Majorana (reasonable)              3.69M          8 hours
Majorana (optimistic)              1.10M          4 hours
The estimates in the table are formatted for better readability. You can also retrieve the unformatted values; e.g., the number of physical qubits and the physical runtime for the first configuration (gate-based, reasonable) are accessed with results[0]["physicalCounts"]["physicalQubits"] and results[0]["physicalCounts"]["runtime"], respectively.
RSA is one of the oldest, yet still widely used, public-key cryptography approaches. The paper How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits (arXiv:1905.09749) describes an implementation to factor RSA integers based on state-of-the-art quantum operations for phase estimation and quantum arithmetic. The code is mostly similar to the code we used for the ECC estimates described above. However, we implemented this algorithm in Rust and compiled it to LLVM. Therefore, we submit the QIR, which is the LLVM output, directly to the Azure Quantum Resource Estimator. (The complete code example is available on GitHub.)
import urllib.request
bitcode = urllib.request.urlopen("https://aka.ms/RE/someniceuri").read()
The entry point in this implementation takes four input arguments: the actual product (in this sample the 2048-bit RSA integer from the RSA challenge), a generator, and two parameters that control the windowed arithmetic in the implementation. We take their values from the paper, in which 5 is suggested as a good value for both. Then we configure the qubit parameters and QEC scheme in the input parameters as above, and submit them together with the bitcode to the resource estimator.
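To make “windowed arithmetic” concrete, here is a classical sketch of windowed modular exponentiation: the exponent is processed window bits at a time against a precomputed table, the same idea the quantum implementation uses (coherently, over quantum lookup tables) to trade lookups for fewer expensive modular multiplications. This is an illustrative classical analogue only, not code from the paper:

```python
def windowed_modexp(base: int, exponent: int, modulus: int, window: int = 5) -> int:
    """Left-to-right fixed-window modular exponentiation.

    Precomputes base^i mod modulus for all window-sized chunks i, then
    consumes the exponent `window` bits at a time from the top.
    """
    table = [pow(base, i, modulus) for i in range(1 << window)]
    result = 1
    for shift in reversed(range(0, exponent.bit_length(), window)):
        bits = (exponent >> shift) & ((1 << window) - 1)
        result = pow(result, 1 << window, modulus)   # shift accumulated exponent up
        result = (result * table[bits]) % modulus    # fold in the next chunk
    return result

# agrees with Python's built-in modular exponentiation
check = windowed_modexp(7, 65537, 999999937, window=5)
```

With `window=5` (the value suggested in the paper), each group of five exponent bits costs one table lookup instead of five separate multiplications.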
params = estimator.make_params(num_items=4)
params.arguments["product"] = "25195908475657893494027183240048398571429282126204032027777137836043662020707595556264018525880784406918290641249515082189298559149176184502808489120072844992687392807287776735971418347270261896375014971824691165077613379859095700097330459748808428401797429100642458691817195118746121515172654632282216869987549182422433637259085141865462043576798423387184774447920739934236584823824281198163815010674810451660377306056201619676256133844143603833904414952634432190114657544454178424020924616515723350778707749817125772467962926386356373289912154831438167899885040445364023527381951378636564391212010397122822120720357"
params.arguments["generator"] = 7
params.arguments["exp_window_len"] = 5
params.arguments["mul_window_len"] = 5
# specify error budget, qubit parameter and QEC scheme assumptions
params.error_budget = 0.333
# ...
job = estimator.submit(bitcode, input_params=params)
results = job.get_results()
The code for evaluating the data is the same and returns the following table:
                         Physical qubits Physical runtime
Gate-based (reasonable)           25.17M           1 days
Gate-based (optimistic)            5.83M         12 hours
Majorana (reasonable)             13.40M          9 hours
Majorana (optimistic)              4.18M          5 hours
We can use the same program to compute resource estimates for other RSA integers, including the RSA challenge numbers RSA-3072 and RSA-4096, whose estimates are part of the cryptography experience on the Azure Quantum website.
The Advanced Encryption Standard (AES) is a symmetric-key algorithm and a standard for the US federal government. In order to obtain the physical resource estimates for breaking AES, we started from the logical estimates in Implementing Grover oracles for quantum key search on AES and LowMC (arXiv:1910.01700, Table 8), with updates on the qubit counts suggested in Quantum Analysis of AES (Cryptology ePrint Archive, Paper 2022/683, Table 7). In principle, we can follow the approach using the AccountForEstimates operation as we did for ECC. However, this operation and the logical counts in the Azure Quantum Resource Estimator are represented using 64-bit integers for performance reasons, whereas the AES estimates require 256-bit integers. As a result, we used an internal non-production version of the resource estimator that can handle this precision. Further details can be made available to researchers who run into similar precision issues in their resource estimation projects.
The Azure Quantum Resource Estimator can be applied to estimate any quantum algorithm, not only cryptanalysis. Learn how to get started in Azure Quantum today with the Azure Quantum documentation, where you will find how to explore all the rich capabilities in various notebooks, with applications in quantum chemistry, quantum simulation, and arithmetic. You can learn how to submit your own quantum programs written in Q#, Qiskit, or provided directly as QIR, as well as how to set up advanced resource estimation experiments and apply customizations such as space/time tradeoffs.
The post Calculating resource estimates for cryptanalysis appeared first on Q# Blog.
Second, they need to be kept stable, which means that error correction will be needed to combat the fundamental noise processes that disrupt the quantum computer. Creating such stability essentially means forging the underlying noisy physical qubits into more stable logical qubits and using fault-tolerant methods to implement operations. Microsoft’s unique topological qubit design has stability built in at the hardware level, and in turn will require less overhead to realize logical, fault-tolerant computation with a quantum error correcting code. No matter the underlying qubit design, advanced classical computational power will be required to keep a quantum machine stable, along with the underlying quantum error correcting code.
Finally, a quantum supercomputer will necessarily be hybrid, both in its implementation and in the solutions it runs. After all, all quantum algorithms require a combination of quantum and classical compute to produce a solution. And it is in this careful design of the classical and quantum compute, together, where we will see future innovation and new types of solutions emerge. Hybrid quantum computing enables the seamless integration of quantum and classical compute. This is an important part of our path to quantum at scale and of integrating our quantum machine alongside classical supercomputers in the cloud.
Implementing hybrid quantum algorithms
Integrated Hybrid in Azure Quantum already allows you to mix classical and quantum code together today. “This opens the door to a new generation of hybrid algorithms that can benefit from complex side-computations that happen while the quantum state of the processor stays coherent,” says Natalie Brown, Senior Advanced Physicist at Quantinuum.
A visualization of the protocol is shown here:
The number of repetitions of the loop in the middle block depends on the measurement outcome and cannot be determined in advance, i.e., this program cannot be implemented as a static quantum circuit. Once the measurements of the 4 lower qubits indicate the result “0000”, the topmost qubit is passed on as the output of the computation. If any other syndrome is measured, the 5 qubits are reset and the procedure starts over.
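The classical control flow of this repeat-until-success pattern can be sketched as follows. This is a simulation of the control flow only, with a hypothetical `measure` callback standing in for the actual mid-circuit syndrome measurement:

```python
def repeat_until_success(measure, max_attempts: int = 1000) -> int:
    """Classical skeleton of a repeat-until-success (RUS) protocol.

    Runs the block, measures the four syndrome qubits, and passes the
    topmost qubit on once the outcome is "0000"; on any other syndrome,
    the five qubits are reset and the block is retried.
    """
    for attempt in range(1, max_attempts + 1):
        syndrome = measure()  # stand-in for the mid-circuit measurement
        if syndrome == "0000":
            return attempt    # success: output qubit is passed on
        # any other syndrome: reset all 5 qubits and start the loop over
    raise RuntimeError("iteration limit reached without success")

# deterministic demo: the third attempt yields the success syndrome
outcomes = iter(["0101", "0010", "0000"])
attempts = repeat_until_success(lambda: next(outcomes))  # -> 3
```

Because the number of iterations is data-dependent, this loop cannot be unrolled into a fixed circuit; it requires exactly the kind of measurement-conditioned branching that Integrated Hybrid provides.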
What these two quantum algorithms both have in common is that they require complex control flow, including measurements that are applied during the computation while some part of the quantum computer remains coherent.
Experimental results
Recently, as shared in a paper posted on arxiv.org, a team of researchers from Microsoft and Quantinuum developed and ran MSD and RUS algorithms on the H1-Series in Azure Quantum.
The programs for the applications were written in Q# and then compiled to the Quantum Intermediate Representation (QIR), which is based on LLVM, a representation widely used in classical compilers. QIR can represent quantum and classical logic using function declarations, basic blocks, and control flow instructions. QIR also enables us to use existing LLVM tools and techniques, such as constant folding, loop unrolling, and dead code elimination, to analyze and optimize the program logic (eliminating unnecessary instructions and reducing transport steps).
Quantinuum’s H1-Series quantum computer leverages QIR in a powerful way: it allows hybrid classical/quantum programs to be executed. On the classical side, rich control flow is supported through integration with QIR, including:
These primitive building blocks can be used to orchestrate computations such as MSD and RUS.
MSD protocol based on the [[5,1,3]] quantum error-correcting code
The left side of the following figure shows the expectation values for the actual run on the Quantinuum H1-1 system, as well as the results of a simulation run on the H1-1 emulator (denoted as H1-1E). We plot the expectations with respect to the three different Pauli frames X, Y, and Z, which completely characterize the state of the qubits. The boxes indicate the ideal result, which is only achievable for gates that are completely noiseless. The right side of the figure shows the probability of distillation succeeding at different iteration limits, running both on the H1-1 system and the H1-1E emulator. The dashed black line indicates the probability of success expected for a perfect state preparation on a noiseless device.
Two-stage RUS circuit
Researchers demonstrated the viability of this RUS protocol using QIR on Quantinuum’s QCCD simulator, which models realistic noise and errors in trapped-ion systems, and by running it on the actual device. QIR was used to express four different versions of the RUS circuit, each using a different combination of recursion or loops, and Q# or OpenQASM as the source language.
As shown in the figure on the left above, the RUS protocol performs best when the Q# to QIR compiler is applied to a Q# implementation that realizes the RUS protocol as a for loop. As the iteration limit is increased, there is a clear drop in performance for the recursion implementations, while the performance of the loop implementations closely tracks the hand-optimized OpenQASM 2.0 code.
A full Q# code sample that runs in Azure Quantum and that implements this hybrid program can be found at https://aka.ms/AQ/Samples/RUS.
In this blog post, we have shown how Q# can be used to implement and optimize fault-tolerant protocols that use a hybrid approach of quantum and classical logic. We have presented two examples of such protocols, MSD and RUS circuits, and demonstrated their execution and performance through Azure Quantum on Quantinuum’s H1-Series system, which runs on an ion-trap quantum charge-coupled device (QCCD) architecture. We have also shown how QIR can leverage the LLVM toolchain to enable interoperability and portability across different quantum hardware platforms.
#quantumcomputing #quantumcloud #azurequantum #quantinuum #QIR
The post Azure Quantum Integrated Hybrid unlocks algorithmic primitives appeared first on Q# Blog.
The Azure Quantum team is excited to announce the initial preview of the new Azure Quantum Development Kit (or QDK for short). It has been entirely rebuilt on a new codebase and a new technology stack, and this blog post outlines the why, the how, and some of the benefits of doing so.
The “tl;dr” is that we rewrote it (mostly) in Rust, which compiles to WebAssembly for VS Code or the web, and to native binaries for Python. It’s over 100x smaller, over 100x faster, much easier to install & use, works fully in the browser, and is much more productive & fun for the team to work on.
Give it a try via the instructions at https://github.com/microsoft/qsharp/wiki/Installation, and read on for the details…
The existing Quantum Development Kit has grown organically over several years, first shipping in late 2017. Being in a fast-evolving space, it naturally evolved quickly too, incorporating many features and technologies along the way.
As we reflected on what we’d like the QDK to be going forward, it was clear some of the technologies and features would be a challenge to bring along, and that a rewrite might be the best solution. Some of our goals were:
Many quantum developers don’t come from a .NET background, being mostly familiar with Python. However, the existing QDK exposes much of the .NET ecosystem to developers, providing an additional learning curve. Examples include the MSBuild-based project and build system and NuGet package management. When working with customers on issues, they are sometimes confused by the need to edit .csproj files, run commands such as “dotnet clean”, or troubleshoot NuGet packages for their Q# projects.
Providing a delightful, simplified experience, from installation and learning to coding, troubleshooting, and submitting jobs to quantum computers, is our primary goal.
The existing QDK has some code and dependencies that are platform specific. While these were not problems initially, as platforms have evolved this has caused challenges. For example, Apple Silicon and Windows on ARM64 are not fully supported in the existing QDK. We also wanted the tools to run in the browser, such as in our new https://quantum.microsoft.com portal, or in a https://vscode.dev hosted editor.
With the runtime dependencies in the existing QDK, the full set of binaries that need to be installed has grown quite large. Besides the .NET runtime itself, there are some F# library dependencies in the parser, some C++ multithreading library dependencies in the simulator, some NuGet dependencies for the Q# project SDK, etc. In total, this can add up to over 180MB when installed locally after building a simple Q# project. Coordinating the download and initialization of the binaries, as well as the complexity of the interactions between them, can often lead to performance & reliability issues.
As the existing QDK had come to span multiple repositories, multiple build pipelines, multiple languages & runtimes (each often with their own set of dependencies), and multiple distribution channels, the speed at which we could check in a feature or produce a release has slowed, and a great deal of time is spent on codebase maintenance, security updates, and troubleshooting build issues. To provide a productive (and enjoyable) engineering system going forward, dramatic simplification was needed.
Around the end of 2022 we set about prototyping some ideas, which grew into the new QDK we are releasing in preview today. The basic philosophy behind engineering the new QDK is as follows:
By writing as much as possible in Rust, we have a codebase that can easily target native binaries for any platform supported by the Rust compiler (which we build into our Python wheels) and build for WebAssembly (via wasm-bindgen) to run in the browser. With a focused codebase, the resulting binaries are very small & fast too.
There is a cost to every dependency you take. The cost to learn it, the cost to install it (i.e., build times and disk space), the cost to update & maintain it (i.e., as security issues are reported), the cost to final product size, and so on. Sometimes these costs are worth paying for what you get in return, but the taxes accumulate over time. We are very mindful and minimal in the dependencies we take.
For our new codebase, we have limited the languages used to:
For those three languages, we keep dependencies to a minimum, nearly all of which can be seen in the Cargo.toml and package.json files at the root of the repo.
The below highlevel diagram shows roughly how this all fits together in our VS Code extension, Python packages, and for general web site integration.
Setting up a build environment for developers (or CI agents) should be fast. For the new codebase, currently you just install Rust, Python, and Node.js, clone one repo, and run one Python build script.
Developing the product should be fast. When working on the core compiler Rust code, the development inner loop is often as fast as clicking ‘run’ on a unit test in VS Code via the excellent “rust-analyzer” extension. When working on the TypeScript code for the VS Code extension, with “esbuild” running in watch mode it’s as quick as saving the changes and pressing F5 to launch the Extension Development Host.
The build infrastructure should be easy to keep working. Our CI and build pipeline use the same ‘build.py’ script in the root of the repo that developers use locally to build & test.
Last but certainly not least, is to avoid the extraneous. Every feature added should have a clear need and add significant value. This provides for a more streamlined & intuitive product for the customer, and a less complex codebase to do further development in.
We’re pretty proud of the result. It’s no exaggeration to say the new Azure Quantum Development Kit is 100x smaller, 100x faster, available on Windows, Mac, Linux, and the web, and is a greatly simplified user experience.
As outlined above, the existing QDK results in over 180MB of binaries locally once a project is fully built and all dependencies are installed. The VSIX package for our new VS Code extension is currently around 700KB and includes everything needed for Q# development in VS Code. (If you ‘pip install’ our Python packages to work with Q# via Python, that’s around another 1.3MB.) Installation typically takes a couple of seconds with no other dependencies. If you have VS Code (and Python/Jupyter if desired), you’re ready to install.
We have examples of programs that would take minutes to compile in the existing QDK. Those same programs are now measured in milliseconds in the new QDK. The language service is so fast, most operations are done on every keystroke and feel instant. The simulator can run 1000s of ‘shots’ per second for many common algorithms on a good laptop.
The build pipelines for the existing QDK take between 2 and 3 hours to complete, are fragile, and issues often require coordinated check-ins across multiple repos. For the new QDK, all code is in one repo, and we build, test, and push live to our online playground in around 10 minutes on every commit to main. Our publishing pipeline uses largely the same script.
We’ve built an extremely fast & reliable installation, language service, compiler, and debugger. Oh, and it all works inside the browser too!
A couple of years ago VS Code introduced VS Code for the Web (https://code.visualstudio.com/docs/editor/vscodeweb), with the ability to run the IDE in a browser with no local install, such as at https://vscode.dev or by pressing “.” when in a GitHub repo. By building our extension entirely as a web extension ALL our features run equally well in VS Code desktop or in the browser.
By way of example, the below screenshot shows loading the editor in the browser by visiting https://vscode.dev, running a Q# file under the debugger, viewing the quantum simulator output in the Debug Console, while also signed in to an Azure Quantum Workspace shown in the Explorer sidebar (to which the current program could be submitted) – all without anything needing to be installed on the local machine.
We think the improvements in the user experience for the new QDK really are a quantum leap (bad pun intended).
This is an early preview, and we still have several features to add before we get to our ‘stable’ release, some of the main ones being:
Once the core product is solid, we have a laundry list of further features and Q# language improvements we want to get to, which you can view and contribute to on our GitHub repo.
The existing QDK (https://learn.microsoft.com/en-us/azure/quantum/install-overview-qdk) is still fully supported and should be used if the new QDK Preview doesn’t meet your needs or is changing too frequently as we iterate towards our stable release.
We’d love for you to give it a try and give us your feedback. The installation guide and other getting started documentation is currently on our GitHub wiki at https://github.com/microsoft/qsharp/wiki/Installation. You can report any issues, weigh in on feature requests, or contribute code on that same GitHub repo.
The post Introducing the Azure Quantum Development Kit Preview appeared first on Q# Blog.
The post Modeling quantum architecture with Azure Quantum Resource Estimator appeared first on Q# Blog.
There are numerous architectural decisions to consider when building quantum computers, which have the potential to address real-world computational challenges like quantum chemistry and quantum cryptography. Researchers worldwide are engaged in developing various aspects of quantum computer architecture. The Microsoft Azure Quantum Resource Estimator plays a pivotal role in assessing how different combinations of design choices might impact the performance of upcoming quantum computers.
Azure Quantum Resource Estimator was designed to assist researchers in estimating computational time and the requisite number of qubits based on diverse assumptions regarding hardware quality and error correction strategies. We have used a more powerful version of the Resource Estimator for many years as an internal tool for analyzing architectural decisions in our own quantum program. We have incorporated new options to offer similar capabilities to the Azure Quantum users.
Continuing our commitment to enhancing the tool’s capabilities, we have recently introduced several new features. These updates include the ability to customize error budget distributions and implement custom distillation units.
In this article, we provide an overview of fundamental concepts related to the architecture of quantum computers, exploring their influence on the necessary resources and the capabilities offered by Microsoft Azure Quantum Resource Estimator to model these intricate structures.
One can represent the quantum computing stack as follows:
Scientists all around the world are collaborating on refining individual stack components and integrating them cohesively. The multitude of decisions made at each stack layer, coupled with diverse quantum algorithms, results in a vast array of possible combinations that warrant evaluation and comparison. Microsoft Azure Quantum Resource Estimator helps to assess and compare those combinations efficiently. You can submit your quantum program (such as in Q# or Qiskit) or Quantum Intermediate Representation (QIR) while specifying particular characteristics of a proposed quantum computing stack:
And the Resource Estimator will calculate the rQOPS of the architecture, as well as the number of qubits and the time required for the application, given this combination.
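Microsoft has described rQOPS (reliable quantum operations per second) as the number of logical qubits times the logical clock frequency. A minimal sketch of that relationship follows; the formula restatement and the example numbers (230 logical qubits, a 3,300 ns logical cycle) are illustrative assumptions for this post, not estimator output:

```python
def rqops(logical_qubits: int, logical_cycle_time_ns: float) -> float:
    """rQOPS = (number of logical qubits) x (logical clock frequency in Hz)."""
    logical_clock_hz = 1e9 / logical_cycle_time_ns  # cycles per second
    return logical_qubits * logical_clock_hz

# e.g., 230 logical qubits (the Ising-model count from this post) with an
# assumed 3,300 ns logical cycle time
example = rqops(230, 3300)  # roughly 7e7 reliable operations per second
```

The same two knobs, logical qubit count and logical cycle time, are exactly what the architectural choices below (qubit parameters, QEC scheme, distillation) determine.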
In the current landscape of 2023, a significant challenge lies in the pursuit of rapidly responsive, stable, and scalable physical qubits to enable impactful applications in chemistry and materials science (read more at Communications of the ACM, 2023). Researchers are exploring a range of design possibilities, including instruction sets as well as diverse anticipated levels of speed and fidelity for these qubits.
In the process of resource estimation, a higher degree of specificity becomes necessary, encompassing various times for distinct actions involving qubits. Our recent updates here involve:
Azure Quantum Resource Estimator supports:
Here is an example of specifying a custom qubit type:
from azure.quantum import Workspace
from azure.quantum.target.microsoft import MicrosoftEstimator
from azure.quantum.target.microsoft.target import MeasurementErrorRate

# Enter your Azure Quantum workspace details here
workspace = Workspace(
    resource_id="",
    location=""
)
estimator = MicrosoftEstimator(workspace)

params = estimator.make_params()
params.qubit_params.name = "qubit_maj_ns_e6"
params.qubit_params.instruction_set = "Majorana"
params.qubit_params.one_qubit_measurement_time = "150 ns"
params.qubit_params.two_qubit_joint_measurement_time = "200 ns"
params.qubit_params.t_gate_time = "100 ns"
params.qubit_params.one_qubit_measurement_error_rate = MeasurementErrorRate(process=1e-6, readout=2e-6)
params.qubit_params.two_qubit_joint_measurement_error_rate = 1e-6
# test quantum program
from qiskit import QuantumCircuit
circ = QuantumCircuit(3)
circ.crx(0.2, 0, 1)
circ.ccx(0, 1, 2)

job = estimator.submit(circ)
job.wait_until_completed()
result = job.get_results()
print(result)
We use a simple Qiskit circuit in this blog post for brevity. To learn how to run the resource estimator for Q# algorithms, go to the Create the quantum algorithm section of the resource estimator documentation.
You can learn more about this in the Physical qubit parameters section of the documentation.
Error correction in classical computing involves creating duplicates or checksums of data and periodically verifying these duplicates. In quantum computing, error correction is more complex due to the impossibility of copying information (see the no-cloning theorem).
However, the basic principles are similar to classical error correction: achieving greater accuracy in computations requires extra resources (additional qubits and time). Analogous to classical computing, the extent of extra information can be quantified using a code distance (see Hamming distance). Opting for a higher code distance necessitates more resources but leads to enhanced computation fidelity.
We use the following exponential model for error rate suppression:

P = a * (p / p*)^((d + 1) / 2)

Here, p is the physical qubit error rate (computed from the various physical error rates above), P is the (output) logical error rate provided by an error correction scheme, d is the code distance, and a and p* are coefficients specific to the scheme, called the crossing prefactor and the error correction threshold, respectively. As you can see, if p < p*, then P decreases as d increases.
As mentioned above, the resources involved also grow as d increases. Here are examples for the Floquet scheme (arXiv:2202.11829):

logicalCycleTime = 3 * oneQubitMeasurementTime * d,

physicalQubitsPerLogicalQubit = 4 * d^2 + 8 * (d - 1).
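Putting the suppression model and the Floquet resource formulas together, one can compute the smallest code distance that reaches a target logical error rate, and the footprint that distance implies. This sketch mimics, rather than reproduces, the estimator’s internal logic; the crossing prefactor 0.07 and threshold 0.01 are the example values that appear in the custom QEC scheme later in this post:

```python
def logical_error_rate(p: float, d: int, a: float = 0.07, p_star: float = 0.01) -> float:
    """P = a * (p / p*)^((d + 1) / 2)."""
    return a * (p / p_star) ** ((d + 1) / 2)

def required_code_distance(p: float, target: float, a: float = 0.07, p_star: float = 0.01) -> int:
    """Smallest odd code distance d with logical error rate <= target."""
    d = 1
    while logical_error_rate(p, d, a, p_star) > target:
        d += 2
    return d

def floquet_footprint(d: int, one_qubit_measurement_time_ns: int = 100) -> tuple[int, int]:
    # Resource formulas for the Floquet scheme quoted above.
    logical_cycle_time_ns = 3 * one_qubit_measurement_time_ns * d
    physical_qubits_per_logical_qubit = 4 * d**2 + 8 * (d - 1)
    return logical_cycle_time_ns, physical_qubits_per_logical_qubit

d = required_code_distance(p=1e-4, target=1e-12)  # -> 11
cycle_ns, qubits = floquet_footprint(d)           # -> (3300, 564)
```

With a physical error rate of 1e-4 and a target logical error rate of 1e-12, a code distance of 11 suffices, costing 564 physical qubits per logical qubit and a 3,300 ns logical cycle.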
Microsoft Azure Quantum Resource Estimator supports two predefined schemes of error correction: surface and Floquet, as well as custom error correction schemes.
The surface scheme is based on the premise that physical qubits form a lattice on a surface. In this arrangement, we have two types of qubits: data qubits, which play a role in primary algorithm computations, and measurement qubits, which serve as supplementary components. These qubits are organized in a checkerboard pattern, where each data qubit is surrounded by four measurement qubits, and vice versa. Boundary qubits are conceptually linked to qubits on the opposite side, creating a toroidal structure. Stabilization measurements are performed on corresponding qubits along different axes with a specific geometric pattern. This scheme exhibits versatility, as it can be applied to both gatebased qubits and Majorana qubits.
Conversely, the Floquet scheme places more stringent demands on the geometric arrangement of qubits but offers significant advantages in terms of time and space efficiency (as detailed in arXiv:2202.11829). Qubits must be arranged in a grid with three neighbors each, and it should be possible to color the plaquettes with just three colors; a honeycomb is an example of such a structure. Stabilization measurements are then executed periodically with a period of three, involving joint measurements between one qubit and one of its three neighbors at a time. This geometric structure aligns well with Majorana qubits and can bring substantial benefits when applied to them.
If using the Microsoft Quantum Development Kit, one can submit the following job with different error correction schemes:
from azure.quantum import Workspace
from azure.quantum.target.microsoft import MicrosoftEstimator, ErrorBudgetPartition

# Enter your Azure Quantum workspace details here
workspace = Workspace(
    resource_id="",
    location=""
)
estimator = MicrosoftEstimator(workspace)

params = estimator.make_params()
params.qec_scheme.error_correction_threshold = 0.01
params.qec_scheme.crossing_prefactor = 0.07
params.qec_scheme.logical_cycle_time = "3 * oneQubitMeasurementTime * codeDistance"
params.qec_scheme.physical_qubits_per_logical_qubit = "4 * codeDistance * codeDistance + 8 * (codeDistance - 1)"
# There are two predefined schemes: floquet_code and surface_code.
# The floquet_code can be applied to models with the Majorana instruction set;
# the surface_code can be applied to both Majorana and gate-based instruction sets.
# You can just specify the name of a predefined scheme:
# params.qec_scheme.name = "surface_code"

# test quantum program
from qiskit import QuantumCircuit
circ = QuantumCircuit(3)
circ.crx(0.2, 0, 1)
circ.ccx(0, 1, 2)

job = estimator.submit(circ)
job.wait_until_completed()
result = job.get_results()
print(result)
See more at the Quantum error correction schemes section of the Resource Estimator documentation.
To harness the benefits of quantum computing over classical counterparts, it is essential to employ quantum gates that cannot be effectively simulated on traditional non-quantum hardware. Quantum operations can be envisioned as rotations of the Bloch sphere, executed at arbitrary angles. In this picture, the operations that classical hardware can simulate efficiently are, roughly, rotations limited to multiples of 90 degrees; this particular set of operations is captured by the concept of Clifford gates.
By combining a set of Clifford gates with additional non-Clifford gates, it is possible to create a universal set of quantum gates. This means that any quantum gate can be efficiently approximated to the desired precision using a predefined sequence of operations from this set. This approximation process is commonly referred to as rotation synthesis.
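To get a rough sense of the cost of rotation synthesis, the back-of-the-envelope sketch below estimates the number of non-Clifford T gates needed to approximate a single arbitrary rotation to precision ε. The ~3·log₂(1/ε) scaling follows Ross–Selinger-style synthesis; the constant is an assumption for illustration, not a value the Resource Estimator uses.

```python
import math

def t_count_for_rotation(eps, c=3.0):
    # assumed scaling: roughly c * log2(1/eps) T gates per rotation,
    # with c ~ 3 for Ross-Selinger-style synthesis (illustrative constant)
    return math.ceil(c * math.log2(1.0 / eps))

for eps in (1e-3, 1e-6, 1e-9):
    print(f"eps={eps:g}: ~{t_count_for_rotation(eps)} T gates")
```

The logarithmic scaling is what makes rotation synthesis practical: an extra three orders of magnitude of precision costs only about 30 more T gates per rotation.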
Various quantum computer architectures might incorporate distinct non-Clifford gates within the rotation synthesis. These particular gates are often termed “magic gates,” and their associated states are labeled as “magic states.” A notable challenge in quantum computing stems from its inherent inability to duplicate data. Consequently, generating these magic states once and employing them indefinitely is unfeasible. Instead, each usage demands the creation of fresh instances of these magic states. This intricate procedure of generating magic states with a specified level of precision is known as magic state distillation.
One of the popular choices for the magic state is the T state, produced by applying the T gate, T = diag(1, e^{iπ/4}), to the |+⟩ state.
In the Azure Quantum Resource Estimator, we assume that the T state is used as the magic state.
Algorithms designed for T state distillation play a crucial role in enhancing qubit accuracy. These algorithms utilize multiple input qubits with low accuracy to generate an output qubit with higher accuracy. This process can encompass multiple rounds of distillation, progressively refining qubit quality until the desired standard is attained. Each round employs a specific algorithm known as a distillation unit.
For each distillation unit, one should specify how it improves qubit quality and what resources it consumes: the number of qubits involved and the time spent. These characteristics depend on the code distance used for the distillation, the physical quality of the qubits, and the accuracy provided initially or by the previous round of distillation.
It’s important to note that distinct sequences of distillations can yield greater efficiency for different qubit qualities (output error rates). In other words, depending on the initial error rate of an input qubit, specific sequences of distillation may be more adept at achieving the required error rate while utilizing fewer resources.
Here are examples of distillation units described in Assessing requirements to scale to practical quantum advantage:
| Distillation unit | # input Ts | # output Ts | Acceptance probability | # qubits | Time | Output error rate |
| --- | --- | --- | --- | --- | --- | --- |
| 15-to-1 space-eff., physical | 15 | 1 | 1 − 15p_T − 356p | 12 | 46 t_meas | 35p_T³ + 7.1p |
| 15-to-1 space-eff., logical | 15 | 1 | 1 − 15P_T − 356P | 20n(d) | 13τ(d) | 35P_T³ + 7.1P |
| 15-to-1 RM prep., physical | 15 | 1 | 1 − 15p_T − 356p | 31 | 23 t_meas | 35p_T³ + 7.1p |
| 15-to-1 RM prep., logical | 15 | 1 | 1 − 15P_T − 356P | 31n(d) | 13τ(d) | 35P_T³ + 7.1P |
When we are estimating the performance of a particular quantum algorithm on a specific quantum computer (which has a certain qubit quality and operation speed), we might need different target output error rates. Considering two distillation units (15-to-1 space-efficient and 15-to-1 RM preparation) and various code distances for each round, there could potentially be thousands of combinations to evaluate.
Various research groups are actively working on developing distillation algorithms, and their approaches might differ from the ones described earlier. For instance, some algorithms could generate multiple output T states, potentially reducing costs by sharing resources or enabling parallelization. With dozens of distillation algorithms in consideration, an intriguing opportunity arises to compare them against each other, encompassing diverse physical qubit attributes and algorithm variations. This comparative analysis could provide valuable insights into their relative effectiveness.
Microsoft Quantum invites researchers to assess the resources needed for their unique distillation approaches. Presently, we offer support for two established distillation units named `15-1 RM` and `15-1 space-efficient`, in addition to the flexibility to define custom distillation unit specifications.
Here is an example of calling the Resource Estimator with custom distillation unit specifications:
from azure.quantum import Workspace
from azure.quantum.target.microsoft import MicrosoftEstimator
from azure.quantum.target.microsoft.target import DistillationUnitSpecification, ProtocolSpecificDistillationUnitSpecification
# Enter your Azure Quantum workspace details here
workspace = Workspace(
resource_id="",
location=""
)
estimator = MicrosoftEstimator(workspace)
params = estimator.make_params()
specification1 = DistillationUnitSpecification()
specification1.display_name = "282"
specification1.num_input_ts = 2
specification1.num_output_ts = 28
specification1.output_error_rate_formula = "35.0 * inputErrorRate ^ 3 + 7.1 * cliffordErrorRate"
specification1.failure_probability_formula = "15.0 * inputErrorRate + 356.0 * cliffordErrorRate"
physical_qubit_specification = ProtocolSpecificDistillationUnitSpecification()
physical_qubit_specification.num_unit_qubits = 12
physical_qubit_specification.duration_in_qubit_cycle_time = 65
specification1.physical_qubit_specification = physical_qubit_specification
logical_qubit_specification = ProtocolSpecificDistillationUnitSpecification()
logical_qubit_specification.num_unit_qubits = 20
logical_qubit_specification.duration_in_qubit_cycle_time = 37
specification1.logical_qubit_specification = logical_qubit_specification
specification2 = DistillationUnitSpecification()
specification2.name = "15-1 RM"
specification3 = DistillationUnitSpecification()
specification3.name = "15-1 space-efficient"
params.distillation_unit_specifications = [specification1, specification2, specification3]
# test quantum program
from qiskit import QuantumCircuit
circ = QuantumCircuit(3)
circ.crx(0.2, 0, 1)
circ.ccx(0, 1, 2)
job = estimator.submit(circ)
job.wait_until_completed()
result = job.get_results()
print(result)
For further information, please refer to the Distillation Units section in the Resource Estimator documentation.
Quantum algorithms are inherently probabilistic, and their execution is susceptible to errors. Typically, an algorithm is run multiple times to sample its probabilistic behavior and mitigate errors. When sending an algorithm for execution on a quantum computer, it is crucial to determine how many runs are required to achieve a desired confidence level. To that end, an acceptable probability of error in the final computation is defined, termed the error budget. Seeking a higher probability of success can involve employing the error correction methods elucidated earlier. However, this pursuit of a higher success probability comes at an elevated cost, involving more qubits and a longer runtime.
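The relationship between the error budget and the number of runs can be sketched simply: if each run fails with probability at most ε (the error budget), then R independent runs all fail with probability ε^R, so R runs give confidence 1 − ε^R of at least one correct result. The sketch below is a simplified model for intuition, not a feature of the estimator:

```python
import math

def runs_for_confidence(eps, confidence):
    # smallest R with 1 - eps**R >= confidence, i.e. eps**R <= 1 - confidence
    return math.ceil(math.log(1 - confidence) / math.log(eps))

# even a loose 5% error budget reaches 99.9% confidence within a few runs
print(runs_for_confidence(0.05, 0.999))
```

This is why loosening the error budget can be attractive: it shrinks the qubit count and runtime of each run, at the price of a modest number of repetitions.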
We could categorize errors into three categories by where they occur: errors in implementing logical qubits and operations, errors in distilling T states, and errors in synthesizing arbitrary rotations.
As an initial approximation, we might distribute the error budget evenly among these three sources. Yet, certain algorithms could demand varying amounts of rotations or magic states. Hence, there could be a preference to readjust the error budget to align with the intricacies of the algorithm in question.
If using the Microsoft Quantum Development Kit, one can submit the following job with different error budget options:
from azure.quantum import Workspace
from azure.quantum.target.microsoft import MicrosoftEstimator, ErrorBudgetPartition
# Enter your Azure Quantum workspace details here
workspace = Workspace(
resource_id="",
location=""
)
estimator = MicrosoftEstimator(workspace)
params = estimator.make_params()
# make an estimate with specific error budget
# for each of the three error sources:
params.error_budget = ErrorBudgetPartition(logical=0.001, t_states=0.002, rotations=0.003)
# for uniformly distributed error budgets,
# # you can specify just the total as single number:
# params.error_budget = 0.001
# test quantum program
from qiskit import QuantumCircuit
circ = QuantumCircuit(3)
circ.crx(0.2, 0, 1)
circ.ccx(0, 1, 2)
job = estimator.submit(circ)
job.wait_until_completed()
result = job.get_results()
print(result)
Read more about error budgets in the documentation.
The Azure Quantum team is dedicated to ongoing enhancements of the Resource Estimator. This valuable tool serves both our internal teams and external researchers in the endeavor to design quantum computers. Expanding the scope of modeling capabilities remains a key priority for us. We eagerly welcome your feedback on the specific custom options you require for estimating your quantum computer resources. Your insights will greatly contribute to refining our tool and making it even more effective for the quantum community.
There are many ways to learn more:
The post Modeling quantum architecture with Azure Quantum Resource Estimator appeared first on Q# Blog.
This spring we had the opportunity to mentor two student teams as part of the University of Washington’s NSF Research Traineeship program Accelerating Quantum-Enabled Technologies (AQET). This yearlong certificate program offers graduate students training in quantum information science and engineering and includes several courses on different areas of quantum technologies followed by the culminating team project within the UW EE522: Quantum Information Practicum class. For this course the students worked on a quantum-related project under the guidance of mentors from the quantum industry—and that’s where we came in.
We worked on two projects focused on the tools necessary to implement quantum algorithms at scale. As quantum computers evolve from their current noisy intermediate scale quantum (NISQ) era to scalable quantum supercomputers, the programs that run on them will evolve as well, from simple circuits to complex programs that solve sophisticated problems. As part of this progress, we start exploring the practicality of implementing various algorithms to run on quantum computers and the resources required to execute them. We also look at the possibility of generating parts of these programs automatically, borrowing from our experience in classical computing. The students’ projects explored different areas of quantum software development using Microsoft’s Quantum Development Kit (QDK) and Azure Quantum Resource Estimator.
Both teams did a great job, and later this fall they will be presenting their work at IEEE Quantum Week 2023 on Wednesday, September 20. Here is a teaser of their work.
Students: Chaman Gupta, I-Tung Chen
Mentors: Mathias Soeken, Mariia Mykhailova
The goal of this project was to design a workflow that converts a classical computation description into Q# code implementing it as a quantum computation. For example, the following Q# code for a classical computation
internal function Multiplication2(a : Int, b : Int) : Int {
    return a * b;
}
would be automatically converted into a quantum circuit that can be used like an operation implemented by the Q# libraries, e.g., MultiplyI.
The QDK samples already had an example of doing this for Boolean function evaluation, so this project targeted integers and arithmetic functions that work with integers: addition, multiplication, and modulo operations.
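The core idea behind such synthesis can be sketched classically: tabulate the function as a reversible mapping that keeps the inputs intact and writes the result into an output register, |a⟩|b⟩|0⟩ → |a⟩|b⟩|a·b mod 2ⁿ⟩, which a synthesis tool can then turn into gates. The sketch below is a toy illustration of that mapping, not the project's actual Q#/QIR toolchain:

```python
def multiplication_truth_table(n):
    # reversible truth table for n-bit modular multiplication:
    # (a, b, 0) -> (a, b, a*b mod 2**n); inputs are preserved so
    # the mapping can be realized as a unitary on the full register
    table = {}
    for a in range(2**n):
        for b in range(2**n):
            table[(a, b, 0)] = (a, b, (a * b) % 2**n)
    return table

table = multiplication_truth_table(2)
print(table[(3, 3, 0)])  # 3 * 3 = 9, which is 1 mod 4
```

Exhaustive tabulation only works for small widths, of course; the project's point was to generate such circuits automatically and scalably from the Q# source.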
The image below shows the workflow used in the project.
Automated oracle synthesis workflow as implemented in the project
The steps in the workflow are as follows.
The generated quantum code can be executed via any tool that accepts QIR programs as input. This project used QIR Runner to run the simulation of the generated code for small inputs, and Azure Quantum Resource Estimator to estimate the resources required to run the code for larger inputs. Code samples and more technical details of this work can be found here.
When compared with handcrafted Q# library operations implementing similar quantum computations, our automatically generated code was faster but required more qubits to run. This shows that automatic generation of quantum code is a promising avenue for producing reliable and performant code in an efficient manner. The next steps in this direction would be exploring ways to optimize the generated code even further and adding support for more arithmetic types and operations, such as floating-point arithmetic.
Students: Ethan Hansen, Sanskriti Joshi, Hannah Rarick
Mentors: Wim van Dam, Mariia Mykhailova
In this project, the students explored quantum multiplication algorithms and compared their efficiency in terms of runtime, qubit numbers, and T gates required to run them on large inputs.
The project built on prior work by Gidney and compared three quantum implementations of algorithms for multiplying n-bit integers: the schoolbook algorithm, windowed multiplication, and Karatsuba’s algorithm.
Classical multiplication is an arithmetic operation that is often taken for granted when discussing algorithms. However, when implemented on a quantum computer, it incurs quite a lot of overhead in terms of both additional qubits and extra operations performed to implement the computation as a unitary transformation that preserves coherence.
Using Azure Quantum Resource Estimator, the team calculated how the required resources depend on the multiplication algorithm used and on the size of the integers. The estimates showed that windowed multiplication uses slightly more qubits compared to the schoolbook algorithm but that it runs faster. Karatsuba’s algorithm uses more qubits and has longer runtimes compared to the schoolbook algorithm for inputs up to several thousand bits long. Eventually though, for large enough input sizes, the runtime of Karatsuba’s algorithm caught up with that of the schoolbook algorithm. Put together, these results refine our understanding of the performance of these three algorithms, and how in different settings different algorithms should be preferred.
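The crossover behavior in these estimates can be illustrated with a classical operation count: schoolbook multiplication needs n² elementary multiplications, while Karatsuba satisfies the recurrence M(n) = 3M(⌈n/2⌉) + O(n), giving roughly n^1.585. In the sketch below the per-level linear overhead constant is an arbitrary stand-in for the extra bookkeeping a quantum implementation incurs, chosen only to show that Karatsuba wins late:

```python
def schoolbook_ops(n):
    # n**2 elementary multiplications
    return n * n

def karatsuba_ops(n, overhead=8):
    # M(n) = 3*M(ceil(n/2)) + overhead*n; the overhead constant is an
    # illustrative assumption, not a measured quantum cost
    if n <= 1:
        return 1
    half = (n + 1) // 2
    return 3 * karatsuba_ops(half, overhead) + overhead * n

for n in (16, 64, 256, 1024):
    print(n, schoolbook_ops(n), karatsuba_ops(n))
```

With this overhead, Karatsuba is more expensive at small widths and only overtakes the schoolbook count around a thousand bits, mirroring the qualitative finding from the resource estimates above.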
This project provides a framework for applying similar resource estimation techniques to compare different implementations of other quantum arithmetic primitives, such as floating-point functions, and other quantum subroutines.
The students found their projects interesting and enjoyable. They mentioned that the work on capstone projects helped them discover and learn important topics in quantum computing, such as oracle synthesis and resource estimation of algorithms, and broaden their understanding of the current state of the field. The students got valuable insights into their projects using Azure Quantum Resource Estimator, and their feedback on their experience helped us improve the tool for future users. It has been a pleasure to mentor these teams, and we are looking forward to next year’s capstone projects!
The post Mentoring capstone projects at the University of Washington appeared first on Q# Blog.