The post Evaluating cat qubits for fault-tolerant quantum computing using Azure Quantum Resource Estimator appeared first on Q# Blog.
This blog post highlights a recent collaboration between Microsoft and Alice & Bob, a French startup whose goal is to build a fault-tolerant quantum computer by leveraging a superconducting qubit called a cat qubit. In this collaboration, Alice & Bob uses the new extensibility mechanisms of Microsoft’s Resource Estimator to obtain resource estimates for their cat qubit architecture.
The Resource Estimator is a tool that helps evaluate the practical benefit of quantum algorithms. It estimates the expected runtime and the number of physical qubits needed to run a given program under different settings of the target fault-tolerant quantum computer. The default settings of the Resource Estimator represent generic gate-based and Majorana-based qubits, unbiased planar quantum error correction codes (i.e., a 2D layout for logical qubits assuming the same error rates for bit flip and phase flip errors) that support lattice surgery, and T factories that use multiple rounds of distillation (please refer to this paper for more details on these assumptions). These settings cover many quantum computing architectures, but they do not give quantum architects complete flexibility to model other important system architectures with different assumptions.
Microsoft is happy to announce that the Resource Estimator, which was made open source in January 2024, now has an extensibility API to model any quantum architecture and to modify all assumptions. To show how this extensibility API works, Microsoft and Alice & Bob demonstrate how it is used to model Alice & Bob’s cat qubit architecture, along with a biased repetition code and Toffoli factories. The open-source example performs the resource estimation for elliptic curve cryptography described in Alice & Bob’s Physical Review Letters paper from July 2023.
Cat qubits have special error correction requirements because they exhibit a biased noise: they experience several orders of magnitude fewer bit flips than phase flips. They use engineered two-photon dissipation to stabilize two coherent states of the same amplitude and opposite phase, which serve as the 0 and 1 of the qubit. The Alice & Bob roadmap takes advantage of this asymmetry to simplify the error correction strategy. To achieve this, however, the usual hierarchy of gates used in quantum computing has to be modified. The first step is to build a gate set that preserves this noise-biasing property. Then, from this set, a universal set of fault-tolerant operations must be constructed (note that the bias-preserving gate set is typically not universal, but it is sufficient to implement a universal gate set at the logical level). This work is carried out in the article Repetition Cat Qubits for Fault-Tolerant Quantum Computation and summarized in the figure below.
Alice & Bob’s architecture highlights the importance of extensibility in the Resource Estimator and the ability to override the predefined settings. The typical error correction code used by the Resource Estimator is the surface code, but cat qubits require a repetition code. The Resource Estimator assumes a “Clifford+T” universal gate set, while the gate set presented above for cat qubits is “Clifford+Toffoli.”
The Resource Estimator, which is written in Rust, can be extended through a Rust API. Its main function is to calculate the physical resource estimates for a logical overhead with respect to an error correction protocol, a physical qubit, and a factory builder. The interaction of these components is illustrated in the architecture diagram above. Each of these components is an interface that can be implemented, which allows full flexibility. For instance, the Resource Estimator doesn’t have to know about the input program, or even the layout method. It only needs the logical overhead, which gives the number of logical qubits, the logical depth, and the number of needed magic states. Likewise, the implementations of the other interfaces provide information for the resource estimation. We explain some aspects of the implementation in the remainder of this section, but please refer to the example source code on GitHub for more details.
The error correction protocol in the Resource Estimator defines both the physical qubit and the code parameter that it uses. For most codes, the code parameter is the code distance, and one of the main goals of the error correction protocol is to find a value of the code distance that ensures a desired logical error rate given a physical qubit. The Alice & Bob architecture uses a repetition code with two parameters: the distance and the average number of photons. The distance addresses phase flip errors, while the number of photons must be high enough to suppress bit flip errors, so that the repetition code can focus on correcting only the phase flips.
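As an illustration of how such an error correction interface might choose its two parameters, the sketch below finds the smallest code distance and mean photon number that meet target error rates. The scaling laws are generic, and the prefactors and threshold are placeholders, not Alice & Bob's calibrated noise model:

```python
import math

def smallest_distance(p_phys, p_threshold, target_logical_error, prefactor=0.03):
    """Smallest odd repetition-code distance d whose modeled logical
    phase-flip rate, prefactor * (p/p_th)^((d+1)/2), meets the target.
    The scaling law is generic; prefactor and threshold are placeholders."""
    d = 1
    while prefactor * (p_phys / p_threshold) ** ((d + 1) / 2) > target_logical_error:
        d += 2
    return d

def min_photon_number(target_bit_flip, prefactor=0.5):
    """Smallest mean photon number n with modeled bit-flip rate
    prefactor * exp(-2 * n) below the target. Exponential suppression in
    photon number is the qualitative behavior; constants are illustrative."""
    n = 1
    while prefactor * math.exp(-2 * n) > target_bit_flip:
        n += 1
    return n
```

For instance, `smallest_distance(1e-3, 1e-2, 1e-10)` and `min_photon_number(1e-10)` yield one (distance, photons) pair for a target error rate; the actual protocol implementation balances both parameters against the full noise model.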
A factory builder’s job is to make magic state factories that produce magic states with a certain maximum output error probability. The factories can either be precomputed or calculated on demand when a new request is made. They can also use the error correction protocol and select their own code parameters to build the factories. For Alice & Bob’s architecture, the magic state produced is the CCX state, and a precomputed list of Toffoli factories is available (see also Table 3 in the paper).
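A minimal sketch of the precomputed path: given a requested output error bound, pick the cheapest qualifying factory from a table. The table entries below are made-up placeholders, not the Toffoli factories from the paper:

```python
# Hypothetical precomputed factory table: (output_error, physical_qubits, runtime_us).
# These rows are illustrative placeholders, not values from the paper's Table 3.
FACTORIES = [
    (1e-6, 1000, 50.0),
    (1e-9, 3500, 120.0),
    (1e-12, 8000, 300.0),
]

def pick_factory(max_output_error):
    """Return the fewest-qubit precomputed factory whose output error
    meets the requested bound, or None if no factory qualifies."""
    candidates = [f for f in FACTORIES if f[0] <= max_output_error]
    return min(candidates, key=lambda f: f[1]) if candidates else None
```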
We make two main assumptions about the input program: that it uses mostly CX (or CNOT) and CCX (or Toffoli) gates, and that these gates aren’t run in parallel but each have their own cycle time (i.e., the number of needed error correction syndrome extraction cycles). With these assumptions, and the number of logical algorithm qubits before taking the layout into account, we can easily calculate the layout overhead as a function of the number of logical qubits and the number of CX and CCX gates. The paper from Alice & Bob gives formulas for these three metrics for the elliptic curve cryptography algorithm, so the layout overhead can be generated as a function of the key size and some implementation details (such as the window size for windowed arithmetic). Alternatively, we use the Azure Quantum Development Kit (QDK) to compute a logical overhead by evaluating a Q# program.
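Under those two assumptions, the logical overhead reduces to a few counts. A sketch follows; the per-gate cycle counts are illustrative placeholders, not the values used in the actual example:

```python
def logical_overhead(logical_qubits, n_cx, n_ccx, cx_cycles=1, ccx_cycles=3):
    """Logical overhead under the two stated assumptions: the program is
    mostly CX/CCX gates, run sequentially, each with its own number of
    syndrome extraction cycles. Cycle counts here are illustrative."""
    return {
        "logical_qubits": logical_qubits,
        "logical_depth": n_cx * cx_cycles + n_ccx * ccx_cycles,
        "magic_states": n_ccx,  # one CCX magic state per Toffoli gate
    }
```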
The above graph compares the space-time tradeoff of resource estimates using the Resource Estimator and the estimates from the paper. The paper reported a quicker solution that needed more qubits, while the Resource Estimator produced estimates with fewer qubits and a longer, but feasible, runtime. Note that the Resource Estimator does not automatically explore application-specific parameters (such as window sizes for windowed arithmetic).
You can try out and execute the Alice & Bob resource estimation example that uses Microsoft’s Resource Estimator. As it is open source, you can easily change the application input. The cost model that relies on CX and CCX gates is compatible with many logical resource estimation research papers in the literature, and therefore results from those papers can be quickly converted into physical resource estimates. Further, you can examine various Q# programs that are available in the Q# GitHub repository. We hope that the resource estimator gives you useful insights and helps your research; and we would welcome your feedback.
The post Circuit Diagrams with Q# appeared first on Q# Blog.
I’m a software engineer in the Azure Quantum Development Kit team, and I’m very excited to share a new feature I’ve been working on: circuit visualization in Q#.
One of the neat things about Q# is that it gives you the ability to express quantum algorithms in a procedural language that’s reminiscent of classical programming languages such as C and Python. If you’re already a programmer, this way of thinking will be very intuitive to you, and you can get started with quantum computing right away (if you haven’t done so yet, check out our quantum katas).
However, this isn’t how many people learn about quantum computing today. If you flip through any quantum computing textbook, you’ll see that it’s conventional to think in terms of quantum circuits.
We wanted to bridge the gap between these two different modes of thinking.
If you open any Q# program in VS Code, you’ll notice a little “Circuit” CodeLens above the entry point declaration. When you click on that, your Q# program will be represented as a quantum circuit diagram.
Being able to go from Q# code to circuit diagrams means that you can use familiar constructs such as for loops and if statements in your program to manipulate the quantum state while being able to view the logical circuit at any time to get a high-level view of your algorithm.
How does this work? The quantum circuit for a Q# program is generated by executing all the classical parts of the program while keeping track of when qubits are allocated and which quantum gates are applied. This data is then displayed as a quantum circuit.
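A toy version of this tracing idea in Python: classical control flow (here a for loop) executes normally, while qubit allocations and gate applications are logged for later rendering. This is a conceptual sketch, not the QDK's actual implementation:

```python
class CircuitTracer:
    """Toy tracer: classical code runs normally while qubit allocations
    and gate applications are recorded for later rendering as a circuit."""
    def __init__(self):
        self.num_qubits = 0
        self.ops = []

    def allocate(self):
        self.num_qubits += 1
        return self.num_qubits - 1   # qubit id

    def gate(self, name, *qubits):
        self.ops.append((name, qubits))

tracer = CircuitTracer()
qubits = [tracer.allocate() for _ in range(3)]
for q in qubits:                     # the classical loop unrolls...
    tracer.gate("H", q)              # ...into one H gate per qubit
tracer.gate("CNOT", qubits[0], qubits[1])
```

After running, `tracer.ops` holds the flat gate sequence that a renderer would lay out as a circuit diagram.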
Not all quantum programs can be represented as straightforward quantum circuits. What if we have a dynamic (commonly known as “adaptive”) circuit? Say we have a while loop in our program that compares measurement results and takes an action that depends on the result. The exact set of gates in the program will not be deterministic anymore.
That’s when we need to run the program through the quantum simulator. This is called “trace” mode since we’re tracing the quantum operations as they are actually performed in the simulator. When the circuit visualizer detects that the program contains measurement comparisons, this mode is activated.
Depending on your luck, you may end up with two gates, or you may end up with many more!
Each time you generate the circuit, you may see a different outcome in the circuit diagram.
It would certainly be nice to visualize all the outcomes at once, and we’re working through some ideas on how to do that. Simple conditionals can be represented as gates controlled by classical wires. But given a language as expressive as Q#, you can write complex conditionals that are difficult to visualize on a single 2D circuit diagram. How would you represent an adaptive circuit such as the one above? We’d love to hear your ideas. You can leave a comment here or on this GitHub issue.
Working on this feature sparked a lot of lively debate within the team, especially during the design stage. We’re a team with diverse technical backgrounds. Some of us found it very intuitive to think in terms of circuits. Others preferred reading code and thought circuit diagrams were very limiting. Did we even need the feature at all?
I now realize it’s not either-or: it’s very powerful to be able to do both. Even if you prefer one paradigm over the other, being able to inspect your code through different lenses really deepens your understanding of the problem you’re working on. You can run simulations and look at a histogram of the results. You can step through the code using the Q# debugger. And now you can view it as a circuit diagram. Each different view into the problem offers a different insight.
This is also why testing this feature was so fun for me. I’m far from an expert in quantum computing; some of our Q# samples are admittedly still confusing to me. As I ran the circuit visualizer on each sample, giving it a final look over, I found the process unexpectedly satisfying. I felt like I was finally starting to understand what these algorithms were doing. I’m happy for this new addition to my learning toolkit!
If you’d like to try out Q# circuit diagrams for yourself, head over to The Azure Quantum Playground and give it a try now – no installation necessary. When you’re ready to work on your own Q# projects, install the Azure Quantum Development Kit VS Code Extension. If you prefer working in Python, head over to the documentation for instructions on how to get started in Jupyter Notebooks. Let us know what you think!
The post Exploring space-time tradeoffs with Azure Quantum Resource Estimator appeared first on Q# Blog.
We are delighted to present a new experience for exploring space-time tradeoffs recently added to the Azure Quantum Resource Estimator. Available both in the Azure Quantum Development Kit (VS Code extension) and as a Python package, it adds a new dimension to estimates.
Resource estimation doesn’t just yield a single group of numbers (one per objective), but rather multiple points representing tradeoffs between objectives, such as qubit number and runtime. Our recent update of the Azure Quantum Resource Estimator adds methods for finding such tradeoffs for a given quantum algorithm and a given quantum computing stack. We also provide a visual experience to navigate alternatives with an interactive chart and supplementary reports and diagrams:
This chart illustrates tradeoffs between the qubit numbers and runtimes required for running the same algorithm across multiple projected quantum computers. See estimation-frontier-widgets.ipynb for the steps to generate this diagram.
More specifically, we have considered the simulation of the dynamics of a quantum magnet: the so-called Ising model on a square 10×10 lattice. This is the simplest model for ferromagnetism in a quantum system, and the algorithm simulates its evolution over time. At this system size the problem cannot be simulated on classical computers in reasonable time, so a solution on quantum computers would be highly desirable.
The diagram above and the table show that this algorithm requires 230 logical qubits with low error rates. Such logical qubits don’t exist yet, and each will require hundreds of noisy physical qubits. So, the total number of physical qubits required for the simulation ranges from 33,000 to 261,340.
You can also see on the chart that increasing the number of utilized physical qubits by 10-35 times reduces the runtime by 120-250 times. A thoughtful analysis of tradeoffs for entire algorithms and for subroutines can save a lot of runtime if extra qubit resources are available.
Tradeoffs between the number of physical qubits and the runtime in quantum computations are like those between space and time utilization in classical computing. As we have done above, for a given algorithm, one can start by computing the minimal number of physical qubits required for its execution on a given quantum stack, and then deduce the corresponding runtime. If more physical qubits are available, one can reduce the runtime by parallelizing execution of the algorithm or its subroutines.
One can build multiple estimates by allowing more and more physical qubits and improving the runtime. We can then restrict attention to efficient estimates: those for which no other estimate is better in both runtime and number of physical qubits, so that in any pair of efficient estimates, one is better with respect to runtime and the other with respect to the number of physical qubits. The set of such estimates forms the so-called Pareto frontier, which appears as a monotonically decreasing curve on the space-time diagram.
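Computing such a frontier from a list of (physical qubits, runtime) estimates is a simple filtering step. A sketch with made-up data points:

```python
def pareto_frontier(estimates):
    """Filter (physical_qubits, runtime) estimates down to the efficient
    ones: sorted by qubit count, keep a point only if it improves on the
    best runtime seen so far."""
    frontier = []
    for est in sorted(estimates):
        if not frontier or est[1] < frontier[-1][1]:
            frontier.append(est)
    return frontier

# Made-up estimates; only the non-dominated ones survive.
points = [(33000, 250.0), (50000, 90.0), (60000, 120.0), (261340, 2.0)]
```

Here (60000, 120.0) is dominated by (50000, 90.0), which uses fewer qubits and runs faster, so it drops off the frontier.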
Just as in classical programs, there are many opportunities for space-time tradeoffs in the choice of quantum algorithms and their implementation. Here we want to discuss another, quantum-specific opportunity. Rotation gates that rotate logical qubits by arbitrary angles require so-called magic states, which are generated by a process known as magic state distillation running on dedicated sets of qubits called magic state factories. Many quantum stacks use the T gate as the only magic gate; the corresponding states and factories become T-states and T-factories, and we use those names in the Resource Estimator.
T-state generation subroutines are executed in parallel with the main algorithm. Let us start with a single T-factory. For some algorithms, it may produce enough T-states to keep up with the algorithm's consumption. For other algorithms, which require more T-states, execution will be slowed down while waiting for the next T-state to be produced. Note that idling is not free in the quantum world, because errors will accumulate in quantum states while waiting. Longer runtimes might thus require a higher error correction code distance, and with it more physical qubits and longer runtimes than might naively be estimated.
If an algorithm waits for new T-states and more qubits are available, we can add additional T-factories to produce more T-states. This saves runtime at the cost of more physical qubits. Given enough physical qubits, we can increase the number of T-factories until they produce T-states as fast as the algorithm consumes them, without idling. This gives the shortest runtime of the algorithm. For example, the algorithm considered above could efficiently use up to 172-251 T-factories, depending on the computing stack. This involves spending from 92.29% to 98.40% of its resources on T-state distillation.
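The number of factories needed to avoid idling follows from a throughput balance: enough factories that their combined T-state production over the algorithm's runtime matches its consumption. A simplified sketch, assuming one T-state per factory run and ignoring the feedback of idling onto code distance:

```python
import math

def factories_needed(t_states, algorithm_cycles, cycle_time_us, factory_time_us):
    """Number of parallel T-factories needed so production keeps up with
    consumption over the algorithm's runtime. Simplified model: one
    T-state per factory run; no idling feedback onto code distance."""
    runtime_us = algorithm_cycles * cycle_time_us
    runs_per_factory = max(1, math.floor(runtime_us / factory_time_us))
    return math.ceil(t_states / runs_per_factory)
```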
As shown in estimation-frontier-widgets.ipynb, to estimate the resources required to run a Q# program, one has to run
result = qsharp.estimate(entry_expression, params)
where “entry_expression” refers to the entry point of the program and “params” can cover multiple quantum stack configurations as well as estimation parameters.
When “estimateType”: “frontier” is set, the estimator searches for the whole frontier of estimates; otherwise, it looks for the shortest-runtime solution only.
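For illustration, one entry of the params list might look like the following. The key names are taken from the Resource Estimator documentation, but treat this as an assumed sketch and verify against the current schema:

```python
# Illustrative estimation parameters; key names follow the Resource
# Estimator documentation, but verify against the current schema before use.
params = [
    {
        "qubitParams": {"name": "qubit_gate_ns_e3"},  # a predefined qubit model
        "qecScheme": {"name": "surface_code"},        # error correction scheme
        "errorBudget": 0.001,                         # total allowed error
        "estimateType": "frontier",                   # explore the full frontier
    },
]
```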
Executing the EstimatesOverview(result) command visualizes all the estimates found in result (both frontier and individual) with the space-time diagram and the summary table.
Selecting a row in the summary table or a point on the space-time diagram generates the space diagram and the detailed report:
“EstimatesOverview” supports optional parameters for custom color schemes on the space-time diagram and custom series names for the summary table.
More tips and tricks for “EstimatesOverview” and the supplementary visualization elements are available in estimation-frontier-widgets.ipynb.
Estimating resources for quantum algorithm executions goes beyond providing a single pair of numbers: the runtime and the number of physical qubits. It requires constructing and analyzing the entire frontier of tradeoffs between those objectives. The Azure Quantum Resource Estimator allows you to build and explore those tradeoff frontiers and more accurately evaluate your requirements. With this new data, you can determine if you need to improve your algorithm, develop new error correction codes, or explore alternate qubit technologies.
The Azure Quantum team is committed to continuous improvements in the Resource Estimator. This tool supports both our internal teams and external researchers in the pursuit of designing quantum computers.
Our primary focus is on enhancing the precision of estimates and offering expanded estimation capabilities.
We eagerly welcome your feedback on the specific custom options you require for estimating your quantum computer resources. Your insights will play a vital role in refining our tool, making it even more effective for the entire quantum community.
The post Design Fault Tolerant Quantum Computing applications with the open-source Resource Estimator appeared first on Q# Blog.
Quantum computing has the potential for widespread societal and scientific impact, and many applications have been proposed for quantum computers. The quantum community has reached a consensus that NISQ (noisy intermediate-scale quantum) machines do not offer practical quantum advantage and that it is time to graduate to the next of the three implementation levels.
Unlike computing with transistors, basic operations with qubits are much more complicated and an order of magnitude slower. We now understand that practical quantum advantage will be achieved for small-data problems that offer superpolynomial quantum speedup (see T. Hoefler et al., CACM 66, 82-87). This includes, specifically, the simulation of quantum systems in quantum physics, chemistry, and materials science.
But at a basic level, there are still many remaining open questions: What are the most promising and useful quantum algorithms on which to build useful quantum applications? Which quantum computing architectures and qubit technologies can reach the necessary scale to run such quantum-accelerated applications? Which qubit technologies are well suited to practical quantum supercomputers? Which quantum computing technologies are unlikely to achieve the necessary scale?
That’s why we need the Resource Estimator to help us answer these questions and guide today’s research and development toward logical qubit applications.
Achieving practical quantum advantage will require improvements and domain expertise at every level of the quantum computing stack. A unified opensource tool to benchmark solutions and collaborate across disciplines will speed up our path toward a quantum supercomputer: this is the premise of Azure Quantum Resource Estimator.
Whether you are developing applications, researching algorithms, designing language compilers and optimizers, creating new error correction codes, or working on R&D for faster, smaller and more reliable qubits, the Resource Estimator helps you assess how your theoretical or empirical enhancements can improve the whole stack.
As an individual researcher, you can leverage prebuilt options to focus on your area. If you are part of a team, you can work collectively at every level of the stack and see the results of your combined efforts.
The Resource Estimator is an estimation platform that lets you start with minimal inputs, abstracting the many specificities of quantum systems. If you require more control, you can adjust and explore a vast number of system characteristics.
The Resource Estimator can quickly explore thousands of possible solutions. This accelerates the development lifecycle and lets you easily review tradeoffs between computation time and number of physical qubits.
The table below summarizes some of the ways you can adapt the Resource Estimator to your needs, allowing you to specify both the description of the quantum system and to control the exploration of estimates. Explore all available parameters.
Describe your system | Explore and control estimates
*Currently requires an Azure Subscription
If you are ready to get started, you can choose from:
Read more from the documentation.
To join the discussion or contribute to the development of the Resource Estimator, visit https://aka.ms/AQ/RE/OpenSource.
2024-01-29 update: This feature is now available. Learn more from the Pareto frontier documentation.
Understanding the tradeoff between runtime and system scale is one of the more important aspects of resource estimation. To help you better understand and visualize the tradeoffs, the Resource Estimator will soon provide fully automated exploration and graphics, such as the one below:
Make sure to subscribe to the Q# blog to be notified of this feature’s availability.
The post Announcing v1.0 of the Azure Quantum Development Kit appeared first on Q# Blog.
As outlined in an earlier blog post, this is a significant rewrite of the prior QDK, with an emphasis on speed, simplicity, and a delightful experience. Review that post for the technical details on how we rebuilt it, but at a product level the rewrite has enabled us to make some incredible improvements that exceeded the expectations we set out with, some highlights being:
And much more! This post will include lots of video clips to try and highlight some of these experiences (all videos were recorded in real time).
For the fastest getting started experience, just go to https://vscode.dev/quantum/playground/ . The QDK extension for VS Code works fully in VS Code for the Web, and this URL loads an instance of VS Code in the browser with the QDK extension preinstalled, along with a virtual file system preloaded with some common quantum algorithms. You can experiment here, then simply close the browser tab when done, without installing anything or accessing any files on your local machine.
If using VS Code on your local machine (or using https://vscode.dev directly), then installing the extension is a snap. Simply go to the VS Code Extension Marketplace, search for “QDK”, and install the “Azure Quantum Development Kit” extension published by “Microsoft DevLabs” (direct link). The extension is lightweight with no dependencies and will install in seconds, as shown below.
Once the extension is running, you can open a Q# file (with a .qs extension) and start coding. The clip below demonstrates how to create a new Q# file, use one of the sample ‘snippets’ to quickly insert a well-known algorithm, and then use the built-in simulator to run the code and see the output (including quantum state dumps and debug messages).
(Note: If unfamiliar with Q#, or quantum development in general, then the Quantum Katas are a great way to learn in an interactive AI assisted experience).
We believe the true power of quantum computing will be realized once we reach “scalable quantum computing”, and the Q# language was designed for this. It includes both higher level abstractions to more naturally express quantum operations, as well as being a typed language to help develop, refactor, and collaborate on more complex programs. (See the “Why do we need Q#” blog post for more background).
For this release we’ve invested heavily on the editor features developers expect from a modern and productive language. This includes:
The Q# editor provides completion lists, auto-open of namespaces, signature help, hover information, go to definition, rename identifier, syntax and type-checking errors, and more! All behave as developers familiar with other strongly typed languages such as Rust, C#, TypeScript, etc. have come to expect.
We’ve designed the experience to be as smooth as possible and to work as fast as you can type. Many of these features are available not only while editing Q# files directly, but also when writing Q# code in Jupyter Notebook cells, as shown in the clip below.
A quantum simulator is critical when developing quantum programs, and the QDK includes a sparse simulator that enables the output of diagnostic messages and quantum state as it runs in both the VS Code extension and the Python package.
The VS Code integration takes this up a notch, and the QDK brings a powerful debugging experience to Q# development. You can set breakpoints, step in and out of operations, and view both the quantum and classical state as you step through the code. It also includes some quantum-specific goodness, such as stepping through loops & operations backwards when running the generated adjoint of an operation. We’re very excited about the productivity this can unlock, and some of the ideas for where we could take it even further in future releases.
Today’s quantum hardware is still quite limited in terms of practical application, and we are still in what is termed the “Noisy Intermediate-Scale Quantum” era. We consider this Level 1 in a roadmap to a quantum supercomputer. The industry is making great strides towards Level 2 currently, when it will become possible to start using “logical qubits” on real hardware. Achieving practical quantum advantage for useful problems will require logical qubits.
As with early classical computers, there will be considerable resource constraints for a number of years. (My first computer had 16KB of RAM and a cassette tape for storage!). Developing code that can squeeze the most out of the hardware will be critical to building useful applications and to advancing the field generally. There are numerous factors such as qubit types, error correction schemas, layout & connectivity, etc. that determine how a program using logical qubits maps to physical resource requirements.
Over the past year we’ve built numerous capabilities into our Azure Quantum service to assist with Resource Estimation (see the docs for details). With this release of the QDK, we’re bringing many of those capabilities directly into the client, enabling a rapid getting-started experience and a very fast inner loop so quantum developers can experiment and view resource requirements for their code as quickly as possible. This is an area we will continue to invest in to add capabilities for developers & researchers throughout the quantum stack to make rapid progress and develop new insights.
In the below clip showing VS Code in the browser, the “Calculate Resource Estimates” command is run to view the estimates for various qubit types and other parameters. Once complete, this brings up a comparison table, and as rows are selected a visualization chart and detailed table of results is shown for the selected hardware configuration.
If you’d like to try this exact code in the Resource Estimator, you can visit the code sharing link used in the video. (Note this code is designed for resource estimation and is unlikely to finish if you try to actually run it in the simulator).
The QDK extension in VS Code enables you to connect to a Quantum Workspace in your Azure Subscription. You can then directly submit your Q# program from the editor to one of our hardware partners. You can see the status of the jobs and download the results when completed. This provides for a simple and streamlined experience, reducing the need to switch to CLI tools or Python code to work with the service. (Though using the service via those methods is still fully supported).
Current quantum hardware is limited compared to simulator capabilities, and thus the compiler must be set to the ‘base’ profile in the QDK for programs to be runnable on a real quantum machine. If the compiler is set to ‘base’ profile and a program tries to use unavailable capabilities, then the editor will immediately show an error, avoiding the need to submit potentially invalid code and then wait to see if an error occurs from the service.
Note: VS Code had already signed in and authenticated with the subscription account in this recording. On first run, you may need to authenticate with the Microsoft account for the subscription and consent to access.
There are more editor features than can be covered here, including built-in histograms, project support, viewing the QIR for a compiled program, etc. See the documentation for more details.
Much work in the quantum space happens via Python in Jupyter Notebooks. Beyond the rich tooling for working with Q# directly, we’ve also revamped and refined our Python packages and Jupyter Notebooks support.
For general Q# simulation and compilation, all you need is “pip install qsharp”. This package is only a couple of MB with no dependencies, and is compiled to binary wheels for Windows, Mac, and Linux on x64 and ARM64, so installation should be pain-free and near instant in most environments. If you will be using Jupyter Notebooks, you may also want to install the “qsharp-widgets” package for some nice visualizations for resource estimation and histograms.
If you will be using JupyterLab in the browser, install the ‘qsharp-jupyterlab’ package to get Q# cell syntax highlighting. However, we recommend using the VS Code support for Jupyter Notebooks, as this provides some of the rich language service features outlined above when working with Q#.
You can use the VS Code command “Create an Azure Quantum Notebook” to generate a sample Jupyter Notebook. If you have connected to an Azure Quantum Workspace already as outlined above, then this Notebook will be prepopulated with the correct Azure Quantum Workspace connection settings.
If you were using the prior QDK, which we now refer to as the ‘Classic QDK’ (with this release being the ‘Modern QDK’), then this will be a substantial change. While we have endeavored to make the Q# code compatible where we could, the new architecture removes a lot of the prior project infrastructure, such as .csproj-based projects, NuGet package distribution, C# integration, etc. Existing projects and samples will need to be ported to move from the ‘Classic QDK’ to the ‘Modern QDK’. The ‘Classic QDK’ will still be available to run existing code, but the ‘Modern QDK’ is the basis for future releases and we recommend moving to it when you can.
While it really has been fun and rewarding getting to 1.0, it is the beginning of a journey. We have many new features and improvements we are keen to start tackling, including improvements to the Q# language, more powerful resource estimation capabilities, package management for code sharing, advanced compiler capabilities (such as better hardware targeting), richer visualizations, better documentation & samples, and much more.
We’d love to have your input in these decisions, as you are who we are building these tools for, so please do get involved and give us your feature requests and feedback on our issue tracker at https://github.com/microsoft/qsharp/issues . (And if you do encounter any bugs with the QDK, this is the place to log those too!).
The team is very excited to reach this milestone and hopes you have as much fun using it as we did building it. Please do give it a try, give us your feedback, and tell us what you’d like to see next!
The post Announcing v1.0 of the Azure Quantum Development Kit appeared first on Q# Blog.
We are excited to announce that applications for Microsoft Quantum’s research internships 2024 are open!
Apply for the Microsoft Quantum research internship
We encourage early applications!
Research internships target graduate students currently enrolled in a Master’s or a PhD program (note that you have to be enrolled as a student both at the time of application and at the time of the actual internship). These internships focus on the exploration of new research directions under the guidance of full-time researchers on our team. We are seeking candidates specializing in areas such as quantum algorithms, quantum chemistry, quantum error correction, quantum benchmarking, physics device modeling and characterization, and machine learning.
Here are several highlights from this year’s research internship projects:
You can find additional examples of research internship projects from earlier years and the papers written about them in the 2022 internships announcement.
Internships will be hosted at our offices in Redmond, WA, USA. International students are welcome to apply! (All interns must be able to obtain US work authorization.)
Our internships are a great opportunity to get familiar with the research done in the quantum industry and contribute to the work done by the Microsoft Quantum team. They also offer a lot of fun experiences as part of the greater Microsoft Internship program, from yearly puzzle events such as the Microsoft Puzzleday and the Microsoft Intern Game to the social events where you can meet your fellow interns and researchers from all over the company and learn about the variety of career paths available in different disciplines!
The post Interning at Microsoft Quantum – 2024 appeared first on Q# Blog.
The next step toward practical quantum advantage, and Level 3 Scale, is to demonstrate resilient quantum computation on a logical qubit. Resilience in this context means the ability to show that quantum error correction helps—rather than hinders—non-trivial quantum computation. However, an important element of this non-triviality is the interaction between logical qubits and the entanglement it generates, which means resilience of just one logical qubit will not be enough. Therefore, demonstrating two logical qubits performing an error-corrected computation that outperforms the same computation on physical qubits will mark the first demonstration of a resilient quantum computation in our field’s history.
Before our industry can declare victory on reaching Level 2 Resilient Quantum Computing by performing such a demonstration on given quantum computing hardware, it’s important to agree on what this entails, and on the path from there to Level 3 Scale.
The most meaningful definition of a logical qubit hinges on what one can do with that qubit – demonstrating a qubit that can only remain idle, that is, be preserved in memory, is not as meaningful as demonstrating a non-trivial operation. Therefore, we define a logical qubit such that it initially allows some non-trivial, encoded computation to be performed on it.
A significant challenge in formally defining a logical qubit is accounting for distinct hardware; for example, the definition should not favor one hardware over another. To address this, we propose a set of criteria that marks the entrance into the resilient level of quantum computation. In other words, these are the criteria for calling something a “logical qubit”.
Entrance criteria to Level 2
Graduating to Level 2 Resilient Quantum Computing is achieved when fewer errors are observed on the output of a logical, error-corrected quantum circuit than on the analogous physical circuit without error correction.[1] We also require that a resilient-level demonstration include some uniquely “quantum” feature. Otherwise, the demonstration reduces to simply a novel demonstration of probabilistic bits.
Arguably the most natural “quantum” feature to demonstrate in this regard is entanglement. A demonstration of the resilient level of quantum computation should then satisfy the following criteria:
Upon satisfaction of these criteria, the term “logical qubit” can then be used to refer to the encoded qubits involved.
The distinction between the Resilient and Scale levels is worth emphasizing — a proof-of-principle demonstration of resiliency must be convincing, but it does not require a fully scaled machine. For this reason, a resilient-level demonstration may use certain forms of postselection. Postselection here means the ability to accept only those runs that satisfy specific criteria. Importantly, the chosen postselection method must not replace error correction altogether, as error correction is central to the type of resiliency that Level 2 aims to demonstrate.
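The quantitative entrance criterion above (fewer errors on the logical circuit than on the physical one) can be illustrated with the exponential error-suppression model commonly used in resource estimation, where a distance-d code suppresses errors as a·(p/p_th)^((d+1)/2). The constants below (crossing prefactor a = 0.03, threshold p_th = 0.01) are illustrative assumptions, not measured values for any particular hardware:

```python
def logical_error_rate(p, d, a=0.03, p_th=0.01):
    """Exponential suppression model: P_L = a * (p / p_th) ** ((d + 1) / 2).
    Only meaningful below threshold (p < p_th), where larger d helps."""
    return a * (p / p_th) ** ((d + 1) // 2)

p = 0.001  # assumed physical error rate
for d in (3, 5, 7, 9):
    print(d, logical_error_rate(p, d))

# At distance 9 the logical rate is ~3e-7, more than three orders of
# magnitude below the physical rate: error correction helps, not hinders.
assert logical_error_rate(p, 9) < p
```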
Measuring progress across Level 2
Once entrance to the Resilient Level is achieved, as an industry we need to be able to measure continued progress toward Level 3. Not every type of quantum computing hardware will achieve Level 3 Scale; the requirements to reach practical quantum advantage at Level 3 include achieving upwards of 1000 logical qubits operating at a mega-rQOPS with logical error rates better than 10^{-12}. And so it is critical to be able to understand advancements within Level 2 toward these requirements.
Inspired in part by DiVincenzo’s criteria, we propose to measure progress along four axes: universality, scalability, fidelity, and composability. For each axis we offer the following ideas on how to measure it, with hopes the community will build on them:
Criteria to advance from Level 2 to Level 3 Scale
The exit of the resilient level of logical computation will be marked by large-depth, high-fidelity computations involving upwards of hundreds of logical qubits. For example, a logical, fault-tolerant computation on ~100 logical qubits or more with a universal set of composable logical operations with an error rate of ~10^{-8} or better will be necessary. At Level 3, performance of a quantum supercomputer can then be measured by reliable quantum operations per second (rQOPS). Ultimately, a quantum supercomputer will be achieved once the machine is able to demonstrate 1000 logical qubits operating at a mega-rQOPS with a logical error rate of 10^{-12} or better.
It’s no doubt an exciting time to be in quantum computing. Our industry is at the brink of reaching the next implementation level, Level 2, which puts our industry on a path to ultimately achieving practical quantum advantage. Together as a community we have an opportunity to help measure progress across Level 2, and to introduce benchmarks for the industry. If you have ideas or feedback on criteria to enter Level 2, or on how to measure progress, we’d love to hear from you.
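The scale targets can be made concrete with a back-of-the-envelope check: a machine running at a mega-rQOPS for a full day executes close to 10^11 operations, so a logical error rate of 10^{-12} keeps the expected number of faults over the whole computation below one (the one-day duration is an arbitrary illustrative workload):

```python
rqops = 1_000_000           # reliable quantum operations per second (mega-rQOPS)
logical_error_rate = 1e-12  # Level 3 target logical error rate
seconds_per_day = 24 * 3600

total_ops = rqops * seconds_per_day            # 86.4 billion operations
expected_faults = total_ops * logical_error_rate

print(total_ops, expected_faults)
# The whole day-long computation expects ~0.09 faults, i.e. it is
# likely to complete without a single logical error.
assert expected_faults < 1
```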
[1] Our criteria build on and complement criteria of both DiVincenzo (DiVincenzo, David P. (2000-04-13). “The Physical Implementation of Quantum Computation”. Fortschritte der Physik. 48 (9–11): 771–783) and Gottesman (Gottesman, Daniel. (2016-10). “Quantum fault tolerance in small experiments”. https://arxiv.org/abs/1610.03507), who have previously outlined important criteria for achieving quantum computing and its fault tolerance.
The post Defining logical qubits: Criteria for Resilient Quantum Computation appeared first on Q# Blog.
This blog offers an inside look into the computation of these estimates. Our resource estimator supports various input formats for quantum programs, including Q# and Qiskit, which are then translated into QIR, the Quantum Intermediate Representation. In addition to customizable qubit parameters, the experience also makes use of the estimator’s predefined qubit models. To compute physical resource estimates from logical resource counts extracted from papers (counts that do not include the overhead of quantum error correction), we use a specialized resource estimation operation in Q#. Furthermore, we have developed an algorithm in Rust and translated it into QIR by leveraging the LLVM framework, which also powers QIR. The following three sections delve into the specific details for each encryption algorithm addressed in our interactive experience.
In the experience we compare the following three cryptographic algorithms at different key strengths (for elliptic curve cryptography, these correspond to concrete prime-field Weierstrass curves, which you can look up via the link):
| Algorithm | Standard | Enhanced | Highest |
| --- | --- | --- | --- |
| Elliptic curve | P-256 | P-384 | P-521 |
| RSA | 2048 | 3072 | 4096 |
| AES | 128 | 192 | 256 |
In the estimation, we assume that we lower the quantum algorithm to a sequence of physical quantum gates. For these we assume the following two choices of qubits and error rates. The values are based on predefined qubit parameters available in the resource estimator; the Majorana and gate-based predefined parameters correspond to the topological and superconducting qubit types in the experience, respectively.
| Qubit type and error rate | Majorana (reasonable) | Majorana (optimistic) | Gate-based (reasonable) | Gate-based (optimistic) |
| --- | --- | --- | --- | --- |
| Measurement time | 100 ns | 100 ns | 100 ns | 100 ns |
| Gate time | 100 ns | 100 ns | 50 ns | 50 ns |
| Measurement error rate | 0.0001 | 0.000001 | 0.001 | 0.0001 |
| Gate error rate | 0.05 | 0.01 | 0.001 | 0.0001 |
Elliptic curve cryptography (ECC) is a public-key cryptography approach based on the algebraic structure of elliptic curves. The approach requires smaller key sizes compared to approaches such as RSA, while providing equivalent security against classical cryptanalysis methods. The paper Improved quantum circuits for elliptic curve discrete logarithms (arXiv:2001.09580) describes a quantum algorithm to solve the elliptic curve discrete logarithm problem (ECDLP) based on Shor’s algorithm. We make use of the Q# operation AccountForEstimates (also find details on how to use the operation), which allows us to derive physical resource estimates from previously computed logical ones. This operation is very helpful when logical estimates have already been computed, as they are for example in this paper, where they are listed as part of Table 1.
From that table we extract the relevant metrics, which are the number of T gates, the number of measurement operations, and the number of qubits. The other metrics are not relevant for the computation, since the physical resource estimation relies on Parallel Synthesis Sequential Pauli Computation (PSSPC, Appendix D in arXiv:2211.07629), which commutes all Clifford operations and replaces them by multi-qubit Pauli measurements. The paper discusses various optimization flags in the implementation to minimize the logical qubit count, the T count, or the logical depth. We found that the physical resource estimates are best, both for physical qubits and runtime, when using the option to minimize qubit count. The following Q# program includes the estimates for the considered key sizes 256, 384, and 521.
open Microsoft.Quantum.ResourceEstimation;

operation ECCEstimates(keysize: Int) : Unit {
    if keysize == 256 {
        use qubits = Qubit[2124];
        AccountForEstimates([
            TCount(7387343750),          // 1.72 * 2.0^32
            MeasurementCount(118111601)  // 1.76 * 2.0^26
        ], PSSPCLayout(), qubits);
    } else if keysize == 384 {
        use qubits = Qubit[3151];
        AccountForEstimates([
            TCount(25941602468),         // 1.51 * 2.0^34
            MeasurementCount(660351222)  // 1.23 * 2.0^29
        ], PSSPCLayout(), qubits);
    } else if keysize == 521 {
        use qubits = Qubit[4258];
        AccountForEstimates([
            TCount(62534723830),          // 1.82 * 2.0^35
            MeasurementCount(1707249501)  // 1.59 * 2.0^30
        ], PSSPCLayout(), qubits);
    } else {
        fail $"keysize {keysize} is not supported";
    }
}
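As a quick sanity check, the power-of-two annotations in the comments of the Q# program above can be reproduced in a few lines of Python (the values are taken directly from the program; the tolerance is our own arbitrary choice):

```python
# (keysize, exact T count from the program, approximation from its comment)
t_counts = [
    (256, 7_387_343_750, 1.72 * 2.0**32),
    (384, 25_941_602_468, 1.51 * 2.0**34),
    (521, 62_534_723_830, 1.82 * 2.0**35),
]
for keysize, exact, approx in t_counts:
    rel_err = abs(approx - exact) / exact
    print(keysize, rel_err)
    # The two-significant-digit annotations match to well under 0.0001%.
    assert rel_err < 1e-6
```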
We can estimate this Q# program by submitting it to an Azure Quantum workspace using the azure-quantum Python package. To do so, we set up a connection to an Azure Quantum workspace (learn how to create a workspace). You can find the values for resource_id and location on the Overview page of the Quantum workspace. (The complete code example is available on GitHub.)
from azure.quantum import Workspace
from azure.quantum.target.microsoft import MicrosoftEstimator, QubitParams, QECScheme

workspace = Workspace(
    resource_id="",
    location=""
)

estimator = MicrosoftEstimator(workspace)
We then define the input parameters for the job, in which we specify the key size, here 256. We use batching to submit multiple target parameter configurations at once; here we specify the four configurations that correspond to the realistic and optimistic settings for both gate-based and Majorana qubits. For all configurations, we set the error budget to 0.333, i.e., we compute physical resource estimates for a success rate of about 67%.
params = estimator.make_params(num_items=4)
params.arguments["keysize"] = 256
# Error budget
params.error_budget = 0.333
# Gate-based (realistic)
params.items[0].qubit_params.name = QubitParams.GATE_NS_E3
# Gate-based (optimistic)
params.items[1].qubit_params.name = QubitParams.GATE_NS_E4
# Majorana (realistic)
params.items[2].qubit_params.name = QubitParams.MAJ_NS_E4
params.items[2].qec_scheme.name = QECScheme.FLOQUET_CODE
# Majorana (optimistic)
params.items[3].qubit_params.name = QubitParams.MAJ_NS_E6
params.items[3].qec_scheme.name = QECScheme.FLOQUET_CODE
Finally, we create a job by submitting the Q# operation together with the input parameters, and retrieve the results after it has completed. We then use the result object to create a summary table using the summary_data_frame function. The table contains various entries, but in this example we only print the numbers of physical qubits and the physical runtimes, the same values that are plotted in the experience on the Azure Quantum website.
job = estimator.submit(ECCEstimates, input_params=params)
results = job.get_results()
table = results.summary_data_frame(labels=[
    "Gate-based (reasonable)",
    "Gate-based (optimistic)",
"Majorana (reasonable)",
"Majorana (optimistic)"
])
print()
print(table[["Physical qubits", "Physical runtime"]])
The output is as follows:
Physical qubits Physical runtime
Gate-based (reasonable) 5.87M 21 hours
Gate-based (optimistic) 1.54M 11 hours
Majorana (reasonable) 3.69M 8 hours
Majorana (optimistic) 1.10M 4 hours
The estimates in the table are formatted for better readability. You can also retrieve the non-formatted values; e.g., the number of physical qubits and the physical runtime for the first configuration (gate-based realistic) are accessed with results[0]["physicalCounts"]["physicalQubits"] and results[0]["physicalCounts"]["runtime"], respectively.
RSA is one of the oldest, yet still widely used, public-key cryptography approaches. The paper How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits (arXiv:1905.09749) describes an implementation to factor RSA integers based on state-of-the-art quantum operations for phase estimation and quantum arithmetic. The code is mostly similar to the code we used for the ECC estimates described above. However, we implemented the algorithm in Rust and compiled it to LLVM. Therefore, we submit the QIR, which is the LLVM output, directly to the Azure Quantum Resource Estimator. (The complete code example is available on GitHub.)
import urllib.request
bitcode = urllib.request.urlopen("https://aka.ms/RE/someniceuri").read()
The entry point in this implementation takes four input arguments: the product to factor (in this sample the 2048-bit RSA integer from the RSA challenge), a generator, and two parameters that control the windowed arithmetic in the implementation. We take their values from the paper, which suggests 5 as a good value for both. Then, we configure the qubit parameters and QEC scheme in the input parameters as above, and submit them together with the bitcode to the resource estimator.
params = estimator.make_params(num_items=4)
params.arguments["product"] = "25195908475657893494027183240048398571429282126204032027777137836043662020707595556264018525880784406918290641249515082189298559149176184502808489120072844992687392807287776735971418347270261896375014971824691165077613379859095700097330459748808428401797429100642458691817195118746121515172654632282216869987549182422433637259085141865462043576798423387184774447920739934236584823824281198163815010674810451660377306056201619676256133844143603833904414952634432190114657544454178424020924616515723350778707749817125772467962926386356373289912154831438167899885040445364023527381951378636564391212010397122822120720357"
params.arguments["generator"] = 7
params.arguments["exp_window_len"] = 5
params.arguments["mul_window_len"] = 5
# specify error budget, qubit parameter and QEC scheme assumptions
params.error_budget = 0.333
# ...
job = estimator.submit(bitcode, input_params=params)
results = job.get_results()
The code for evaluating the data is the same and returns the following table:
Physical qubits Physical runtime
Gate-based (reasonable) 25.17M 1 day
Gate-based (optimistic) 5.83M 12 hours
Majorana (reasonable) 13.40M 9 hours
Majorana (optimistic) 4.18M 5 hours
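The windowed arithmetic mentioned above trades a small table of precomputed powers for far fewer modular multiplications. The quantum implementation uses reversible circuits and table lookups; the purely classical sketch below only illustrates the windowing idea with the paper's suggested window length of 5 (the function name and structure are our own):

```python
def windowed_modexp(base, exponent, modulus, window_len=5):
    # Precompute base**0 .. base**(2**window_len - 1) mod modulus.
    table = [1]
    for _ in range((1 << window_len) - 1):
        table.append(table[-1] * base % modulus)

    result = 1
    nwindows = -(-exponent.bit_length() // window_len)  # ceiling division
    # Process the exponent window_len bits at a time, most significant first:
    # square window_len times, then multiply by the precomputed power.
    for w in reversed(range(nwindows)):
        for _ in range(window_len):
            result = result * result % modulus
        window = (exponent >> (w * window_len)) & ((1 << window_len) - 1)
        result = result * table[window] % modulus
    return result

# Matches Python's built-in three-argument pow:
assert windowed_modexp(7, 123_456_789, 2**61 - 1) == pow(7, 123_456_789, 2**61 - 1)
```

With a 2048-bit exponent, a window length of 5 means one table multiplication per 5 exponent bits instead of one per bit, which is the kind of saving the paper exploits in its reversible arithmetic.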
We can use the same program to compute resource estimates for other RSA integers, including the RSA challenge numbers RSA-3072 and RSA-4096, whose estimates are part of the cryptography experience on the Azure Quantum website.
The Advanced Encryption Standard (AES) is a symmetric-key algorithm and a standard for the US federal government. In order to obtain the physical resource estimates for breaking AES, we started from the logical estimates in Implementing Grover oracles for quantum key search on AES and LowMC (arXiv:1910.01700, Table 8), with updates on the qubit counts suggested in Quantum Analysis of AES (Cryptology ePrint Archive, Paper 2022/683, Table 7). In principle, we could follow the approach using the AccountForEstimates operation as we did for ECC. However, this operation and the logical counts in the Azure Quantum Resource Estimator are represented using 64-bit integers for performance reasons, whereas the AES estimates require 256-bit integers. As a result, we used an internal non-production version of the resource estimator that can handle this precision. Further details can be made available to researchers who run into similar precision issues in their resource estimation projects.
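To see why 64-bit counters overflow here: Grover's key search over a k-bit key requires on the order of 2^(k/2) iterations, and every iteration contributes additional gates, so for the larger AES key sizes the logical counts alone exceed what a 64-bit integer can hold. A rough magnitude check (iteration counts only, not the papers' exact figures):

```python
UINT64_MAX = 2**64 - 1

def grover_iterations(key_bits):
    # Grover's search needs on the order of sqrt(2**k) = 2**(k // 2) iterations.
    return 2 ** (key_bits // 2)

print(grover_iterations(128))  # 2**64: already past the 64-bit boundary
print(grover_iterations(256))  # 2**128: far beyond 64-bit range

assert grover_iterations(128) > UINT64_MAX
assert grover_iterations(256) > UINT64_MAX
```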
The Azure Quantum Resource Estimator can be applied to estimate any quantum algorithm, not only cryptanalysis. Learn how to get started in Azure Quantum today with the Azure Quantum documentation. There you will find how to explore all the rich capabilities in various notebooks, with applications in quantum chemistry, quantum simulation, and arithmetic. You can learn how to submit your own quantum programs written in Q#, Qiskit, or provided directly as QIR, as well as how to set up advanced resource estimation experiments and apply customizations such as space/time trade-offs.
The post Calculating resource estimates for cryptanalysis appeared first on Q# Blog.
Second, they need to be kept stable, which means that error correction will be needed to combat the fundamental noise processes that disrupt the quantum computer. Creating such stability essentially means forging the underlying noisy physical qubits into more stable logical qubits and using fault-tolerant methods to implement operations. Microsoft’s unique topological qubit design has stability built in at the hardware level, and in turn will require less overhead to realize logical, fault-tolerant computation with a quantum error-correcting code. No matter the underlying qubit design, advanced classical computational power will be required to keep a quantum machine stable, along with the underlying quantum error-correcting code.
Finally, a quantum supercomputer will necessarily be hybrid, both in its implementation and in the solutions it runs. After all, all quantum algorithms require a combination of quantum and classical compute to produce a solution. And it is in this careful design of the classical and quantum compute, together, where we will see future innovation and new types of solutions emerging. Hybrid quantum computing enables the seamless integration of quantum and classical compute. This is an important part of our path to quantum at scale and of integrating our quantum machine alongside supercomputing classical machines in the cloud.
Implementing hybrid quantum algorithms
Integrated Hybrid in Azure Quantum already allows classical and quantum code to be mixed together today. “This opens the door to a new generation of hybrid algorithms that can benefit from complex side-computations that happen while the quantum state of the processor stays coherent,” says Natalie Brown, Senior Advanced Physicist at Quantinuum.
A visualization of the protocol is shown here:
The number of repetitions of the loop in the middle block depends on the measurement outcome and cannot be determined in advance, i.e., this program cannot be implemented as a static quantum circuit. Once the measurements of the four lower qubits indicate the result “0000”, the topmost qubit is passed on as the output of the computation. In case any other syndrome is measured, the five qubits are reset and the procedure starts over.
What these two quantum algorithms have in common is that they require complex control flow, including measurements that are applied during the computation while some part of the quantum computer remains coherent.
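Because each round's outcome is random, the number of repetitions in such a repeat-until-success protocol is a geometric random variable, which is exactly why it cannot be expressed as a static circuit. A classical Monte Carlo sketch of the control flow (the success probability of 0.5 is an arbitrary illustrative value, not the protocol's actual rate):

```python
import random

def rus_attempts(success_prob, rng):
    """Loop until the syndrome check passes; return the number of attempts."""
    attempts = 1
    while rng.random() >= success_prob:  # syndrome != "0000": reset and retry
        attempts += 1
    return attempts

rng = random.Random(42)
samples = [rus_attempts(0.5, rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(mean)  # close to 1 / 0.5 = 2 expected attempts (geometric distribution)
assert 1.8 < mean < 2.2
```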
Experimental results
Recently, as shared in a paper posted on arxiv.org, a team of researchers from Microsoft and Quantinuum developed and ran magic state distillation (MSD) and repeat-until-success (RUS) algorithms on the H1-Series in Azure Quantum.
The programs for the applications were written in Q# and then compiled to the Quantum Intermediate Representation (QIR), which is based on LLVM, a representation widely used in classical compilers. QIR can represent quantum and classical logic using function declarations, basic blocks, and control flow instructions. QIR also enables us to use existing LLVM tools and techniques, such as constant folding, loop unrolling, and dead code elimination, to analyze and optimize the program logic, eliminating unnecessary instructions and reducing transport steps.
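Constant folding, one of the LLVM-style optimizations mentioned above, can be illustrated in miniature with Python's ast module (a toy folder for a few integer operators, not how QIR's passes are actually implemented):

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Collapse BinOp nodes whose operands are compile-time constants."""
    OPS = {ast.Add: lambda a, b: a + b,
           ast.Sub: lambda a, b: a - b,
           ast.Mult: lambda a, b: a * b}

    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first (bottom-up)
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in self.OPS):
            value = self.OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ast.parse("2 * 3 + 4", mode="eval")
folded = ConstantFolder().visit(tree)
# The whole expression collapses to a single constant at "compile time".
assert isinstance(folded.body, ast.Constant) and folded.body.value == 10
```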
Quantinuum’s H1-Series quantum computer leverages QIR in a powerful way: the Quantinuum quantum computer allows hybrid classical/quantum programs to be executed. On the classical side, rich control flow is supported through integration with QIR, including:
These primitive building blocks can be used to orchestrate computations such as MSD and RUS.
MSD protocol based on the [[5,1,3]] quantum error-correcting code
The left side of the following figure shows the expectation values for the actual run on the Quantinuum H1-1 system, as well as the results of a simulation run on the H1-1 emulator (denoted as H1-1E). We plot the expectations with respect to three different Pauli frames X, Y, and Z, which completely characterize the state of the qubits. The boxes indicate the ideal result, which is only achievable for gates that are completely noiseless. The right side of the figure shows the probability of distillation succeeding at different limits, running both on the H1-1 system and the H1-1E emulator. The dashed black line indicates the probability of success expected for a perfect state preparation on a noiseless device.
Two-stage RUS circuit
Researchers demonstrated the viability of this RUS protocol using QIR on Quantinuum’s QCCD simulator, which models realistic noise and errors in trapped-ion systems, and by running it on the actual device. QIR was used to express four different versions of the RUS circuit, each using a different combination of recursion or loops, and Q# or OpenQASM as the source language.
As shown in the figure on the left above, the RUS protocol shows the best performance when the Q# to QIR compiler is used and applied to a Q# implementation that realizes the RUS protocol as a for loop. As the iteration limit is increased, there is a clear drop in the performance of the recursion implementations, while the performance of the loop implementations closely tracks the hand-optimized OpenQASM 2.0++ code.
A full Q# code sample that runs in Azure Quantum and that implements this hybrid program can be found at https://aka.ms/AQ/Samples/RUS.
In this blog post, we have shown how Q# can be used to implement and optimize fault-tolerant protocols that use a hybrid approach of quantum and classical logic. We have presented two examples of such protocols, MSD and RUS circuits, and demonstrated their execution and performance through Azure Quantum on Quantinuum’s H1-Series system, which runs on an ion-trap quantum charge-coupled device (QCCD) architecture platform. We have also shown how QIR can leverage the LLVM toolchain to enable interoperability and portability across different quantum hardware platforms.
#quantumcomputing #quantumcloud #azurequantum #quantinuum #QIR
The post Azure Quantum Integrated Hybrid unlocks algorithmic primitives appeared first on Q# Blog.
The post Introducing the Azure Quantum Development Kit Preview appeared first on Q# Blog.
The Azure Quantum team is excited to announce the initial preview of the new Azure Quantum Development Kit (or QDK for short). This has been entirely rebuilt using a new codebase on a new technology stack, and this blog post outlines the why, the how, and some of the benefits of doing so.
The “tl;dr” is that we rewrote it (mostly) in Rust which compiles to WebAssembly for VS Code or the web, and to native binaries for Python. It’s over 100x smaller, over 100x faster, much easier to install & use, works fully in the browser, and is much more productive & fun for the team to work on.
Give it a try via the instructions at https://github.com/microsoft/qsharp/wiki/Installation, and read on for the details…
The existing Quantum Development Kit has grown organically over several years, first shipping in late 2017. Being in a fast-evolving space, it naturally evolved quickly too, incorporating many features and technologies along the way.
As we reflected on what we’d like the QDK to be going forward, it was clear some of the technologies and features would be a challenge to bring along, and that a rewrite might be the best solution. Some of our goals were:
Many quantum developers don’t come from a .NET background, being mostly familiar with Python. However, the existing QDK exposes much of the .NET ecosystem to developers, presenting an additional learning curve. Some examples are the MSBuild-based project & build system and NuGet package management. When working with customers on issues, they will sometimes be confused when needing to edit .csproj files, run commands such as “dotnet clean”, or troubleshoot NuGet packages for their Q# projects.
Providing a delightful & simplified experience, from installation to learning to coding to troubleshooting to submitting jobs to quantum computers is our primary goal.
The existing QDK has some code and dependencies that are platform specific. While these were not problems initially, as platforms have evolved this has caused challenges. For example, Apple Silicon and Windows on ARM64 are not fully supported in the existing QDK. We also wanted the tools to run in the browser, such as in our new https://quantum.microsoft.com portal, or in a https://vscode.dev hosted editor.
With the runtime dependencies in the existing QDK, the full set of binaries that need to be installed has grown quite large. Besides the .NET runtime itself, there are some F# library dependencies in the parser, some C++ multithreading library dependencies in the simulator, some NuGet dependencies for the Q# project SDK, etc. In total, this can add up to over 180MB when installed locally after building a simple Q# project. Coordinating the download and initialization of the binaries, as well as the complexity of the interactions between them, can often lead to performance & reliability issues.
As the existing QDK had come to span multiple repositories, multiple build pipelines, multiple languages & runtimes (each often with their own set of dependencies), and multiple distribution channels, the speed at which we could check in a feature or produce a release has slowed, and a great deal of time is spent on codebase maintenance, security updates, and troubleshooting build issues. To provide a productive (and enjoyable) engineering system going forward, dramatic simplification was needed.
Around the end of 2022 we set about prototyping some ideas, which grew into the new QDK we are releasing in preview today. The basic philosophy behind engineering the new QDK is as follows:
By writing as much as possible in Rust, we have a codebase that can easily target native binaries for any platform supported by the Rust compiler (which we build into our Python wheels) and build for WebAssembly (via wasm-bindgen) to run in the browser. With a focused codebase, the resulting binaries are very small & fast too.
There is a cost to every dependency you take. The cost to learn it, the cost to install it (i.e., build times and disk space), the cost to update & maintain it (i.e., as security issues are reported), the cost to final product size, and so on. Sometimes these costs are worth paying for what you get in return, but the taxes accumulate over time. We are very mindful and minimal in the dependencies we take.
For our new codebase, we have limited the languages used to:
For those three languages, we keep dependencies to a minimum, nearly all of which can be seen in the Cargo.toml and package.json files at the root of the repo.
The high-level diagram below shows roughly how this all fits together in our VS Code extension, Python packages, and for general web site integration.
Setting up a build environment for developers (or CI agents) should be fast. For the new codebase, currently you just install Rust, Python, and Node.js, clone one repo, and run one Python build script.
Developing the product should be fast. When working on the core compiler Rust code, the development inner loop is often as fast as clicking ‘run’ on a unit test in VS Code via the excellent “rust-analyzer” extension. When working on the TypeScript code for the VS Code extension, with “esbuild” running in watch mode it’s as quick as saving the changes and pressing F5 to launch the Extension Development Host.
The build infrastructure should be easy to keep working. Our CI and build pipeline use the same ‘build.py’ script in the root of the repo that developers use locally to build & test.
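As a hedged sketch of that idea (the step names, commands, and function names below are illustrative, not the actual interface of the repo's build.py), a single cross-platform build script shared by developers and CI might be organized like this:

```python
"""Illustrative sketch of a unified build script.

Step names and commands are hypothetical; see build.py in the
microsoft/qsharp repo for the real script.
"""
import subprocess

# Each step maps to the command that runs it, in build order.
STEPS = {
    "rust": ["cargo", "build", "--release"],
    "wasm": ["wasm-pack", "build", "--target", "web"],
    "npm":  ["npm", "run", "build"],
    "test": ["cargo", "test"],
}

def plan(selected=None):
    """Return the commands to run, in order, for the selected steps
    (all steps when none are named)."""
    names = selected if selected else list(STEPS)
    unknown = [n for n in names if n not in STEPS]
    if unknown:
        raise ValueError(f"unknown step(s): {unknown}")
    return [STEPS[n] for n in names]

def build(selected=None):
    """Run each planned command, failing fast the way CI would."""
    for cmd in plan(selected):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)
```

Because local builds and CI call the same entry point, there is only one set of build logic to keep working.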
Last, but certainly not least: avoid the extraneous. Every feature added should have a clear need and add significant value. This makes for a more streamlined & intuitive product for the customer, and a less complex codebase to do further development in.
We’re pretty proud of the result. It’s no exaggeration to say the new Azure Quantum Development Kit is 100x smaller, 100x faster, available on Windows, Mac, Linux, and the web, and is a greatly simplified user experience.
As outlined above, the existing QDK results in over 180MB of binaries locally once a project is fully built and all dependencies installed. The VSIX package for our new VS Code extension is currently around 700KB and includes everything needed for Q# development in VS Code. (If you ‘pip install’ our Python packages to work with Q# via Python, that’s around another 1.3MB.) Installation typically takes a couple of seconds with no other dependencies. If you have VS Code (and Python/Jupyter if desired), you’re ready to install.
We have examples of programs that took minutes to compile in the existing QDK; those same programs now compile in milliseconds in the new QDK. The language service is so fast that most operations run on every keystroke and feel instant. The simulator can run thousands of ‘shots’ per second for many common algorithms on a good laptop.
The build pipelines for the existing QDK take between 2 and 3 hours to complete, are fragile, and issues often require coordinated check-ins across multiple repos. For the new QDK, all code is in one repo, and we build, test, and push live to our online playground in around 10 minutes on every commit to main. Our publishing pipeline uses largely the same script.
We’ve built an extremely fast & reliable installation, language service, compiler, and debugger. Oh, and it all works inside the browser too!
A couple of years ago VS Code introduced VS Code for the Web (https://code.visualstudio.com/docs/editor/vscode-web), with the ability to run the IDE in a browser with no local install, such as at https://vscode.dev or by pressing “.” when in a GitHub repo. By building our extension entirely as a web extension, all our features run equally well in VS Code desktop or in the browser.
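Concretely, what makes a VS Code extension a web extension is that its package.json declares a `browser` entry point whose bundle avoids Node-only APIs. A trimmed, illustrative manifest fragment (the names and paths here are made up, not the extension's actual values) might look like:

```json
{
  "name": "example-qsharp-extension",
  "main": "./out/extension.js",
  "browser": "./out/extension-web.js",
  "engines": { "vscode": "^1.80.0" }
}
```

VS Code desktop loads the `main` entry, while vscode.dev loads the `browser` one; with a wasm-based compiler, both bundles can share essentially all of their logic.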
By way of example, the below screenshot shows loading the editor in the browser by visiting https://vscode.dev, running a Q# file under the debugger, viewing the quantum simulator output in the Debug Console, while also signed in to an Azure Quantum Workspace shown in the Explorer sidebar (to which the current program could be submitted) – all without anything needing to be installed on the local machine.
We think the improvements in the user experience for the new QDK really are a quantum leap (bad pun intended).
This is an early preview, and we still have several features to add before we get to our ‘stable’ release, some of the main ones being:
Once the core product is solid, we have a laundry list of further features and Q# language improvements we want to get to, which you can view and contribute to on our GitHub repo.
The existing QDK (https://learn.microsoft.com/en-us/azure/quantum/install-overview-qdk) is still fully supported and should be used if the new QDK Preview doesn’t meet your needs or is changing too frequently as we iterate towards our stable release.
We’d love for you to give it a try and send us your feedback. The installation guide and other getting-started documentation are currently on our GitHub wiki at https://github.com/microsoft/qsharp/wiki/Installation. You can report any issues, weigh in on feature requests, or contribute code on that same GitHub repo.
The post Introducing the Azure Quantum Development Kit Preview appeared first on Q# Blog.