The post Introducing the Azure Quantum Development Kit Preview appeared first on Q# Blog.

The Azure Quantum team is excited to announce the initial preview of the new Azure Quantum Development Kit (or QDK for short). This has been entirely rebuilt using a new codebase on a new technology stack, and this blog post outlines the why, the how, and some of the benefits of doing so.

The “tl;dr” is that we rewrote it (mostly) in Rust, which compiles to WebAssembly for VS Code or the web, and to native binaries for Python. It’s over 100x smaller, over 100x faster, much easier to install & use, works fully in the browser, and is much more productive & fun for the team to work on.

Give it a try via the instructions at https://github.com/microsoft/qsharp/wiki/Installation, and read on for the details…

The existing Quantum Development Kit has grown organically over several years, first shipping in late 2017. Being in a fast-evolving space, it naturally evolved quickly too, incorporating many features and technologies along the way.

As we reflected on what we’d like the QDK to be going forward, it was clear that some of the technologies and features would be a challenge to bring along, and that a rewrite might be the best solution. Some of our goals were:

Many quantum developers don’t come from a .NET background, being mostly familiar with Python. However, the existing QDK exposes much of the .NET ecosystem to developers, adding an extra learning curve. Examples include the MSBuild-based project & build system and NuGet package management. When working with customers on issues, they are sometimes confused by the need to edit .csproj files, run commands such as “dotnet clean”, or troubleshoot NuGet packages for their Q# projects.

Providing a delightful & simplified experience, from installation to learning to coding to troubleshooting to submitting jobs to quantum computers is our primary goal.

The existing QDK has some code and dependencies that are platform specific. While these were not problems initially, as platforms have evolved this has caused challenges. For example, Apple Silicon and Windows on ARM64 are not fully supported in the existing QDK. We also wanted the tools to run in the browser, such as in our new https://quantum.microsoft.com portal, or in a https://vscode.dev hosted editor.

With the runtime dependencies in the existing QDK, the full set of binaries that need to be installed has grown quite large. Besides the .NET runtime itself, there are some F# library dependencies in the parser, some C++ multi-threading library dependencies in the simulator, some NuGet dependencies for the Q# project SDK, etc. In total, this can add up to over 180MB when installed locally after building a simple Q# project. Coordinating the download and initialization of the binaries, as well as the complexity of the interactions between them, can often lead to performance & reliability issues.

As the existing QDK had come to span multiple repositories, multiple build pipelines, multiple languages & runtimes (each often with their own set of dependencies), and multiple distribution channels, the speed at which we could check in a feature or produce a release has slowed, and a great deal of time is spent on codebase maintenance, security updates, and troubleshooting build issues. To provide a productive (and enjoyable) engineering system going forward, dramatic simplification was needed.

Around the end of 2022 we set about prototyping some ideas, which grew into the new QDK we are releasing in preview today. The basic philosophy behind engineering the new QDK is as follows:

By writing as much as possible in Rust, we have a codebase that can easily target native binaries for any platform supported by the Rust compiler (which we build into our Python wheels) and build for WebAssembly (via wasm-bindgen) to run in the browser. With a focused codebase, the resulting binaries are very small & fast too.

There is a cost to every dependency you take. The cost to learn it, the cost to install it (i.e., build times and disk space), the cost to update & maintain it (i.e., as security issues are reported), the cost to final product size, and so on. Sometimes these costs are worth paying for what you get in return, but the taxes accumulate over time. We are very mindful and minimal in the dependencies we take.

For our new codebase, we have limited the languages used to:

- Rust for the core of the product. This has the ‘batteries included’ benefit of cargo to manage dependencies, builds, testing, etc.
- Python, as we build & ship packages to PyPI as part of the QDK and use Python for scripting tasks in the repo where practical.
- JavaScript (including TypeScript), as we build a VS Code extension and write some web integration code.

For those three languages, we keep dependencies to a minimum, nearly all of which can be seen in the Cargo.toml and package.json files at the root of the repo.

The below high-level diagram shows roughly how this all fits together in our VS Code extension, Python packages, and for general web site integration.

Setting up a build environment for developers (or CI agents) should be fast. For the new codebase, currently you just install Rust, Python, and Node.js, clone one repo, and run one Python build script.

Developing the product should be fast. When working on the core compiler Rust code, the development inner-loop is often as fast as clicking ‘run’ on a unit test in VS Code via the excellent “rust-analyzer” extension. When working on the TypeScript code for the VS Code extension, with “esbuild” running in watch-mode it’s as quick as saving the changes and pressing F5 to launch the Extension Development Host.

The build infrastructure should be easy to keep working. Our CI and build pipeline use the same ‘build.py’ script in the root of the repo that developers use locally to build & test.

Last but certainly not least, is to avoid the extraneous. Every feature added should have a clear need and add significant value. This provides for a more streamlined & intuitive product for the customer, and a less complex codebase to do further development in.

We’re pretty proud of the result. It’s no exaggeration to say the new Azure Quantum Development Kit is 100x smaller, 100x faster, available on Windows, Mac, Linux, and the web, and is a greatly simplified user experience.

As outlined above, the existing QDK results in over 180MB of binaries locally once a project is fully built and all dependencies installed. The VSIX package for our new VS Code extension is currently around 700KB and includes everything needed for Q# development in VS Code. (If you ‘pip install’ our Python packages to work with Q# via Python, that’s around another 1.3MB). Installation typically takes a couple of seconds with no other dependencies. If you have VS Code (and Python/Jupyter if desired), you’re ready to install.

We have examples of programs that would take minutes to compile in the existing QDK. Those same programs are now measured in milliseconds in the new QDK. The language service is so fast, most operations are done on every keystroke and feel instant. The simulator can run 1000s of ‘shots’ per second for many common algorithms on a good laptop.

The build pipelines for the existing QDK take 2–3 hours to complete, are fragile, and issues often require coordinated check-ins across multiple repos. For the new QDK, all code is in one repo, and we build, test, and push live to our online playground in around 10 minutes on every commit to main. Our publishing pipeline uses largely the same script.

We’ve built an extremely fast & reliable installation, language service, compiler, and debugger. Oh, and it all works inside the browser too!

A couple of years ago, VS Code introduced VS Code for the Web (https://code.visualstudio.com/docs/editor/vscode-web), with the ability to run the IDE in a browser with no local install, such as at https://vscode.dev or by pressing “.” when in a GitHub repo. By building our extension entirely as a web extension, all our features run equally well in VS Code desktop or in the browser.

By way of example, the below screenshot shows loading the editor in the browser by visiting https://vscode.dev, running a Q# file under the debugger, viewing the quantum simulator output in the Debug Console, while also signed in to an Azure Quantum Workspace shown in the Explorer sidebar (to which the current program could be submitted) – all without anything needing to be installed on the local machine.

We think the improvements in the user experience for the new QDK really are a quantum leap (bad pun intended).

This is an early preview, and we still have several features to add before we get to our ‘stable’ release, some of the main ones being:

- Multi-file support: For this preview all code for a Q# program needs to be in one source file. (With Q#, you can simply ‘concat’ source files together if need be).
- Richer QIR support: This preview currently can compile programs for hardware that supports the QIR base-profile which, as the name suggests, provides for a basic level of capabilities. With some hardware starting to support more advanced capabilities (currently being specified in the QIR Adaptive Profile), we will be adding support for that also. (Note that running in the simulator isn’t restricted to these profiles and can run any Q# code).
- Migration: Because the new QDK is not entirely backwards compatible with the existing QDK, we also have a lot of work to do on updating samples & documentation. (The “Differences from the previous QDK” page on our wiki will highlight changes and how to migrate code).

Once the core product is solid, we have a laundry list of further features and Q# language improvements we want to get to, which you can view and contribute to on our GitHub repo.

The existing QDK (https://learn.microsoft.com/en-us/azure/quantum/install-overview-qdk) is still fully supported and should be used if the new QDK Preview doesn’t meet your needs or is changing too frequently as we iterate towards our stable release.

We’d love for you to give it a try and give us your feedback. The installation guide and other getting started documentation is currently on our GitHub wiki at https://github.com/microsoft/qsharp/wiki/Installation. You can report any issues, weigh in on feature requests, or contribute code on that same GitHub repo.



The post Modeling quantum architecture with Azure Quantum Resource Estimator appeared first on Q# Blog.

There are numerous architectural decisions to consider when building quantum computers, which have the potential to address real-world computational challenges like quantum chemistry and quantum cryptography. Researchers worldwide are engaged in developing various aspects of quantum computer architecture. Microsoft Azure Quantum Resource Estimator plays a pivotal role in assessing how different combinations of design choices might impact the performance of upcoming quantum computers.

Azure Quantum Resource Estimator was designed to assist researchers in estimating computational time and the requisite number of qubits based on diverse assumptions regarding hardware quality and error correction strategies. We have used a more powerful version of the Resource Estimator for many years as an internal tool for analyzing architectural decisions in our own quantum program. We have incorporated new options to offer similar capabilities to Azure Quantum users.

Continuing our commitment to enhancing the tool’s capabilities, we have recently introduced several new features. These updates include the ability to customize error budget distributions and implement custom distillation units.

In this article, we provide an overview of fundamental concepts related to the architecture of quantum computers, exploring their influence on the necessary resources and the capabilities offered by Microsoft Azure Quantum Resource Estimator to model these intricate structures.

One can represent the quantum computing stack as follows:

Scientists all around the world are collaborating on refining individual stack components and integrating them cohesively. The multitude of decisions made at each stack layer, coupled with diverse quantum algorithms, results in a vast array of possible combinations that warrant evaluation and comparison. Microsoft Azure Quantum Resource Estimator helps to assess and compare those combinations efficiently. You can submit your quantum program (such as in Q# or Qiskit) or Quantum Intermediate Representation (QIR) while specifying particular characteristics of a proposed quantum computing stack:

- Microarchitecture of magic state distillation and physical qubit parameters
- Quantum error correction
- Error budget allowed for the program

The Resource Estimator will then calculate the rQOPS of the architecture, along with the qubits and runtime required for the application given this combination.

In the current landscape of 2023, a significant challenge lies in the pursuit of rapidly responsive, stable, and scalable physical qubits to enable impactful applications in chemistry and materials science (read more at Communications of the ACM, 2023). Researchers are exploring a range of design possibilities, including instruction sets as well as diverse anticipated levels of speed and fidelity for these qubits.

In the process of resource estimation, a higher degree of specificity becomes necessary, encompassing various times for distinct actions involving qubits. Our recent updates here involve:

- Providing separate `process` and `readout` error rates for qubit measurements;
- Adding an idling error rate to the model;
- Separating `Clifford` and `readout` error rates in magic state distillation formulas.

Azure Quantum Resource Estimator supports:

- predefined qubit types with expected or targeted characteristics, and
- custom qubit types, for which users can provide specific times and error rates.

Here is an example of specifying a custom qubit type:

```
from azure.quantum import Workspace
from azure.quantum.target.microsoft import MicrosoftEstimator
from azure.quantum.target.microsoft.target import MeasurementErrorRate

# Enter your Azure Quantum workspace details here
workspace = Workspace(
    resource_id="",
    location=""
)
estimator = MicrosoftEstimator(workspace)

params = estimator.make_params()
params.qubit_params.name = "qubit_maj_ns_e6"
params.qubit_params.instruction_set = "Majorana"
params.qubit_params.one_qubit_measurement_time = "150 ns"
params.qubit_params.two_qubit_joint_measurement_time = "200 ns"
params.qubit_params.t_gate_time = "100 ns"
params.qubit_params.one_qubit_measurement_error_rate = MeasurementErrorRate(process=1e-6, readout=2e-6)
params.qubit_params.two_qubit_joint_measurement_error_rate = 1e-6

# test quantum program
from qiskit import QuantumCircuit
circ = QuantumCircuit(3)
circ.crx(0.2, 0, 1)
circ.ccx(0, 1, 2)

job = estimator.submit(circ, input_params=params)
job.wait_until_completed()
result = job.get_results()
print(result)
```

For brevity, we use a simple Qiskit circuit throughout this blog post. To learn how to run the Resource Estimator for Q# algorithms, see the Create the quantum algorithm section of the Resource Estimator documentation.

You can learn more about this at the Physical qubit parameters section of documentation.

Error correction in classical computing involves creating duplicates or checksums of data and periodically verifying these duplicates. In quantum computing, error correction is more complex due to the impossibility of copying information (see No-cloning theorem).

However, the basic principles are similar to classical error correction: achieving greater accuracy in computations requires extra resources (additional qubits and time). Analogous to classical computing, the extent of extra information can be quantified using a code distance (see Hamming distance). Opting for a higher code distance necessitates more resources but leads to enhanced computation fidelity.

We use the following exponential model for error rate suppression:

P = a · (p / p*)^((d + 1) / 2)

Here, *p* is the physical qubit error rate (computed from the various physical error rates above), *P* is the (output) logical error rate provided by an error correction scheme, *d* is the code distance, and *a* and *p** are coefficients specific to the scheme, called the crossing prefactor and the error correction threshold respectively. As you can see, if *p* < *p**, then *P* decreases as *d* increases.
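To make the suppression concrete, here is a small numeric sketch of this model in plain Python (the coefficient values are illustrative, not a specific hardware profile):

```python
def logical_error_rate(p, d, a=0.03, p_star=0.01):
    """Exponential suppression model: P = a * (p / p_star) ** ((d + 1) / 2).

    p: physical error rate, d: code distance,
    a: crossing prefactor, p_star: error correction threshold.
    """
    return a * (p / p_star) ** ((d + 1) / 2)

# Below threshold (p < p_star), each increase of d by 2 multiplies P
# by another factor of (p / p_star).
rates = [logical_error_rate(1e-3, d) for d in (3, 5, 7)]
print(rates)  # the logical error rate shrinks as the code distance grows
```

With p = 10⁻³ and these coefficients, d = 3 gives P = 3 × 10⁻⁴, d = 5 gives 3 × 10⁻⁵, and d = 7 gives 3 × 10⁻⁶.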

As mentioned above, the resources involved also grow as *d* increases. Here are examples for the Floquet scheme (arxiv.org: 2202.11829):

```
logicalCycleTime = 3 * oneQubitMeasurementTime * d
physicalQubitsPerLogicalQubit = 4 * d^2 + 8 * (d - 1)
```
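Plugging a few code distances into these Floquet-scheme formulas gives a feel for the cost of higher distances (plain Python; the 100 ns one-qubit measurement time is an assumed figure):

```python
def floquet_resources(d, one_qubit_measurement_time_ns=100):
    """Resource formulas for the Floquet scheme (arxiv.org: 2202.11829)."""
    logical_cycle_ns = 3 * one_qubit_measurement_time_ns * d
    physical_qubits = 4 * d**2 + 8 * (d - 1)
    return logical_cycle_ns, physical_qubits

for d in (3, 5, 7):
    cycle, qubits = floquet_resources(d)
    print(f"d={d}: logical cycle {cycle} ns, {qubits} physical qubits per logical qubit")
```

Moving from d = 3 to d = 7 slightly more than doubles the logical cycle time while nearly quintupling the physical qubit count: qubit overhead grows quadratically with code distance, time only linearly.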

Microsoft Azure Quantum Resource Estimator supports two predefined schemes of error correction: surface and Floquet, as well as custom error correction schemes.

The surface scheme is based on the premise that physical qubits form a lattice on a surface. In this arrangement, we have two types of qubits: data qubits, which play a role in primary algorithm computations, and measurement qubits, which serve as supplementary components. These qubits are organized in a checkerboard pattern, where each data qubit is surrounded by four measurement qubits, and vice versa. Boundary qubits are conceptually linked to qubits on the opposite side, creating a toroidal structure. Stabilization measurements are performed on corresponding qubits along different axes with a specific geometric pattern. This scheme exhibits versatility, as it can be applied to both gate-based qubits and Majorana qubits.

Conversely, the Floquet scheme places more stringent demands on the geometric arrangement of qubits but offers significant advantages in terms of time and space efficiency (as detailed in arxiv.org: 2202.11829). Qubits must be arranged in a grid with three neighbors each, and it should be possible to color the plaquettes with just three colors. Honeycomb is an example of such structure. Subsequently, stabilization measurements are executed periodically with a period of three, involving joint measurements between one qubit and one of its three neighbors at a time. This geometric structure aligns well with Majorana qubits and can bring substantial benefits when applied to them.

If using the Microsoft Quantum Development Kit, one can submit the following job with a custom error correction scheme (or select a predefined one):

```
from azure.quantum import Workspace
from azure.quantum.target.microsoft import MicrosoftEstimator

# Enter your Azure Quantum workspace details here
workspace = Workspace(
    resource_id="",
    location=""
)
estimator = MicrosoftEstimator(workspace)

params = estimator.make_params()
params.qec_scheme.error_correction_threshold = 0.01
params.qec_scheme.crossing_prefactor = 0.07
params.qec_scheme.logical_cycle_time = "3 * oneQubitMeasurementTime * codeDistance"
params.qec_scheme.physical_qubits_per_logical_qubit = "4 * codeDistance * codeDistance + 8 * (codeDistance - 1)"

# There are two predefined schemes: floquet_code and surface_code.
# The floquet_code can be applied to models with the Majorana instruction set;
# the surface_code can be applied to both Majorana and gate-based instruction sets.
# To use one, just specify the name of the predefined scheme:
# params.qec_scheme.name = "surface_code"

# test quantum program
from qiskit import QuantumCircuit
circ = QuantumCircuit(3)
circ.crx(0.2, 0, 1)
circ.ccx(0, 1, 2)

job = estimator.submit(circ, input_params=params)
job.wait_until_completed()
result = job.get_results()
print(result)
```

See more at the Quantum error correction schemes section of the Resource Estimator documentation.

To harness the benefits of quantum computing over classical counterparts, it is essential to devise quantum gates that cannot be effectively simulated on traditional non-quantum hardware. These operations can be envisioned as rotations of the Bloch sphere, executed at arbitrary angles. In this context, classical non-quantum hardware can efficiently handle rotations limited to multiples of 90 degrees. This particular set of operations is captured by the concept of Clifford gates.

By combining a set of Clifford gates with additional non-Clifford gates, it is possible to create a universal set of quantum gates. This means that any quantum gate can be efficiently approximated to the desired precision using a predefined sequence of operations from this set. This approximation process is commonly referred to as rotation synthesis.

Various quantum computer architectures might incorporate distinct non-Clifford gates within the rotation synthesis. These particular gates are often termed “magic gates,” and their associated states are labeled as “magic states.” A notable challenge in quantum computing stems from its inherent inability to duplicate data. Consequently, generating these magic states once and employing them indefinitely is unfeasible. Instead, each usage demands the creation of fresh instances of these magic states. This intricate procedure of generating magic states with a specified level of precision is known as magic state distillation.

One popular choice of magic state is the T-state, produced by the T-gate: the single-qubit phase gate diag(1, e^{iπ/4}).

In Azure Quantum Resource Estimator, we assume that the T-state is used as the magic state.
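Since the definition of the T-gate matters for everything that follows, here is a quick numpy sanity check: T is the phase rotation diag(1, e^{iπ/4}), its square is the Clifford S-gate, and eight applications return to the identity:

```python
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4)])  # the T-gate (pi/8 gate)
S = np.diag([1, 1j])                      # the Clifford S-gate

assert np.allclose(T @ T, S)                                 # T^2 = S
assert np.allclose(np.linalg.matrix_power(T, 8), np.eye(2))  # T^8 = I
```

That T halves the S rotation, yet cannot be built from Clifford gates alone, is precisely why it must be supplied as a distilled magic state.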

Algorithms designed for T-state distillation play a crucial role in enhancing qubit accuracy. These algorithms utilize multiple input qubits with low accuracy to generate an output qubit with higher accuracy. This process can encompass multiple rounds of distillation, progressively refining qubit quality until the desired standard is attained. Each round employs a specific algorithm known as a distillation unit.

For each distillation unit, one should specify how it improves qubit quality and what resources it consumes at runtime: the qubits involved and the time spent. These characteristics depend on the code distance used for the distillation, the physical quality of the qubits, and the accuracy provided originally or by the previous round of distillation.

It’s important to note that distinct sequences of distillations can yield greater efficiency for different qubit qualities (output error rates). In other words, depending on the initial error rate of an input qubit, specific sequences of distillation may be more adept at achieving the required error rate while utilizing fewer resources.

Here are examples of distillation units described in Assessing requirements to scale to practical quantum advantage:

| Distillation unit | # input Ts | # output Ts | Acceptance probability | # qubits | Time | Output error rate |
|---|---|---|---|---|---|---|
| 15-to-1 space-eff. physical | 15 | 1 | 1 − 15p | 12 | 46t | 35p³ |
| 15-to-1 space-eff. logical | 15 | 1 | 1 − 15P | 20n(d) | 13τ(d) | 35P³ |
| 15-to-1 RM prep. physical | 15 | 1 | 1 − 15p | 31 | 23t | 35p³ |
| 15-to-1 RM prep. logical | 15 | 1 | 1 − 15P | 31n(d) | 13τ(d) | 35P³ |

When we are estimating the performance of a particular quantum algorithm on a specific quantum computer (which has a certain qubit quality and operation speed), we might need different target output error rates. Considering two distillation units (15-to-1 space-efficiency and 15-to-1 RM preparation) and various code distances for each round, there could potentially be thousands of combinations to evaluate.
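Taking the table’s 15-to-1 formulas at face value, one can sketch how the output error rate compounds over successive rounds (plain Python; this ignores acceptance probability and resource costs, and the starting error rate is an assumed figure):

```python
def fifteen_to_one(p_in):
    # Output error rate of a 15-to-1 distillation unit: 35 * p^3
    return 35 * p_in**3

p = 1e-3                         # assumed error rate of raw T-states
round1 = fifteen_to_one(p)       # one round: 3.5e-8
round2 = fifteen_to_one(round1)  # two rounds: ~1.5e-21
```

Each round cubes the input error rate, so two rounds already take a 10⁻³ input down by many orders of magnitude; the question the Resource Estimator answers is which sequence of units and code distances reaches the target error rate most cheaply.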

Various research groups are actively working on developing distillation algorithms, and their approaches might differ from the ones described earlier. For instance, some algorithms could generate multiple output T-states, potentially reducing costs by sharing resources or enabling parallelization. With dozens of distillation algorithms in consideration, an intriguing opportunity arises to compare them against each other, encompassing diverse physical qubit attributes and algorithm variations. This comparative analysis could provide valuable insights into their relative effectiveness.

Microsoft Quantum invites researchers to assess the resources needed for their unique distillation approaches. Presently, we offer support for two established distillation units named `15-1 RM` and `15-1 space-efficient`, in addition to the flexibility to define custom distillation unit specifications.

Here is an example of calling the Resource Estimator with custom distillation unit specifications:

```
from azure.quantum import Workspace
from azure.quantum.target.microsoft import MicrosoftEstimator
from azure.quantum.target.microsoft.target import DistillationUnitSpecification, ProtocolSpecificDistillationUnitSpecification

# Enter your Azure Quantum workspace details here
workspace = Workspace(
    resource_id="",
    location=""
)
estimator = MicrosoftEstimator(workspace)
params = estimator.make_params()

specification1 = DistillationUnitSpecification()
specification1.display_name = "28-2"
specification1.num_input_ts = 28
specification1.num_output_ts = 2
specification1.output_error_rate_formula = "35.0 * inputErrorRate ^ 3 + 7.1 * cliffordErrorRate"
specification1.failure_probability_formula = "15.0 * inputErrorRate + 356.0 * cliffordErrorRate"

physical_qubit_specification = ProtocolSpecificDistillationUnitSpecification()
physical_qubit_specification.num_unit_qubits = 12
physical_qubit_specification.duration_in_qubit_cycle_time = 65
specification1.physical_qubit_specification = physical_qubit_specification

logical_qubit_specification = ProtocolSpecificDistillationUnitSpecification()
logical_qubit_specification.num_unit_qubits = 20
logical_qubit_specification.duration_in_qubit_cycle_time = 37
specification1.logical_qubit_specification = logical_qubit_specification

specification2 = DistillationUnitSpecification()
specification2.name = "15-1 RM"

specification3 = DistillationUnitSpecification()
specification3.name = "15-1 space-efficient"

params.distillation_unit_specifications = [specification1, specification2, specification3]

# test quantum program
from qiskit import QuantumCircuit
circ = QuantumCircuit(3)
circ.crx(0.2, 0, 1)
circ.ccx(0, 1, 2)

job = estimator.submit(circ, input_params=params)
job.wait_until_completed()
result = job.get_results()
print(result)
```

For further information, please refer to the Distillation Units section in the Resource Estimator documentation.

Quantum algorithms inherently embrace a probabilistic aspect, and the execution process is susceptible to errors. Typically, an algorithm can be run multiple times to reveal its probabilistic behavior and mitigate errors. When sending an algorithm for quantum computer execution, it’s crucial to determine how many runs are required to achieve a desired confidence level. Consequently, a probability of error in the final computations is defined, termed the error budget. Seeking a higher probability of success can involve employing the error correction methods elucidated earlier. However, it is important to note that this pursuit of higher success probability comes at an elevated cost, involving more qubits and a longer runtime.

Errors can be grouped into three categories by where they occur:

- While distilling magic states
- While performing rotations
- While executing the algorithm

As an initial approximation, we might distribute the error budget evenly among these three sources. Yet, certain algorithms could demand varying amounts of rotations or magic states. Hence, there could be a preference to readjust the error budget to align with the intricacies of the algorithm in question.

If using the Microsoft Quantum Development Kit, one can submit the following job with different error budget options:

```
from azure.quantum import Workspace
from azure.quantum.target.microsoft import MicrosoftEstimator, ErrorBudgetPartition

# Enter your Azure Quantum workspace details here
workspace = Workspace(
    resource_id="",
    location=""
)
estimator = MicrosoftEstimator(workspace)
params = estimator.make_params()

# make an estimate with a specific error budget
# for each of the three error sources:
params.error_budget = ErrorBudgetPartition(logical=0.001, t_states=0.002, rotations=0.003)

# for a uniformly distributed error budget,
# you can specify just the total as a single number:
# params.error_budget = 0.001

# test quantum program
from qiskit import QuantumCircuit
circ = QuantumCircuit(3)
circ.crx(0.2, 0, 1)
circ.ccx(0, 1, 2)

job = estimator.submit(circ, input_params=params)
job.wait_until_completed()
result = job.get_results()
print(result)
```

Read more about error budgets in the documentation.

The Azure Quantum team is dedicated to ongoing enhancements of the Resource Estimator. This valuable tool serves both our internal teams and external researchers in the endeavor to design quantum computers. Expanding the scope of modeling capabilities remains a key priority for us. We eagerly welcome your feedback on the specific custom options you require for estimating your quantum computer resources. Your insights will greatly contribute to refining our tool and making it even more effective for the quantum community.

There are many ways to learn more:

- Visit our technical documentation for more information on Resource Estimation, including detailed steps to get you started.
- Log in to the Azure Portal, visit your Azure Quantum workspace, and try an advanced sample on topics such as factoring and quantum chemistry.
- Dive deeper into our research on Resource Estimation at arXiv.org.


The post Mentoring capstone projects at the University of Washington appeared first on Q# Blog.

This spring we had the opportunity to mentor two student teams as part of the University of Washington’s NSF Research Traineeship program Accelerating Quantum-Enabled Technologies (AQET). This year-long certificate program offers graduate students training in quantum information science and engineering and includes several courses on different areas of quantum technologies followed by the culminating team project within the UW EE522: Quantum Information Practicum class. For this course the students worked on a quantum-related project under the guidance of mentors from the quantum industry—and that’s where we came in.

We worked on two projects focused on the tools necessary to implement quantum algorithms at scale. As quantum computers evolve from their current noisy intermediate scale quantum (NISQ) era to scalable quantum supercomputers, the programs that run on them will evolve as well, from simple circuits to complex programs that solve sophisticated problems. As part of this progress, we are starting to explore the practicality of implementing various algorithms to run on quantum computers and the resources required to execute them. We also look at the possibility of generating parts of these programs automatically, borrowing from our experience in classical computing. The students’ projects explored different areas of quantum software development using Microsoft’s Quantum Development Kit (QDK) and Azure Quantum Resource Estimator.

Both teams did a great job, and later this fall they will be presenting their work at IEEE Quantum Week 2023 on Wednesday, September 20. Here is a teaser of their work.

*Students: Chaman Gupta, I-Tung Chen*

*Mentors: Mathias Soeken, Mariia Mykhailova*

The goal of this project was to design a workflow that would convert a description of a classical computation into Q# code that implements it as a quantum computation. For example, the following Q# code describing a classical computation

```qsharp
internal function Multiplication2(a : Int, b : Int) : Int {
    return a * b;
}
```

would be automatically converted into a quantum circuit that can be used like an operation implemented by a Q# library operation such as MultiplyI.

The QDK samples already had an example of doing this for Boolean function evaluation, so this project targeted integers and arithmetic functions that work with integers: addition, multiplication, and modulo operations.

The image below shows the workflow used in the project.

*Automated oracle synthesis workflow as implemented in the project*

The steps in the workflow are as follows.

- The classical computation that needs to be converted into quantum is defined as a Q# function using the Int data type and built-in arithmetic for it, as shown in the above code snippet.
- The Q# compiler converts this function definition into equivalent Quantum Intermediate Representation (QIR) code.
- The automatic synthesis program reads the QIR code and converts it into an XAG (XOR-AND-Inverter graph) representation.
- This representation is optimized using the Mockturtle library, a C++ library for logic network manipulation. Up to this point, the representation remains entirely classical.
- Finally, the automatic synthesis program converts the optimized XAG representation into a corresponding sequence of quantum logic gates in QIR that implements the original classical computation.
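The last two steps can be illustrated with a toy sketch (not the project’s actual implementation; all names here are hypothetical). An XAG represents a classical function as a graph of XOR and AND nodes, and the standard correspondence maps each XOR to a CNOT gate and each AND to a Toffoli gate, the operation that dominates the T-gate cost:

```python
# Hypothetical sketch: map an XAG (XOR-AND graph) node list to quantum gate
# counts. Each node is ("XOR", a, b) or ("AND", a, b), where a and b refer to
# primary inputs or earlier nodes. XORs map to CNOTs (cheap); ANDs map to
# Toffolis (expensive in T gates) and each consumes an ancilla qubit.

def xag_to_gate_counts(nodes):
    counts = {"CNOT": 0, "Toffoli": 0}
    for kind, a, b in nodes:
        if kind == "XOR":
            counts["CNOT"] += 1      # x ^= y is a single CNOT
        elif kind == "AND":
            counts["Toffoli"] += 1   # z ^= x & y needs a Toffoli
        else:
            raise ValueError(f"unknown node kind: {kind}")
    return counts

# A 1-bit full adder in one common XAG form: 3 XOR nodes and 2 AND nodes.
adder_xag = [
    ("XOR", "a", "b"),      # n0 = a ^ b
    ("XOR", "n0", "cin"),   # sum = n0 ^ cin
    ("AND", "a", "b"),      # n2 = a & b
    ("AND", "n0", "cin"),   # n3 = n0 & cin
    ("XOR", "n2", "n3"),    # carry = n2 ^ n3
]
print(xag_to_gate_counts(adder_xag))  # → {'CNOT': 3, 'Toffoli': 2}
```

Because only the AND nodes require Toffolis, the XAG optimization in the previous step directly reduces the T-gate cost of the synthesized circuit.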

The generated quantum code can be executed via any tool that accepts QIR programs as input. This project used QIR Runner to run the simulation of the generated code for small inputs, and Azure Quantum Resource Estimator to estimate the resources required to run the code for larger inputs. Code samples and more technical details of this work can be found here.

When compared with handcrafted Q# library operations implementing similar quantum computations, the automatically generated code was faster but required more qubits to run. This shows that automatic generation of quantum code is a promising avenue for producing reliable and performant code efficiently. The next steps in this direction would be exploring ways to optimize the generated code even further and adding support for more arithmetic types and operations, such as floating-point arithmetic.

*Students: Ethan Hansen, Sanskriti Joshi, Hannah Rarick*

*Mentors: Wim van Dam, Mariia Mykhailova*

In this project, the students explored quantum multiplication algorithms and compared their efficiency in terms of runtime, qubit numbers, and T-gates required to run them on large inputs.

The project built on prior work by Gidney, comparing three quantum implementations of algorithms for multiplying *n*-bit integers:

- **“Schoolbook” multiplication** is the standard approach of multiplying the multiplicand by each bit of the multiplier and then adding together the results with the proper shifts in position. For n-bit integers this algorithm has complexity proportional to n^2.
- **Karatsuba multiplication** splits the inputs u and v into halves u = a + 2^h·b and v = x + 2^h·y and computes their product u·v = (a + 2^h·b)(x + 2^h·y) as a·x + 2^{2h}(b·y) + 2^h[(a + b)·(x + y) − a·x − b·y], thus reducing one multiplication of two n-bit integers to three multiplications of two (n/2)-bit integers. The asymptotic time complexity of this algorithm is proportional to n^{1.58}.
- **Windowed multiplication** is applicable when a quantum integer is to be multiplied by a classical constant; it utilizes classically precomputed lookup tables to merge parts of the operations.
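To make the three-multiplication recursion in Karatsuba’s algorithm concrete, here is a plain classical Python sketch (the quantum implementations studied in the project are far more involved, since every step must be made reversible):

```python
def karatsuba(u, v):
    """Multiply non-negative integers via Karatsuba's three-multiplication recursion."""
    if u < 16 or v < 16:                  # small base case: use built-in multiplication
        return u * v
    h = max(u.bit_length(), v.bit_length()) // 2
    a, b = u & ((1 << h) - 1), u >> h     # split u = a + 2^h * b
    x, y = v & ((1 << h) - 1), v >> h     # split v = x + 2^h * y
    ax = karatsuba(a, x)
    by = karatsuba(b, y)
    mid = karatsuba(a + b, x + y) - ax - by   # (a+b)(x+y) - ax - by = a*y + b*x
    return ax + (mid << h) + (by << (2 * h))  # ax + 2^h*(ay+bx) + 2^(2h)*by

assert karatsuba(12345678901234567890, 98765432109876543210) == \
       12345678901234567890 * 98765432109876543210
```

The key saving is that the middle term a·y + b·x is recovered from a single extra multiplication, (a + b)·(x + y), rather than two, which is what brings the operation count down from n^2 to roughly n^{1.58}.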

Classical multiplication is an arithmetic operation that is often taken for granted when discussing algorithms. However, when implemented on a quantum computer, it incurs quite a lot of overhead in terms of both additional qubits and extra operations performed to implement the computation as a unitary transformation that preserves coherence.

Using Azure Quantum Resource Estimator, the team calculated how the required resources depend on the multiplication algorithm used and on the size of the integers. The estimates showed that windowed multiplication uses slightly more qubits than the schoolbook algorithm but runs faster. Karatsuba’s algorithm uses more qubits and has longer runtimes than the schoolbook algorithm for inputs up to several thousand bits long; eventually, for large enough input sizes, its runtime catches up with that of the schoolbook algorithm. Put together, these results refine our understanding of the performance of these three algorithms and of which algorithm should be preferred in which setting.
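The qualitative crossover can be seen even in a back-of-the-envelope cost model that counts bit-level multiplications: schoolbook uses n^2 of them, while Karatsuba trades one n-bit multiplication for three (n/2)-bit ones plus extra additions and shifts per recursion level. The constants below are invented purely for illustration; the actual numbers in the project come from the Resource Estimator:

```python
# Toy cost model (illustrative only): schoolbook multiplication of n-bit
# integers costs n^2; Karatsuba costs three half-size multiplications plus a
# hypothetical linear overhead per level for the additions and shifts, which
# are not free when implemented reversibly on a quantum computer.

def schoolbook_cost(n):
    return n * n

def karatsuba_cost(n, overhead=8):
    if n <= 4:                       # small sizes: fall back to schoolbook
        return n * n
    return 3 * karatsuba_cost(n // 2, overhead) + overhead * n

# Karatsuba loses for small n because of the overhead, then wins for large n.
for n in [64, 256, 1024, 4096]:
    print(n, schoolbook_cost(n), karatsuba_cost(n))
```

In this toy model the crossover sits at a few hundred bits; in the students’ estimates, with realistic quantum overheads, it only appears at several thousand bits.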

This project provides a framework for applying similar resource estimation techniques to comparing different implementations of other quantum arithmetic primitives, such as floating-point functions, and other quantum subroutines.

The students found their projects interesting and enjoyable. They mentioned that the work on capstone projects helped them discover and learn important topics in quantum computing, such as oracle synthesis and resource estimation of algorithms, and broaden their understanding of the current state of the field. The students got valuable insights into their projects using Azure Quantum Resource Estimator, and their feedback helped us improve the tool for future users. It has been a pleasure to mentor these teams, and we are looking forward to next year’s capstone projects!

- Are you planning to attend IEEE Quantum Week? Then check out the presentations of these projects at the 3rd International Workshop on Quantum Software Engineering and Technology on Wednesday, September 20th, and learn more about resource estimation for quantum algorithms at the Quantum Resource Estimation workshop on Thursday, September 21st.
- Check out the existing code sample for automatic oracle generation and the students’ work that adds support for integers and integer operations.
- Learn more about quantum resource estimation, Azure Quantum’s Resource Estimator, and the technical background behind this tool.
