# Q# Compiler Optimizations


*This post was written by Rory Soiffer, an intern with Microsoft’s Quantum group during the summer of 2019.*

*See the Q# Advent Calendar 2019 for more cool posts!*

## Quantum Optimizations

When programming a quantum computer today, there’s one thing you always have to remember. The first and last fact about quantum computers is that

#### quantum computers are small.

Well, they’re physically large, at least the freezers are, but they have a small number of qubits. Any code we execute on a quantum computer today must fit within a tiny number of qubits and can use no more than a few thousand operations before things start to decohere. It’s important to optimize our algorithms down to the last gate.

Optimizing at the level of a single gate is painfully difficult. For a large, high-level program like those Q# is built for, it’s almost impossible. It’s like trying to write Python code designed to minimize the number of assembly instructions the computer will execute – it just isn’t possible to do by hand. Thankfully, researchers have written a number of different quantum circuit optimization algorithms that do this for us. These algorithms are much better than humans at optimizing complex circuits, often reducing the number of gates by half or more.

Unfortunately, these circuit optimization algorithms run on, well, quantum circuits. That doesn’t sound so bad – after all it’s in the name, right? That’s true, quantum circuits are fairly common, and are a good class of programs to write. But not every quantum algorithm can be expressed as a circuit.

## Complex Quantum Algorithms

Consider the following code snippet:

```qsharp
using (anc = Qubit()) {
    repeat {
        H(anc);
        T(anc);
        CNOT(target, anc);
        H(anc);
        Adjoint T(anc);
        H(anc);
        T(anc);
        H(anc);
        CNOT(target, anc);
        T(anc);
        Z(target);
        H(anc);
        let result = M(anc);
    } until (result == Zero);
}
```

This implements the rotation gate V = (I + 2iZ) / sqrt(5). The algorithm implements the gate exactly, but it isn’t deterministic: each iteration succeeds only with some probability, and the loop terminates after 8/5 repetitions in expectation. This doesn’t fit the standard model of a quantum circuit – there’s classical control flow, the algorithm isn’t guaranteed to terminate, and we don’t know ahead of time which gates will run. But we still want to optimize it: even if the code snippet above is optimal on its own, once we surround it with other quantum gates on the same qubits, we might be able to find ways to improve it.
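The 8/5 figure follows from a per-round success probability of 5/8: the repetition count of a repeat-until-success loop is geometrically distributed, so its expectation is the reciprocal of the success probability. A quick sanity check in Python (illustrative only, not part of the original implementation):

```python
# Each round of a repeat-until-success loop succeeds independently with
# probability p, so the repetition count is geometric with mean 1/p.
def expected_repetitions(p, max_rounds=10_000):
    # Sum k * P(first success at round k) = k * p * (1 - p)**(k - 1).
    return sum(k * p * (1 - p) ** (k - 1) for k in range(1, max_rounds + 1))

# For the V = (I + 2iZ)/sqrt(5) circuit the per-round success probability
# is 5/8, which reproduces the 8/5 expected repetitions quoted above.
print(expected_repetitions(5 / 8))  # ≈ 1.6
```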

But even beyond that, we can imagine programs that are hugely more complex than the sample code above. If you have hundreds of functions calling one another, each allocating different numbers of qubits, how can you be sure the computer will use an optimal qubit layout? If you’re running a complex machine learning optimizer, where the next gate to apply depends on the results of a hundred prior measurements, how do you optimize gates across the many different branches of the program?

The solution is to extend the optimization algorithms we have to work on arbitrary quantum algorithms, not just pure quantum circuits. We could do this by just searching the program for sub-algorithms that look like circuits and optimizing them individually, but this won’t get us very far. We need an algorithm that understands the structure of the Q# program as a whole. We have to analyze execution paths, figure out the context of quantum functions, and optimize the classical and quantum sides of the code at the same time.
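To make the baseline concrete, here is a rough Python sketch of the “find circuit-like sub-algorithms and optimize them individually” approach. The statement and gate representations are invented for illustration; they are not the compiler’s actual data structures:

```python
# Treat a program as a flat list of statements, and hand every maximal run
# of pure gate statements to a circuit optimizer; classical statements
# (measurements, control flow) act as barriers between runs.
def optimize_straight_line_runs(program, optimize_circuit):
    result, run = [], []
    for stmt in program:
        if stmt[0] == "gate":            # pure quantum gate: extend the run
            run.append(stmt)
        else:                            # classical statement: flush the run
            result.extend(optimize_circuit(run))
            run = []
            result.append(stmt)
    result.extend(optimize_circuit(run))
    return result

# Toy "optimizer": cancel adjacent identical gates. (Only valid for
# self-inverse gates like H, X, and CNOT.)
def cancel_pairs(circuit):
    out = []
    for g in circuit:
        if out and out[-1] == g:
            out.pop()
        else:
            out.append(g)
    return out

program = [("gate", "H", 0), ("gate", "H", 0), ("measure", 0), ("gate", "X", 0)]
print(optimize_straight_line_runs(program, cancel_pairs))
# → [("measure", 0), ("gate", "X", 0)]
```

Note how the measurement blocks any cancellation across it – exactly the limitation that makes a whole-program analysis worthwhile.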

## Q# Optimizations

My job over the summer of 2019 was to build the framework for these optimizations. I implemented some basic classical and quantum optimizations (constant propagation, function inlining, adjoint gate cancelling, etc.), but more importantly, I set up a structure that people can use in the future to implement further optimizations. See here for the actual code.

Each optimization builds off of all of the others. Constant propagation improves function inlining – once you inline a function with constant arguments, you can propagate those arguments into the body. Removing unused variables helps the quantum circuit optimizer, as eliminating an unused qubit can decrease the optimizer’s runtime. Loop unrolling can allow a circuit optimizer to join two iterations of the loop together to cancel more gates. With these basic optimizations implemented and working together, we can now move on to the more complex tasks of optimizing entire execution paths or distributing error among the rotation gates.
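The inlining-plus-constant-propagation interplay can be sketched on a toy expression AST. This is purely illustrative Python, not the actual F# compiler code, and the tuple-based AST is an invented stand-in:

```python
# Inlining: substitute argument expressions for parameters in a body.
def inline_call(body, params, args):
    if isinstance(body, str):                        # a variable reference
        return args[params.index(body)] if body in params else body
    if isinstance(body, tuple):                      # e.g. ("+", lhs, rhs)
        op, lhs, rhs = body
        return (op, inline_call(lhs, params, args), inline_call(rhs, params, args))
    return body                                      # a literal

# Constant propagation: evaluate operators whose operands are literals.
def fold_constants(expr):
    if isinstance(expr, tuple):
        op, lhs, rhs = expr
        lhs, rhs = fold_constants(lhs), fold_constants(rhs)
        if op == "+" and isinstance(lhs, int) and isinstance(rhs, int):
            return lhs + rhs
        return (op, lhs, rhs)
    return expr

# double(x) = x + x, called as double(3): inlining alone gives ("+", 3, 3);
# running constant folding afterwards reduces it all the way to 6.
body, params = ("+", "x", "x"), ["x"]
print(fold_constants(inline_call(body, params, [3])))  # → 6
```

Neither pass alone gets to the literal 6 – it’s the combination that pays off, which is the point of the paragraph above.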

## Implementation

There are a few interesting things I discovered from writing all this code. First, Q#’s AST (Abstract Syntax Tree) wasn’t quite designed for this task – it carries a lot of extra data meant for checking that the code’s syntax and types are correct, and for reporting errors in useful ways, which makes creating or modifying nodes much harder when you don’t need that information. At the same time, the AST has to stay as simple as possible, since it’s the data structure sent to the runtime environment to simulate the quantum program. This AST might eventually be split in two – one for internal use, with customizable nodes and extra room for data, and one to send to the simulator.

Second, the Transformation design pattern in the Q# compiler is great – it makes it really easy to traverse the AST, modifying exactly the nodes you care about. The runtime performance could use a bit of work, but it’s fast enough to be useful on everything but the largest programs.

Third, and most interesting personally, was about F#’s computation expressions. These language constructs allow the programmer to define custom control flow and behavior, eliminating a ton of repetitive boilerplate code around common patterns. They’re great for expressing sequences (the *seq* expression), asynchronous workflows (the *async* expression), and more. Computation expressions are based on monads (like in Haskell), and are as complicated as they are useful.

I implemented a couple of useful computation expressions – a simple *maybe* expression for easily chaining code with *Option* types, and a complex *imperative* expression for simulating imperative code in a completely functional way. To do this, the *imperative* expression has to handle classical state, control flow, runtime exceptions, and more, all in a purely functional context. I use the *imperative* expression to symbolically evaluate Q# functions inside of the compiler, letting me find the result of a function call at compile time instead of at runtime.
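For readers who haven’t met F#’s *maybe* style, here is a rough Python analogue of the idea (a sketch of the pattern, not the F# builder itself): chain steps that may each fail, short-circuiting on the first `None`.

```python
# Analogue of a `maybe` computation expression: thread a value through a
# sequence of steps, bailing out with None as soon as any step fails.
def maybe_chain(value, *steps):
    for step in steps:
        if value is None:
            return None
        value = step(value)
    return value

# Two fallible steps: a dictionary lookup and an integer parse.
def get(key):
    return lambda d: d.get(key)

def parse_int(s):
    return int(s) if s.isdigit() else None

lookup = {"n": "42"}
print(maybe_chain(lookup, get("n"), parse_int))  # → 42
print(maybe_chain(lookup, get("m"), parse_int))  # → None
```

The computation-expression version lets you write the same short-circuiting logic as ordinary-looking sequential code, with the builder inserting the `None` checks between every step for you.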

## Future Steps

Now that the framework for compiler optimizations is in place, what do we do? The next big step is to integrate this framework with existing quantum circuit optimizers. This is a large challenge – Q#’s AST and the input format for most circuit optimizers look absolutely nothing alike. I implemented a way to bridge this gap, in the form of a simple intermediate format that we can convert Q#’s AST to and from. The intermediate format represents pure circuits without any classical control flow, making it much easier to plug into a circuit optimizer. This format is currently used for a proof-of-concept circuit optimizer that just cancels adjacent adjoint gates. The code that interfaces between this intermediate format and a real-world circuit optimizer still needs to be written.
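The proof-of-concept pass is simple enough to sketch. Assuming the intermediate format is just a flat gate list (the representation below is invented for illustration), a gate followed immediately by its adjoint on the same qubits reduces to the identity:

```python
# Cancel adjacent adjoint pairs in a flat, control-flow-free gate list.
# Each gate is (name, is_adjoint, qubits). Using a stack means cascading
# cancellations are caught too: [H, T, Adjoint T, Adjoint H] collapses to [].
def cancel_adjoint_pairs(circuit):
    out = []
    for name, is_adj, qubits in circuit:
        prev = out[-1] if out else None
        if prev and prev[0] == name and prev[2] == qubits and prev[1] != is_adj:
            out.pop()                    # G followed by Adjoint G: drop both
        else:
            out.append((name, is_adj, qubits))
    return out

circuit = [("T", False, (0,)), ("T", True, (0,)), ("H", False, (0,))]
print(cancel_adjoint_pairs(circuit))     # → [("H", False, (0,))]
```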

Another potential path is optimizing quantum circuits during the execution of the program. Just as constant propagation enables other optimizations that only work when given concrete values, actually running the program lets us directly compute these values, which enables those optimizations in even more cases. If we could hook the compiler optimizations into a runtime framework, we could take advantage of this to do JIT compilation.
