I entered university in the fall of 1984 fully expecting to pursue a degree in Computer Science. This was, as many of my elders would point out back then, “the future,” and I figured it would launch me into a brilliant career as a software developer. Now, the unusual thing about my university was that it had a Faculty of Mathematics, with Computer Science as one department within that faculty. (We were told that there was at the time only one other university in the world with a faculty of mathematics—in Moscow.) This meant that in order to study CS, I had to earn a Bachelor of Mathematics degree.

This particular faculty was divided into five departments—Pure Mathematics (Algebra and Number Theory), Applied Mathematics (Functions and Calculus), Statistics, CS, and Combinatorics & Optimization. That last one, C&O, was a bit weird and mysterious—enumeration theory, optimization, plus a mixed bag of disciplines that didn’t really fit anywhere else, such as graph theory, game theory and cryptography. For some reason, C&O’s mandatory second-year course captured my imagination. I decided to switch to a joint major in C&O and CS, which let me take a subset of the required courses from both departments.

Almost three and a half decades after graduating, and well into a career that ended up having very little to do with software development, I find myself dusting off those old C&O textbooks and lecture notes (yes, I still have them). The emerging field of quantum computing is having a profound impact on cryptography, while optimization is turning out to be one of the early beneficial use cases for quantum programming techniques. Finally, I get to put my academic studies to practical use!

**The crux**

So, what’s optimization all about and why is it so important? Well, Wikipedia says it’s “the selection of a best element, with regard to some criterion, from some set of available alternatives.” Huh. OK, Wikipedia goes on to explain, “optimization includes finding ‘best available’ values of some objective function given a defined domain including a variety of different types of objective functions and different types of domains.” Hmm. Maybe somebody should edit that Wikipedia article for clarity. Now I know why my kids’ eyes glazed over when I told them what I studied back in ancient times.

All right, let’s dispense with the mathematics and look at a couple of practical examples.

One of the most famous optimization problems is the story of the travelling salesperson. You have a salesperson in their home city, who must travel to some number of other cities to make sales calls and finish the trip at a certain destination. The cities are connected by roads, and you know the cost of travelling from one city to another in terms of time spent on the road, fuel price, etc. Can you calculate the route the salesperson should take to visit a specific number of cities and arrive at their destination for the lowest cost? If it’s one salesperson, and a handful of cities, probably yes. What if instead of one salesperson you had many parcel delivery drivers, and instead of a few cities, dozens of warehouses and thousands of destinations across an entire continent? Now, you have a pretty complicated optimization problem.
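For a small instance, you really can just try every route. Here is a minimal brute-force sketch in Python, with five hypothetical cities and made-up travel costs (the names and numbers are invented purely for illustration):

```python
from itertools import permutations

# Five hypothetical cities and made-up, symmetric travel costs.
cities = ["A", "B", "C", "D", "E"]
cost = {
    ("A", "B"): 4, ("A", "C"): 7, ("A", "D"): 3, ("A", "E"): 8,
    ("B", "C"): 2, ("B", "D"): 5, ("B", "E"): 6,
    ("C", "D"): 4, ("C", "E"): 3,
    ("D", "E"): 5,
}

def leg(a, b):
    """Cost of travelling between two cities, in either direction."""
    return cost[(a, b)] if (a, b) in cost else cost[(b, a)]

def route_cost(route):
    return sum(leg(a, b) for a, b in zip(route, route[1:]))

# The salesperson starts and ends at home city "A"; brute-force every
# ordering of the remaining stops and keep the cheapest round trip.
best = min(
    (("A",) + middle + ("A",) for middle in permutations(cities[1:])),
    key=route_cost,
)
print(best, route_cost(best))  # → ('A', 'B', 'C', 'E', 'D', 'A') 17
```

With five cities there are only 24 orderings to check. The trouble is that this count grows factorially with the number of stops, which is exactly why the continental parcel-delivery version of the problem is so hard.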

Let’s try another one. Say you’re a financial advisor and you’re looking after your client’s portfolio. You have a variety of investments—stocks, bonds, etc.—each of which has a potential promise of return and a potential risk of loss. Each also has a cost to acquire the investment. Your client has a fixed amount of money to invest and has told you their risk profile—how much are they willing to lose, and what probability of gains they’re comfortable with. How do you buy investments for your client that will maximize their return while minimizing their risk of loss? Maybe if it’s a small portfolio and an easygoing client you can do it. What if the portfolio is hundreds of millions of dollars, and you add derivatives and other complex financial instruments, and you have thousands of clients with different risk profiles? Suddenly the optimum solution isn’t so obvious.
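The same brute-force idea works for a toy portfolio. A hedged sketch in Python, with invented assets, costs, returns and risk scores (a real advisor would use probabilistic models of risk, not a single integer score):

```python
from itertools import combinations

# Hypothetical toy portfolio: (name, cost, expected return, risk score).
assets = [
    ("stock_a", 40, 6.0, 5),
    ("stock_b", 30, 4.5, 4),
    ("bond_a",  20, 1.5, 1),
    ("bond_b",  25, 2.0, 1),
    ("fund_a",  35, 3.5, 3),
]
budget, risk_limit = 80, 7  # the client's cash and risk tolerance

def total(combo, field):
    """Sum one numeric field (cost, return, or risk) over a combination."""
    return sum(asset[field] for asset in combo)

# Brute force: try every subset of assets that fits the budget and the
# client's risk profile, and keep the one with the highest expected return.
feasible = (
    combo
    for r in range(len(assets) + 1)
    for combo in combinations(assets, r)
    if total(combo, 1) <= budget and total(combo, 3) <= risk_limit
)
best = max(feasible, key=lambda combo: total(combo, 2))
print([asset[0] for asset in best])  # → ['stock_a', 'bond_b']
```

Five assets means only 32 subsets to test. At thousands of instruments the number of subsets is 2 raised to the thousands, and exhaustive search is hopeless, which is the point of the story.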

So, that’s optimization—finding the best solution that minimizes the cost (or maximizes the value) you can achieve given a large number of variables and constraints. Logistics and finance, like the examples above, are two obvious use cases but optimization problems appear pretty much everywhere—whenever an organization must allocate limited resources among competing interests. And, it turns out, it’s not an exact science. Commonplace optimization problems like these are often only solved approximately. Why is that? I won’t go into all the math behind it, but the clue is in that obscure Wikipedia reference above: finding the *best available* solution.

**The craft**

The equations mathematicians use to solve many optimization problems often have multiple possible solutions, some better than others. These are called *local minima*, and the best possible solution is called the *global minimum*. As an illustration, think of a very large rubber sheet, stretched out. At various locations on that sheet, you place marbles of different weights. Each marble creates an impression in the rubber sheet, the depth of the impression varying with the marble’s weight. Now imagine an ant crawling over the sheet, trying to find the heaviest marble—the one with the deepest impression, the global minimum. Along the way it will find marbles of various weights, each sitting in its own local minimum, and it might stumble upon the global minimum—but it might not.

Our ant on the rubber sheet is actually a computer program trying to solve an equation for an optimization problem. In a finite amount of time, it can find a best available solution, but it isn’t guaranteed to find the absolute best. With more time and more processing power it can do better, but eventually the complexity of these problems bumps up against the practical limits of Moore’s Law, and we can’t always be sure we have the best solution. For a sense of the scale: if our travelling salesperson must visit just 15 cities exactly once each, there are already more than 87 billion possible routes to consider. It’s been estimated that solving for 15,000 towns could take up to two decades on a classical computer.
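That 87-billion figure is easy to check: with the start fixed at the home city, the remaining 14 stops can be visited in any order, and the number of orderings of 14 things is 14 factorial.

```python
from math import factorial

# 15 cities with the start fixed: the other 14 stops can be
# visited in any order, giving 14! candidate routes.
routes = factorial(14)
print(f"{routes:,}")  # → 87,178,291,200
```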

Mathematicians and computer scientists can devote whole careers to tweaking their algorithms, trying to do slightly better optimizations. A story making the rounds in my university was of a professor who had been hired by a municipality to optimize its garbage collection routes. He worked hard on the problem and came up with a solution, only to find that the city’s garbage truck drivers did better than his model predicted. He went back to his calculations, tried again, and again the truck drivers did better. Finally, he decided to ride along one night, and found that in the early morning hours without traffic, the truck drivers were making short cuts by backing up one-way streets—a creative, if not entirely legitimate, improvement on the mathematics.

**The number crunch**

Optimization problems turn out to be exactly the kind of difficult math that can be handled much more easily by quantum computers. Large quantum computers, when they become available, may well speed up our algorithms and enable mathematicians to find much better optimization solutions—although the required hardware is still quite a few years off and a lot of work still needs to be done to solve the problems of noise and error correction. If you don’t want to wait, and you are able to get access, smaller-scale quantum computers of a few hundred qubits are available today that can still provide incremental improvements over classical methods. And failing that, quantum annealing and quantum-inspired optimization are services available via cloud delivery that can also deliver good intermediate solutions.

*Annealing* is defined in physics as the process of heating a material such as a metal or glass and then allowing it to cool slowly. This process removes internal stresses from the material, reducing its brittleness and increasing its flexibility. *Quantum annealing* adapts this process to the mathematics of qubits in a quantum computer. A quantum annealer builds a mathematical model of your optimization problem in a system of qubits. Then, by gradually tweaking parameters and variables within the model, the system works its way toward better and better local minima, increasing its odds of reaching the global minimum. Think back to the ant on the rubber sheet: the advantage of an annealer lies in its ability to use quantum effects to reduce the number of times the ant has to travel uphill out of an impression—and sometimes to tunnel straight through the sheet from one impression to another, better one.
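Real quantum annealing needs specialized hardware, but its classical ancestor, *simulated annealing*, shows the escape-from-local-minima idea in a few lines. A toy sketch in Python, using an invented one-dimensional "rubber sheet" (the function and all the parameters are made up for illustration):

```python
import math
import random

# A bumpy 1-D "rubber sheet": a gentle bowl with ripples on top, so there
# are several local minima and one global minimum (near x = 1.58).
def energy(x):
    return 0.1 * (x - 2) ** 2 + math.sin(3 * x)

def anneal(seed=0, start_temp=5.0, cooling=0.999, floor=1e-3):
    random.seed(seed)
    x = random.uniform(-10, 10)
    best = x
    t = start_temp
    while t > floor:
        candidate = x + random.uniform(-1, 1)
        delta = energy(candidate) - energy(x)
        # Downhill moves are always accepted; uphill moves are accepted
        # with a probability that shrinks as the system cools. Early on,
        # this is what lets the "ant" climb out of shallow impressions.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
            if energy(x) < energy(best):
                best = x
        t *= cooling
    return best

print(round(anneal(), 3))
```

The walker wanders widely while the temperature is high, settles into an impression as it cools, and the best point it ever visited is kept as the answer. A quantum annealer adds tunnelling on top of this picture, letting the system pass through barriers instead of always climbing over them.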

Quantum-inspired optimization is similar, but instead of using quantum hardware, the qubits are simulated in software. The same principle applies: build a mathematical model of the problem in software that mimics qubits, then let the system evolve by tweaking its parameters until you have your desired solution.

Note that the key phrase is *desired solution*, or, as discussed above, *best available solution*. Small-scale quantum, quantum-inspired and quantum annealing systems still aren’t guaranteed to provide the best possible solution. But they can, under the right circumstances, do better than classical techniques. They can find the same quality of solution faster than a classical computer, or find a better solution given the same amount of time to run. Or, they could handle more complex versions of the problem using more variables, again providing a better or more realistic solution than classical methods.

Real-world problems have already been solved using quantum annealing and quantum-inspired optimization techniques, yielding improvements of 10 to 15 per cent, and sometimes more, over other methods. A telecommunications company has optimized placement of cellular towers over a specific geography. Several financial institutions have optimized portfolios to minimize value at risk. A parcel delivery firm has optimized its driver routes. These are just the beginning; many more use cases will follow.

Quantum computing has the potential to optimize the whole field of optimization. And the best part is, we can start now.