I’ve had an interest in cooking for some years now, partly because my work schedule usually offers me the flexibility to be home to prepare most of our weekday family dinners. A couple of my good friends have been doing the same, and until recently our modest abilities in the kitchen were entirely self-taught. This winter the three of us decided to up our game and enrolled in a continuing education course, Culinary Arts 101, at a local college in Toronto. Our professor, an affable French-educated German chef, has given us many tips and techniques for using our utensils properly and cooking more efficiently while producing popular, flavourful dishes. Now that I’ve completed the course, my family has given me rave reviews on almost every meal I’ve brought home.
All that attention to utensils, efficiency and recipes also got me thinking about parallels to software development. After all, programmers have to work efficiently with the tools at their disposal, and what is a recipe if not a kind of algorithm? Precision is important in both disciplines, creativity is often rewarded, and you can work at different levels of abstraction depending on the result you’re trying to achieve. So, let’s use cooking as a way of considering how software development has changed over the past decades, and where quantum software development needs to go in the coming years.
Cooking, and coding, by the line
If you’re cooking a dinner for two from scratch, you can be very detailed about your ingredients and recipes, and each dish can be a special, unique creation. But if you’re working in a restaurant, or preparing a banquet for hundreds of guests, you will have to be more abstract—working at volume without sacrificing quality and preparing components of your dishes in advance in order to mass-produce good meals for everyone. Similarly, software development in the classical (non-quantum) world has followed a path that started with coding ‘from scratch’ and has evolved to high levels of reuse and abstraction.
I’ve written before about my computer programming class in high school—this class was also my first introduction to binary notation, in which all numbers are represented as sequences of ones and zeros. Performing operations to combine binary numbers is the root of classical computing, and all computer programs today must eventually be reduced to binary digits (bits) in order to run. The magic is in what happens in between the spreadsheet, game, or smartphone app you see and the low-level bits that actually do all the work.
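To make that concrete, here is a small illustration (a hypothetical Python snippet, not something from that classroom) of what it means for an everyday number to be “just” ones and zeros:

```python
# Every value a classical computer handles is ultimately a string of bits.
number = 42
bits = format(number, "b")   # write the number in binary notation
print(bits)                  # "101010" - six ones and zeros

# Going the other way: interpret a bit string as an ordinary number.
print(int("101010", 2))      # 42
```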
When I was in university, assembler (assembly language, one small step above raw machine code) was still a mandatory part of the computer science curriculum. It was a step up from binary: we could work with simple data and logical operators (AND, OR, NOT and so on), but the programs were tied to the particular hardware we worked on and, of course, still had to be translated into binary code. It was programming ‘from scratch’, and although we learned a lot about machine-specific instructions, it was tedious to write even the simplest programs. Soon we moved on to high-level programming languages that allowed us to represent complex data structures and write programs to perform real-world tasks. These were languages like COBOL for business processing, or FORTRAN and Pascal for more scientific and mathematical applications. A high-level program is processed by a compiler, which translates the code into (you guessed it) binary instructions that can be executed on the hardware. This was our first real experience with abstraction: a program written in COBOL, FORTRAN or Pascal is independent of the hardware, and compilers exist for different kinds of machines, allowing the same program to run on different computers.
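To give a feel for that jump in abstraction, here is a hypothetical Python sketch contrasting the two styles: the bit-by-bit logic an assembler programmer would spell out to add two single bits, and the one-liner a high-level language gives you for free.

```python
def add_bits(a: int, b: int) -> tuple[int, int]:
    """Add two single bits the low-level way, using only logical operators."""
    total = a ^ b   # XOR gives the sum bit
    carry = a & b   # AND gives the carry bit
    return carry, total

print(add_bits(1, 1))  # (1, 0): binary 1 + 1 = 10

# The high-level version: the language hides the bit logic entirely.
print(1 + 1)           # 2
```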
Data farm to table
The rest of the story of the evolution of software development is really just about more and more abstraction and reusability. All our early high-level languages introduced the idea of subroutines—independent pieces of software designed to perform a specific task, which could be reused in many ways with different parameters. (One of my professors published a book called Steal this Code!—a collection of subroutines for common mathematical problems in the public domain.) In the late 1980s and 1990s, object-oriented programming became all the rage—putting data objects at the centre instead of the algorithm—and surrounding the objects with subroutines called methods that would operate on the data. This turned out to be a better way of doing abstraction and reuse. C++ was a popular object-oriented language for a while, later supplanted by Java and all its variants still in use today. Meanwhile, middleware services like databases and application servers provided additional standardization and reusability.
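Staying with the kitchen theme, here is a hypothetical Python sketch of both ideas: a subroutine that can be reused with different parameters, and a data object (a recipe) whose methods operate on the data it carries.

```python
# A subroutine: one piece of logic, reused with different parameters.
def scale(quantity: float, servings: int, base_servings: int = 2) -> float:
    return quantity * servings / base_servings

print(scale(250, 8))   # 1000.0 - scale 250 g from 2 servings up to 8

# The object-oriented view: the data (a recipe) sits at the centre,
# and methods operate on that data.
class Recipe:
    def __init__(self, name: str, ingredients: dict[str, float]):
        self.name = name
        self.ingredients = ingredients   # ingredient -> grams for 2 servings

    def scaled_for(self, servings: int) -> dict[str, float]:
        return {item: scale(grams, servings)
                for item, grams in self.ingredients.items()}

soup = Recipe("Onion soup", {"onions": 400, "stock": 750})
print(soup.scaled_for(6))   # {'onions': 1200.0, 'stock': 2250.0}
```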
When you hear now about “containerization,” “low-code” and even “no-code” development, rest assured: it’s nothing new. Abstraction and reuse of existing programming assets is really all it comes down to. Development tools have also evolved to keep pace with increased abstraction. Where programmers once wrote every line of code themselves, visual development tools are now the norm. These allow programs to be assembled graphically from pre-built components, with much of the code generated automatically. Programmers used to have to cook from scratch. Now, with reuse, abstraction and good tools, they can produce banquets with the same effort. The resulting explosion in productivity in business, engineering and science speaks for itself.
But, as I’ve written before, some problems are still too hard to solve even with the best software we have today. Of course, this is where quantum computing comes in. Qubits release us from the constraints of binary code: through superposition and entanglement, they can represent and process combinations of states rather than a single string of ones and zeros. They are ushering in a completely different approach to programming and will eventually deliver processing power we can only imagine today. But let’s be realistic for a moment. Quantum computers are still in their infancy, and a lot of hard work needs to be done before they realize this potential. Moreover, even at scale, they will only be useful for specific kinds of applications. The word processor I’m using to write this post will not be redeveloped for quantum; it won’t make my keystrokes any faster. Instead, quantum computers will be very good at computationally intensive problems: difficult problems in pure and applied mathematics, or complex simulations in scientific research and engineering. The future will be hybrid, a mix of classical and quantum, and this will apply to application development methods and tools as well. Programmers will need to bring the best of both worlds together to solve problems we don’t even know exist yet.
Back in the kitchen
Because quantum computing is still in its infancy, quantum programmers are also back at the level of cooking from scratch. A few months ago, I sat in on a half-day introductory quantum computing workshop: an exciting but at times bewildering tour of qubits, quantum gates and circuits. It will take a bit more time for me to fully grasp the nuances of this new programming model, and maybe later I will try a more technical post on the subject. For now, what I see is that gates and circuits are the quantum equivalent of the logical operators I learned in assembler programming decades ago. Better, yes, and we can do a lot more with them than we could back then, but it is effectively coding in quantum machine language, if you will. It is nice to see that visual development toolkits are available at this level, and many organizations are experimenting with gate-level code for optimization and simulation tasks, with some early commercial successes. However, it is still a tedious way to put together useful software. The quantum “killer app” would appear to be quite a way off.
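To show what that gate-and-circuit level looks like, here is a minimal sketch using the open-source Qiskit library (purely my choice for illustration, not necessarily what the workshop used): a tiny two-qubit circuit built gate by gate, which is roughly the quantum analogue of stringing together logical operators.

```python
from qiskit import QuantumCircuit

# Build a small circuit gate by gate - the quantum equivalent of
# working with individual logical operators.
qc = QuantumCircuit(2, 2)
qc.h(0)                      # Hadamard gate: puts qubit 0 into superposition
qc.cx(0, 1)                  # CNOT gate: entangles qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read the qubits back out as classical bits

print(qc.draw())             # text diagram of the circuit
```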
Recently, though, I saw a demonstration of new quantum software development tools that bring us up to the next level of abstraction—a high-level language that looks very similar to those I used back in the 1990s. I could immediately see the potential productivity gains. Programming looked much more intuitive, and the programs could be compiled for different kinds of quantum hardware. This step up in the evolution of quantum software development has taken far less time than its equivalent in the classical world. It’s highly encouraging, although there’s still a lot more work to be done. Quantum visual development tools do not match their classical counterparts, and code reusability is not yet where it needs to be. At least in part, this is due to the current limitations of quantum hardware. In the quantum world just as in the classical, hardware and software will have to evolve together.
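As a rough sketch of that “same program, different hardware” idea (again using Qiskit purely for illustration, not the tools I saw demonstrated), here is one small circuit compiled down to two different native gate sets, much as a classical compiler targets different processors. The basis-gate lists are just illustrative stand-ins for real devices.

```python
from qiskit import QuantumCircuit, transpile

# The same abstract circuit...
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

# ...rewritten into two different native gate sets, the way a classical
# compiler produces different machine code for different processors.
for basis in (["rz", "sx", "x", "cx"], ["rz", "sx", "x", "cz"]):
    compiled = transpile(qc, basis_gates=basis)
    print(basis, "->", compiled.count_ops())
```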
Reusable feast
As quantum hardware gets bigger and more reliable in the coming years, we will see an accelerated evolution in quantum software development. Quantum software will follow the same path of abstraction and reuse of components. Hardware will standardize, and compilers will get better at translating encoded business problems into quantum machine language. Although much of this hasn’t been invented yet, the industry can learn from, and improve on, the progress already achieved in the classical world. When that happens, we will see an explosion of revolutionary quantum-based solutions for mathematical optimization, engineering and scientific simulations, even AI and machine learning.
Before long, we will get to the point where quantum programmers cook up gourmet banquets from scratch—and we will all be able to feast on the results.